Summary
When exporting a preprocessing pipeline (export_module) such as the one described below:
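For context, export_module is assumed here to be a small scaling module along these lines; the class name ScalePreprocess, the scale factors, and the dummy input shapes below are hypothetical, as the actual pipeline is not included in this report.

import torch
import torch.nn as nn

class ScalePreprocess(nn.Module):
    # Hypothetical stand-in for export_module: rescales coordinates, normals and globals
    def __init__(self, xyz_scale=1.0, normals_scale=1.0, params_scale=1.0):
        super().__init__()
        self.xyz_scale = xyz_scale
        self.normals_scale = normals_scale
        self.params_scale = params_scale

    def forward(self, xyz, normals, params):
        # xyz / normals: (batch, num_points, 3), params: (batch, num_params)
        return (
            xyz * self.xyz_scale,
            normals * self.normals_scale,
            params * self.params_scale,
        )

export_module = ScalePreprocess()
dummy_xyz = torch.randn(1, 1024, 3)       # assumed shape
dummy_normals = torch.randn(1, 1024, 3)   # assumed shape
dummy_params = torch.randn(1, 8)          # assumed shape
output_path = "preprocess.onnx"           # assumed file name

With such a module, the export call is: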
torch.onnx.export(
    export_module,
    (dummy_xyz, dummy_normals, dummy_params),
    output_path,
    input_names=["xyz", "normals", "params"],
    output_names=["xyz_scaled", "normals_scaled", "params_scaled"],
    dynamic_axes={
        "xyz": {0: "batch_size", 1: "num_points"},
        "normals": {0: "batch_size", 1: "num_points"},
        "params": {0: "batch_size"},
        "xyz_scaled": {0: "batch_size", 1: "num_points"},
        "normals_scaled": {0: "batch_size", 1: "num_points"},
        "params_scaled": {0: "batch_size"},
    },
    opset_version=18,
    do_constant_folding=True,
)

The following TorchExportError is raised:
TorchExportError: Failed to export the model with torch.export. This is step
1/3 of exporting the model to ONNX. Next steps:
- Modify the model code for `torch.export.export` to succeed. Refer to
https://pytorch.org/docs/stable/generated/exportdb/index.html for more
information.
- Debug `torch.export.export` and submit a PR to PyTorch.
- Create an issue in the PyTorch GitHub repository against the
*torch.export* component and attach the full error stack as well as
reproduction scripts.
## Exception summary
<class 'KeyError'>: 'getpwuid(): uid not found: 501'
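The KeyError message is the one raised by Python's pwd module when the current uid has no entry in the passwd database (as can happen in containers or otherwise restricted environments). How the dynamo exporter ends up calling getpwuid is an assumption here, but the failing lookup can be reproduced directly with a small diagnostic:

import os
import pwd

uid = os.getuid()
try:
    # The same lookup that produced the KeyError quoted above
    pwd.getpwuid(uid)
except KeyError as exc:
    print(f"uid {uid} has no passwd entry: {exc}")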
Workaround
To circumvent this, dynamo=False must be passed to torch.onnx.export so that the TorchScript-based exporter is used instead, as in the sketch below.
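A minimal sketch of the workaround, assuming the dynamo keyword argument is available in the installed PyTorch version:

torch.onnx.export(
    export_module,
    (dummy_xyz, dummy_normals, dummy_params),
    output_path,
    input_names=["xyz", "normals", "params"],
    output_names=["xyz_scaled", "normals_scaled", "params_scaled"],
    dynamic_axes={
        "xyz": {0: "batch_size", 1: "num_points"},
        "normals": {0: "batch_size", 1: "num_points"},
        "params": {0: "batch_size"},
        "xyz_scaled": {0: "batch_size", 1: "num_points"},
        "normals_scaled": {0: "batch_size", 1: "num_points"},
        "params_scaled": {0: "batch_size"},
    },
    opset_version=18,
    do_constant_folding=True,
    dynamo=False,  # fall back to the TorchScript-based exporter
)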
Why is this needed?
To be able to use the default export option from PyTorch (which employs TorchDynamo instead of TorchScript) without falling back to the workaround above.
Usage example
Minimal example of a model export:

from typing import Any


def evaluate(inputs: Any) -> Any:
    import torch
    import torch.nn as nn

    class SimpleMLP(nn.Module):
        def __init__(self, in_features=4, hidden=32, out_features=1):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_features, hidden),
                nn.ReLU(),
                nn.Linear(hidden, out_features),
            )

        def forward(self, x):
            return self.net(x)

    def export_onnx(
        model: nn.Module,
        onnx_path: str = "simple_mlp.onnx",
        in_features: int = 16,
    ):
        model.eval()
        # Dummy input defines the input shape for tracing
        x = torch.randn(1, in_features)
        torch.onnx.export(
            model,
            x,
            onnx_path,
            export_params=True,
            opset_version=17,
            do_constant_folding=True,
            input_names=["input"],
            output_names=["output"],
            dynamic_axes={
                "input": {0: "batch"},
                "output": {0: "batch"},
            },
        )
        return onnx_path

    model = SimpleMLP(in_features=4, hidden=32, out_features=1)
    path = export_onnx(model, "simple_mlp.onnx", in_features=4)
    return path
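As a quick check of the exported file (not part of the original report), the model can be loaded and run with onnxruntime, assuming that package is installed:

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("simple_mlp.onnx")
batch = np.random.randn(8, 4).astype(np.float32)  # batch of 8 samples, 4 features each
(output,) = session.run(None, {"input": batch})
print(output.shape)  # expected (8, 1) thanks to the dynamic batch axis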