Issues exporting to ONNX #524
-
Hi, I am doing my final-year university project on deploying and optimizing image classification on an FPGA using FINN. I have trained my model and implemented quantization with Brevitas, and I am now at the point of exporting the model to an ONNX file. I am attempting this with `bo.export_finn_onnx(model, (1, 3, 3, 3), image_dir + "/trained_model.onnx")`, where the `model` variable is the `net()`. This raises `RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x2048 and 18432x1024)`. The error is hit in the model's `forward`, as if the model is being run again but the matrices no longer match.

I have also attempted to load the model with `my_model.load_state_dict(torch.load(load_file)['state_dict'])` and then call `bo.export_finn_onnx(my_model, (1, 3, 3, 3), image_dir + "/trained_model.onnx")`, but this produces the same error. I believe `(1, 3, 3, 3)` is correct since there are three channels, and changing it causes an earlier error.

Am I passing the model incorrectly? I have been trying to follow the FINN documentation, but it is difficult to understand how the LFC model is being passed in at that point. I would appreciate any help and can provide any extra information if this is not clear.
Replies: 1 comment
-
The issue was in fact to do with the `(1, 3, 3, 3)` shape: it should have included the image size, which is 150x150. Therefore, when changed to `(1, 150, 150, 3)`, the ONNX file was exported successfully.
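To see why the dummy-input shape matters: the export runs a forward pass with a tensor of the given shape, and the first `Linear` layer in a conv net is sized for the flattened feature map produced by the *training* image size. A minimal sketch of that arithmetic, using a hypothetical one-conv-plus-pool stack (the layer sizes are illustrative, not the poster's actual network):

```python
# Sketch: why an ONNX export's dummy input must match the training image size.
# The conv/pool/channel numbers below are assumptions for illustration only.

def conv2d_out(size: int, kernel: int, stride: int = 1, padding: int = 0) -> int:
    """Spatial output size of a Conv2d (or pooling) layer along one axis."""
    return (size + 2 * padding - kernel) // stride + 1

def flattened_features(image_size: int, channels_out: int = 16) -> int:
    """Features reaching the first Linear layer after one 3x3 conv + 2x2 pool."""
    s = conv2d_out(image_size, kernel=3, padding=1)  # 3x3 conv, padding 1: size unchanged
    s = conv2d_out(s, kernel=2, stride=2)            # 2x2 max pool: size roughly halved
    return channels_out * s * s

# The first Linear layer was sized for 150x150 training images. A (1, 3, 3, 3)
# dummy input therefore produces far fewer flattened features than the layer's
# weight matrix expects, which is exactly the kind of mismatch behind
# "mat1 and mat2 shapes cannot be multiplied".
expected = flattened_features(150)  # what the Linear layer was built for
actual = flattened_features(3)      # what a 3x3 dummy image actually yields
print(expected, actual)
```

The same reasoning explains why the error surfaces inside `forward`: the export is not re-training anything, it is simply tracing the model with the dummy tensor you supplied.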