How do I run a compiled ONNX neural network on the CPU? #17813
FlintWangacc started this conversation in General
            Replies: 1 comment
-
You need to specify the entry function, yes. The function name is whatever you construct it as in the original program, or whatever the existing model used. You can find it by inspecting the imported MLIR file for the top-level `func.func @<name>` declaration.
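As a concrete sketch of the step above (the file contents and the name `main_graph` are illustrative stand-ins for what `iree-import-onnx` actually emits for a given model, not taken from this thread):

```shell
# Stand-in for an MLIR file produced by iree-import-onnx; the real
# function name depends on the model being imported.
cat > sample.mlir <<'EOF'
module {
  func.func @main_graph(%arg0: tensor<1x3x224x224xf32>) -> tensor<1x1000xf32> {
    ...
  }
}
EOF

# The entry function is the name after 'func.func @':
grep -o 'func\.func @[A-Za-z0-9_]*' sample.mlir
# -> func.func @main_graph
```

Whatever name appears after `func.func @` is what you pass to `iree-run-module` through its `--function=` flag when running the compiled module.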
-
I use `iree-import-onnx` to convert an ONNX file into an MLIR file:

```
iree-import-onnx $PATH/models/Computer_Vision/botnet26t_256_Opset16_timm/regnetv_064_Opset16.onnx -o ./regnetv_064_Opset16.mlir
```

Then I use `iree-compile` to compile that MLIR file. These steps all went well.
But when I try to run this neural network, I don't know what the entry function is, and I couldn't find any documentation about it. Does the compile step need an extra argument, or do I just need to find the $ENTRY_FUNCTION somewhere?