Recommended Requirements #101
Comments
I'm not sure about running the ONNX models on Mac (M-series chips), but the model works very well on my MacBook (M2, 16GB memory) with the original PyTorch weights (all versions), either by running my HF demo locally or using the notebooks. I'll try ONNX later when I have some free time (in two days).
Thank you. In the meantime I'll try to run the model without ONNX. I'll keep you posted.
Quick update: I've tried starting from one of your tutorials and it works just fine, so the problem seems to be caused by onnxruntime.
Great! When I first tested the ONNX conversion Colab script (see the ONNX conversion part in the model zoo section of the README), it consumed quite a lot of CPU memory, and the 12 GB of Colab memory couldn't hold more than a single inference pass (I was also confused by that).
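To pin down whether it's session creation or the inference pass itself that exhausts memory, one can log the process's peak resident set size around each step. A minimal sketch using only the standard library (`resource` is available on macOS and Linux; note macOS reports `ru_maxrss` in bytes while Linux reports kilobytes); the `bytearray` here is just a stand-in for the real model load or inference call:

```python
import resource
import sys

def peak_rss_mb():
    """Return the process's peak resident set size in MiB."""
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # macOS reports bytes, Linux reports kilobytes.
    return rss / (1024 * 1024) if sys.platform == "darwin" else rss / 1024

before = peak_rss_mb()
data = bytearray(50 * 1024 * 1024)  # stand-in for model load / one inference
after = peak_rss_mb()
print(f"peak RSS after the step: ~{after:.0f} MiB (was ~{before:.0f} MiB)")
```

Calling `peak_rss_mb()` before and after `session = ort.InferenceSession(...)` versus before and after `session.run(...)` would show which step actually eats the 12 GB.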
Hi,
I'm struggling to run the portrait model on my system (Mac Studio, M2 Max chip, 32GB of memory): the ANECompilerService keeps running forever, and I have to kill it manually several times to get the expected result.
I've also tried to run the general model, but it crashes due to insufficient memory.
The general-lite model has the same issues as the portrait model.
How much memory is needed in order to run these models?
I'm using onnxruntime.
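On Apple Silicon, `ANECompilerService` is invoked when ONNX Runtime's `CoreMLExecutionProvider` hands the graph to the Apple Neural Engine, so a possible workaround is to restrict the session to the plain CPU provider. A sketch under that assumption (the helper `cpu_only` is hypothetical; the provider names and `InferenceSession` usage are from the real onnxruntime API):

```python
def cpu_only(available):
    """Filter an ONNX Runtime provider list down to the CPU provider.

    CoreMLExecutionProvider is what hands the graph to ANECompilerService
    on Apple Silicon; excluding it should avoid the hang described above.
    """
    wanted = [p for p in available if p == "CPUExecutionProvider"]
    return wanted or available  # fall back to whatever exists

# Usage (assumes onnxruntime is installed):
#   import onnxruntime as ort
#   providers = cpu_only(ort.get_available_providers())
#   session = ort.InferenceSession("model.onnx", providers=providers)
```

If memory is still tight on CPU, `ort.SessionOptions()` also exposes `enable_cpu_mem_arena` and `enable_mem_pattern`, which can be set to `False` to trade speed for a smaller footprint.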