Recommended Requirements #101

Open
PasqualePuzio opened this issue Oct 8, 2024 · 4 comments
@PasqualePuzio

Hi,

I'm struggling to run the portrait model on my system (Mac Studio, M2 Max chip, 32 GB of memory): the ANECompilerService keeps running forever, and I have to kill it manually several times to get the expected result.

I've also tried to run the general model, but it crashes due to insufficient memory.

The general-lite model has the same issues as the portrait model.

How much memory is needed to run these models?

I'm using onnxruntime.

@ZhengPeng7 (Owner) commented Oct 8, 2024

I'm not sure about running the ONNX models on a Mac (M-series chips), but the original PyTorch weights (all versions) work very well on my MacBook (M2, 16 GB memory), either by running my HF demo locally or by using the notebooks in the tutorials.

I'll try the ONNX models later when I have some free time (in a couple of days).
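For reference, a minimal sketch of running the PyTorch weights the way the HF demo does. The repo id, the 1024×1024 ImageNet preprocessing, and taking the last side output as the final mask are assumptions based on the README and demo; check the model zoo for the exact details:

```python
import torch
from PIL import Image
from torchvision import transforms
from transformers import AutoModelForImageSegmentation

# Hypothetical repo id -- check the model zoo in the README for the exact one.
model = AutoModelForImageSegmentation.from_pretrained(
    "ZhengPeng7/BiRefNet", trust_remote_code=True
)
model.eval()

# On Apple silicon, PyTorch's Metal backend can be used instead of the CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
model.to(device)

# 1024x1024 ImageNet-normalized input, as in the tutorial notebooks (assumption).
preprocess = transforms.Compose([
    transforms.Resize((1024, 1024)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
x = preprocess(Image.open("portrait.jpg").convert("RGB")).unsqueeze(0).to(device)

with torch.no_grad():
    # The HF demo takes the last side output as the final mask (assumption).
    mask = model(x)[-1].sigmoid().squeeze().cpu()
```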

@PasqualePuzio (Author)

> I'm not sure about running the ONNX models on a Mac (M-series chips), but the original PyTorch weights (all versions) work very well on my MacBook (M2, 16 GB memory), either by running my HF demo locally or by using the notebooks in the tutorials.
>
> I'll try the ONNX models later when I have some free time (in a couple of days).

Thank you. In the meantime, I'll try to run the model without ONNX. I'll keep you posted.

@PasqualePuzio (Author)

Quick update: I've tried starting from one of your tutorials and it works just fine, so the problem seems to be caused by onnxruntime.
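A plausible workaround, if the hang comes from onnxruntime's CoreML execution provider compiling the graph for the Neural Engine (ANECompilerService is the macOS daemon that performs that compilation), is to pin the session to the CPU provider. A minimal sketch, with a hypothetical model path:

```python
import onnxruntime as ort

# ANECompilerService is only invoked when the CoreML execution provider is
# active; pinning the session to the CPU provider skips that compilation step.
session = ort.InferenceSession(
    "BiRefNet-portrait.onnx",            # hypothetical path to the exported model
    providers=["CPUExecutionProvider"],
)
print(session.get_providers())           # should report only CPUExecutionProvider
```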

@ZhengPeng7 (Owner)

Great! When I first tested the ONNX conversion Colab script (see the ONNX conversion part in the model zoo section of the README), it consumed quite a lot of CPU memory, and the 12 GB of Colab memory could not sustain more than a single inference (I was also confused by that).
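For context, a sketch of the kind of export such a conversion script performs. The input resolution, tensor names, and opset version here are assumptions, not the actual script's settings:

```python
import torch
from transformers import AutoModelForImageSegmentation

# Load the PyTorch weights as in the earlier sketch (hypothetical repo id).
model = AutoModelForImageSegmentation.from_pretrained(
    "ZhengPeng7/BiRefNet", trust_remote_code=True
).eval()

# torch.onnx.export traces the whole graph in CPU memory; peak usage during
# export can be several times the checkpoint size, which would explain the
# high memory cost described above.
dummy = torch.randn(1, 3, 1024, 1024)    # assumed 1024x1024 input resolution
torch.onnx.export(
    model,
    dummy,
    "BiRefNet.onnx",                     # hypothetical output path
    input_names=["image"],
    output_names=["mask"],
    opset_version=17,                    # assumed opset; the actual script may differ
)
```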
