
Possible ways to reduce memory usage #13

Open
Niu0914 opened this issue Nov 18, 2024 · 2 comments

Comments

@Niu0914 commented Nov 18, 2024

Hello, and thanks again for your open-source work! I am running the multi-concept DreamBooth training on a server with 24 GiB of GPU memory, but my code always fails at `self.accelerator.backward(loss)` with a CUDA out-of-memory error. I read the answer in issue #3 and believe it should be able to run, but I don't know what might be going wrong. Can you give me some tips?

@YuliangXiu
Owner

Maybe you can refer to #10 (comment), the Diffusers GPU-memory guide, and the DreamBooth training docs to further reduce GPU memory usage (even down to 8 GB).
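For reference, the memory-saving options described in the Diffusers DreamBooth documentation can be combined roughly like this. This is a sketch based on the official `train_dreambooth.py` example script; the paths, prompt, and model name are placeholders, and whether every flag carries over to this repo's multi-concept script is an assumption:

```shell
# Memory-saving DreamBooth launch, following the Diffusers docs.
# Assumes the official diffusers/examples/dreambooth/train_dreambooth.py
# script; bitsandbytes must be installed for --use_8bit_adam.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="./instance_images" \
  --output_dir="./dreambooth-out" \
  --instance_prompt="a photo of sks dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --gradient_checkpointing \
  --use_8bit_adam \
  --mixed_precision="fp16" \
  --max_train_steps=400
```

`--gradient_checkpointing` trades extra compute for activation memory, `--use_8bit_adam` roughly halves optimizer-state memory, and `fp16` mixed precision cuts activation size; together these are what the docs rely on to fit training into small GPUs.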

@Niu0914
Author

Niu0914 commented Nov 21, 2024

Thank you, I can train DreamBooth now!
However, when training main_mc, the loss I print is always 0. Why? (I did not change the provided source code.)
[screenshot: printed loss values]
