something went wrong: Response does not contain codes! #27
When you use Omost, this error is reported if the output program is truncated. You can increase the max length to prevent the LLM output from being cut off. If you use another model instead of an Omost-specific one, it is very likely that it will not output code correctly, and this error will also be reported. If your problem is still not solved, please attach a full screenshot of your workflow.
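For context, the error is raised when no code block can be recovered from the model's reply (for example, because the reply was cut off before the closing fence). A minimal sketch of that kind of check, assuming a fenced-code-block format and not the node's actual implementation, looks roughly like this:

```python
import re

def extract_code(response_text: str) -> str:
    """Pull the generated program out of the LLM response.

    Rough illustration only; the node's real parsing logic may differ.
    """
    match = re.search(r"```(?:python)?\n(.*?)```", response_text, re.DOTALL)
    if match is None:
        # Truncated output or a non-Omost model that never emits a code
        # block both end up here.
        raise ValueError("Response does not contain codes!")
    return match.group(1)
```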
Yes, I'm using Omost. I just loaded your "start_with_OMOST.json" workflow, clicked "Queue Prompt", and got the error.
I tried doubling the maximum length to 32768, but it didn't fix the error.
Here
I'll try upgrading transformers; I need to figure out how to do that on macOS.
OK, now the error looks different: "gpu not found".
If you choose |
Try this.
I have Python 3.10.14 installed, and the latest bitsandbytes, 0.42.0.
Please match the versions of the three libraries I gave; there is a high probability that doing so will solve the problem. bitsandbytes 0.42.0 is not quite right; please change it to bitsandbytes == 0.43.1.
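A quick way to compare what is actually installed against the recommended pins is to query package metadata directly. The package names below are only the two mentioned in this thread; substitute the exact three libraries from the recommended environment:

```python
# Print installed versions so they can be compared with the pinned ones
# (e.g. bitsandbytes == 0.43.1 mentioned above).
from importlib.metadata import version, PackageNotFoundError

for pkg in ("transformers", "bitsandbytes"):
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg} is not installed")
```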
You could also try updating Python, but unfortunately bitsandbytes requires CUDA support. I doubt you can use the int4 model, because bitsandbytes is the foundation for running that quantized model.
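To illustrate why bitsandbytes is the blocker: 4-bit checkpoints are typically loaded through the Hugging Face transformers quantization path, which delegates to bitsandbytes and therefore needs a CUDA build. A minimal sketch, with "model_id" as a placeholder rather than the node's actual checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# The 4-bit (NF4) load goes through bitsandbytes; on a Mac without CUDA
# this step is where the load fails.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "model_id",  # placeholder, not the exact Omost checkpoint
    quantization_config=quant_config,
    device_map="auto",
)
```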
Maybe it can work with a ROCm (Vulkan) device?
I started playing with python versions and now I've broken everything. |
In my code, if llama-cpp-python (llama_cpp) is installed correctly, the routine that automatically installs llama-cpp-python is skipped. As for "ERROR: llama_cpp_python-0.2.79-AVX2-macosx_13_0_x86_64.whl is not a valid wheel filename.", that means the installer failed to recognize your MPS device. Please check whether you really have llama-cpp-python in your environment.
You can see in this part of my code that if llama_cpp_python is already present in the environment, the installer is not executed.
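A sketch of the "skip the auto-install if the package is already importable" pattern described above; this is not the repository's exact code, and the generic pip spec stands in for the prebuilt wheel the node selects for the detected CPU/GPU:

```python
import importlib.util
import subprocess
import sys

def ensure_llama_cpp(wheel_spec: str = "llama-cpp-python") -> None:
    # If llama_cpp can already be imported, the install step is skipped.
    if importlib.util.find_spec("llama_cpp") is not None:
        return
    # Otherwise fall back to installing it with pip.
    subprocess.check_call([sys.executable, "-m", "pip", "install", wheel_spec])
```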
OK, we just need to wait for support: bitsandbytes-foundation/bitsandbytes#252 (comment)