Support Chinese model, please #1199
Comments
Is there a GGML version of it? If not, you need to find someone to create one, or try it yourself. Then it should automatically be supported, as it's based on LLaMA. P.S. With "automatically supported" I mean that the model type would be, not that it would automatically be in the download list. But you could download that version from somewhere and put it next to your other models.
Thank you!
I'm not sure if I understand you right. Have you tried, for example, https://huggingface.co/TheBloke/baichuan-llama-7B-GGML/tree/main? I recommend that one; it is the right format. If you download it and put it next to the other models (the download directory), it should just work. For https://github.com/ymcui/Chinese-LLaMA-Alpaca, it would need to be converted into a format like that.
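Since a lot of this thread hinges on whether a downloaded file is in the right GGML format, here is a minimal sketch of a standalone checker. It is not part of gpt4all; the magic constants come from llama.cpp's legacy loader, and the assumption that `n_vocab` is the first hyperparameter after the header is based on that same legacy layout:

```cpp
// Peek at a legacy GGML/GGJT model file's header and print its
// vocabulary size. Magic constants are llama.cpp's legacy values;
// the header layout (magic, optional version, then int32 hparams
// starting with n_vocab) is an assumption based on that format.
#include <cstdint>
#include <cstdio>

int main(int argc, char **argv) {
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s model.bin\n", argv[0]);
        return 1;
    }
    std::FILE *f = std::fopen(argv[1], "rb");
    if (!f) { std::perror("fopen"); return 1; }

    uint32_t magic = 0, version = 0;
    std::fread(&magic, sizeof(magic), 1, f);
    if (magic == 0x67676a74 /* 'ggjt' */ || magic == 0x67676d66 /* 'ggmf' */) {
        std::fread(&version, sizeof(version), 1, f); // versioned formats
    } else if (magic != 0x67676d6c /* 'ggml' */) {
        std::fprintf(stderr, "unrecognized magic 0x%08x\n", magic);
        std::fclose(f);
        return 1;
    }

    int32_t n_vocab = 0;
    std::fread(&n_vocab, sizeof(n_vocab), 1, f); // first hparam is n_vocab
    std::printf("magic=0x%08x version=%u n_vocab=%d\n", magic, version, n_vocab);
    std::fclose(f);
    return 0;
}
```

A file that doesn't print a recognized magic, or prints an unexpected vocabulary size, is a hint that the download is incomplete or in the wrong format.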
I guess there are some bugs in chat.
Not necessarily. What's your CPU and how much RAM do you have? There's also a possibility the download didn't complete correctly, and maybe there is something wrong with that model. I might have to download it myself to compare.

Edit: I've downloaded it, verified its checksum, and tried to load it. I'm running into the same error as you, so I think there's something wrong with that model. I might have a closer look later.
Thank you for your reply! Thank you for looking!
I've looked into it some more. The good news is that it is possible to get it to run by disabling a check. The bad news is: that check is there for a reason; it is used to tell LLaMA apart from Falcon. If you are not going to use a Falcon model, and since you are able to compile yourself, you can disable the check on your own system if you want. In gpt4all/gpt4all-backend/llamamodel.cpp, lines 285 to 289 (at commit 2d02c65), you can change the check to:

```cpp
llama_file_hparams hparams;
f.read(reinterpret_cast<char*>(&hparams), sizeof(hparams));
if (!(hparams.n_vocab >= 32000 && hparams.n_vocab <= 32100)) {
    //return false; // not a llama.
}
```

It's not generally recommended to do that, however.
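If you want something narrower than commenting the check out entirely, a possible sketch is to widen the accepted range instead. The extra vocabulary sizes below (49953 for Chinese-LLaMA, 49954 for Chinese-Alpaca) are assumptions based on the extended tokenizer in ymcui/Chinese-LLaMA-Alpaca; Falcon's much larger vocabulary (around 65024 entries) would still fail the check:

```cpp
// Hypothetical drop-in replacement for the quoted check in
// llamamodel.cpp. 49953/49954 are assumed vocabulary sizes of
// Chinese-LLaMA/Chinese-Alpaca; Falcon (~65024) is still rejected.
llama_file_hparams hparams;
f.read(reinterpret_cast<char*>(&hparams), sizeof(hparams));
const bool base_llama    = hparams.n_vocab >= 32000 && hparams.n_vocab <= 32100;
const bool chinese_llama = hparams.n_vocab == 49953 || hparams.n_vocab == 49954;
if (!base_llama && !chinese_llama) {
    return false; // not a llama
}
```

That way the check keeps doing its job for Falcon files while letting this particular model family through.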
Thank you! I had to change it.
Not sure. As I said, the check is necessary to distinguish LLaMA from Falcon models, so it's kind of important. And this model is a bit special. Only remove the check if you are not going to use Falcon models.
Thank you!
The post was helpful.
Duplicate of #176
Feature request
Here is the Chinese model's Git repository: https://github.com/ymcui/Chinese-LLaMA-Alpaca.
Please support it!
Motivation
Support for the Chinese model.
Your contribution
No.