
Conversation


@Ghatage Ghatage commented Apr 15, 2023

I was trying to quantize a GPT-J model and the code failed to quantize it.
I couldn't tell what the problem was, since the error message wasn't very detailed and the help messages of the converters weren't very detailed either.

I followed some steps mentioned here: #32 (comment)
It turns out the convert code expects an output directory, while the quantize code expects the absolute path to the model file itself, not an output directory.

So I added more detailed error messaging to clarify that. Now it prints the underlying error:

```
~/code/oss/cformers/cformers/cpp > ./quantize_gptj ~/.cformers/models/gptj/modelOne/modelOne-f32.bin . 2
gptj_model_quantize: loading model from '/Users/aghatage/.cformers/models/gptj/modelOne/modelOne-f32.bin'
gptj_model_quantize: failed to open '/Users/aghatage/.cformers/models/gptj/modelOne/modelOne-f32.bin' for reading: No such file or directory
main: failed to quantize model from '/Users/aghatage/.cformers/models/gptj/modelOne/modelOne-f32.bin'
main: quantize time = ...
```

