We are doing the following:
- Pick the 4-bit quantized model https://huggingface.co/unsloth/Llama-3.1-Nemotron-70B-Instruct-bnb-4bit
- Use it to fine-tune an AdapterH (bottleneck) adapter.

We reduced the bottleneck dimension to 64 instead of 256 (rough setup sketched below).
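For reference, this is roughly the kind of setup we mean. It is a minimal sketch, not our exact script, and it assumes the adapters library's `init` / `BnConfig` / `add_adapter` / `train_adapter` API; the adapter name and parameter values below are illustrative:

```python
# Minimal sketch (assumes the adapters library API; values are illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer
import adapters
from adapters import BnConfig

model_id = "unsloth/Llama-3.1-Nemotron-70B-Instruct-bnb-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The checkpoint is already bitsandbytes 4-bit quantized, so no extra
# quantization_config is passed here.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Enable adapter support on the plain transformers model.
adapters.init(model)

# Houlsby-style (AdapterH) bottleneck adapter. The bottleneck width is
# hidden_size / reduction_factor; with hidden_size 8192, reduction_factor=128
# gives a 64-dimensional bottleneck.
config = BnConfig(
    mh_adapter=True,
    output_adapter=True,
    reduction_factor=128,
    non_linearity="relu",
)
model.add_adapter("bottleneck64", config=config)
model.train_adapter("bottleneck64")  # freeze base weights, train only the adapter
```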
The base model loads fine, but during adapter fine-tuning we get the following error:

`mat1 and mat2 shapes cannot be multiplied (1024x8192 and 1x117440512)`

Please suggest how to resolve this.