I got the error below. Please help.
```
Traceback (most recent call last):
  File "/hail/cjs/finetune/ft0/lora.py", line 231, in <module>
    model, tokenizer = finetune_with_lora(model_name, output_dir)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/hail/cjs/finetune/ft0/lora.py", line 156, in finetune_with_lora
    trainer = Trainer(
              ^^^^^^^^
  File "/hail/cjs/anaconda3/envs/lora1/lib/python3.11/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/hail/cjs/anaconda3/envs/lora1/lib/python3.11/site-packages/transformers/trainer.py", line 572, in __init__
    raise ValueError(
ValueError: The model you are trying to fine-tune is quantized with QuantizationMethod.FP8 but that quantization method do not support training. Please open an issue on GitHub: https://github.com/huggingface/transformers to request the support for training support for QuantizationMethod.FP8
```
Who can help?
No response
Information
The official example scripts
My own modified scripts
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
My own task or dataset (give details below)
Reproduction
Fine-tune "deepseek-ai/DeepSeek-V3.1-Terminus" with LoRA; this error will occur.
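For reference, a minimal sketch of the kind of script that reproduces this. The model name is from the report; the LoRA and `TrainingArguments` values are placeholders I chose for illustration, not the reporter's actual settings, and loading this model needs substantial hardware, so the snippet is illustrative only:

```python
# Illustrative repro sketch (placeholder hyperparameters, not the original script).
# DeepSeek-V3.1-Terminus ships FP8-quantized weights, so Trainer.__init__
# rejects the model: QuantizationMethod.FP8 has no training support.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model

model_name = "deepseek-ai/DeepSeek-V3.1-Terminus"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

# Wrap the base model with LoRA adapters.
lora_config = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

# This is where the ValueError above is raised, since the underlying
# weights are still FP8-quantized.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out"),
)
```

As far as I understand, a common workaround (if acceptable) is to start from an unquantized BF16 checkpoint, or a quantization method that PEFT training supports, rather than the FP8 weights.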