
"AttributeError: 'MatmulLtState' object has no attribute 'memory_efficient_backward" when running in load_8bit mode #78

@Lookieman

Description


Hello,

I am running the finetune and evaluate code, and every time I use the load_8bit parameter I get the following error:

```
Traceback (most recent call last):
  File "/content/drive/MyDrive/AI6130_A2/evaluate.py", line 302, in <module>
    fire.Fire(main)
  File "/usr/local/lib/python3.11/dist-packages/fire/core.py", line 135, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/usr/local/lib/python3.11/dist-packages/fire/core.py", line 468, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/usr/local/lib/python3.11/dist-packages/fire/core.py", line 684, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/content/drive/MyDrive/AI6130_A2/evaluate.py", line 91, in main
    tokenizer, model = load_model(args)
  File "/content/drive/MyDrive/AI6130_A2/evaluate.py", line 222, in load_model
    model = PeftModel.from_pretrained(
  File "/content/drive/MyDrive/AI6130_A2/peft/src/peft/peft_model.py", line 147, in from_pretrained
    model = MODEL_TYPE_TO_PEFT_MODEL_MAPPING[config.task_type](model, config)
  File "/content/drive/MyDrive/AI6130_A2/peft/src/peft/peft_model.py", line 518, in __init__
    super().__init__(model, peft_config)
  File "/content/drive/MyDrive/AI6130_A2/peft/src/peft/peft_model.py", line 80, in __init__
    self.base_model = LoraModel(peft_config, model)
  File "/content/drive/MyDrive/AI6130_A2/peft/src/peft/tuners/lora.py", line 118, in __init__
    self._find_and_replace()
  File "/content/drive/MyDrive/AI6130_A2/peft/src/peft/tuners/lora.py", line 154, in _find_and_replace
    "memory_efficient_backward": target.state.memory_efficient_backward,
AttributeError: 'MatmulLtState' object has no attribute 'memory_efficient_backward'
```
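For context, the traceback shows the vendored peft copy reading `target.state.memory_efficient_backward` directly, an attribute that newer bitsandbytes releases no longer define on `MatmulLtState`. Below is a minimal sketch (not the maintainers' fix) of how that line in `peft/src/peft/tuners/lora.py` could read the attribute defensively with a `getattr` fallback; the `MatmulLtState` class here is a stand-in that mimics the missing attribute:

```python
class MatmulLtState:
    """Stand-in for bitsandbytes' MatmulLtState after the attribute was removed."""
    has_fp16_weights = True


state = MatmulLtState()

# Original peft code (raises AttributeError on newer bitsandbytes):
#   kwargs = {"memory_efficient_backward": state.memory_efficient_backward}
# Defensive version, falling back to False when the attribute is gone:
kwargs = {"memory_efficient_backward": getattr(state, "memory_efficient_backward", False)}
print(kwargs)  # {'memory_efficient_backward': False}
```

Alternatively, downgrading bitsandbytes to whatever version the repo's requirements pin (if any) avoids editing the vendored peft at all.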

The parameters used for evaluate:

```
!CUDA_VISIBLE_DEVICES=0 python evaluate.py \
    --adapter LoRA \
    --dataset AddSub \
    --base_model 'bigscience/bloomz-1b7' \
    --lora_weights ./trained_models/bloomz-lora \
    --load_8bit
```

I get the same error for finetune using the following parameters:

```
!CUDA_VISIBLE_DEVICES=0 python finetune.py \
    --base_model 'bigscience/bloomz-1b7' \
    --data_path './ft-training_set/math_7k.json' \
    --output_dir './trained_models/bloomz-lora' \
    --batch_size 4 \
    --micro_batch_size 1 \
    --num_epochs 2 \
    --learning_rate 3e-4 \
    --cutoff_len 256 \
    --val_set_size 120 \
    --adapter_name lora \
    --load_8bit True
```

I have seen issue #55, but it doesn't seem to be the same problem. Any pointers on what might be causing this?
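One way to narrow the mismatch down is to check whether the installed bitsandbytes still exposes the attribute this peft fork expects. A small diagnostic sketch (the module path is assumed from bitsandbytes' internal layout and may differ across versions; run it in the same environment as finetune.py):

```python
# Check whether the installed bitsandbytes still defines the attribute
# that peft's lora.py reads. If this prints False, the installed
# bitsandbytes is newer than the vendored peft copy expects.
try:
    from bitsandbytes.autograd._functions import MatmulLtState
except ImportError:
    print("bitsandbytes is not installed in this environment")
else:
    print("memory_efficient_backward present:",
          hasattr(MatmulLtState(), "memory_efficient_backward"))
```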
