I was trying to test my pruned model with the generate.py script, but I get this error: AttributeError: 'GenerationConfig' object has no attribute 'prefill_chunk_size'. I suspect this is related to an upgrade of the transformers library.
Error:
Traceback (most recent call last):
File "/home/robertovadacca/Git/LLM-Pruner/.venv/lib/python3.10/site-packages/gradio/queueing.py", line 625, in process_events
response = await route_utils.call_process_api(
File "/home/robertovadacca/Git/LLM-Pruner/.venv/lib/python3.10/site-packages/gradio/route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
File "/home/robertovadacca/Git/LLM-Pruner/.venv/lib/python3.10/site-packages/gradio/blocks.py", line 2108, in process_api
result = await self.call_function(
File "/home/robertovadacca/Git/LLM-Pruner/.venv/lib/python3.10/site-packages/gradio/blocks.py", line 1667, in call_function
prediction = await utils.async_iteration(iterator)
File "/home/robertovadacca/Git/LLM-Pruner/.venv/lib/python3.10/site-packages/gradio/utils.py", line 735, in async_iteration
return await anext(iterator)
File "/home/robertovadacca/Git/LLM-Pruner/.venv/lib/python3.10/site-packages/gradio/utils.py", line 729, in __anext__
return await anyio.to_thread.run_sync(
File "/home/robertovadacca/Git/LLM-Pruner/.venv/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/home/robertovadacca/Git/LLM-Pruner/.venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2461, in run_sync_in_worker_thread
return await future
File "/home/robertovadacca/Git/LLM-Pruner/.venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 962, in run
result = context.run(func, *args)
File "/home/robertovadacca/Git/LLM-Pruner/.venv/lib/python3.10/site-packages/gradio/utils.py", line 712, in run_sync_iterator_async
return next(iterator)
File "/home/robertovadacca/Git/LLM-Pruner/.venv/lib/python3.10/site-packages/gradio/utils.py", line 873, in gen_wrapper
response = next(iterator)
File "/home/robertovadacca/Git/LLM-Pruner/generate.py", line 68, in evaluate
generation_output = model.generate(
File "/home/robertovadacca/Git/LLM-Pruner/LLMPruner/peft/peft_model.py", line 717, in generate
outputs = self.base_model.generate(**kwargs)
File "/home/robertovadacca/Git/LLM-Pruner/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/robertovadacca/Git/LLM-Pruner/.venv/lib/python3.10/site-packages/transformers/generation/utils.py", line 2465, in generate
result = self._sample(
File "/home/robertovadacca/Git/LLM-Pruner/.venv/lib/python3.10/site-packages/transformers/generation/utils.py", line 3416, in _sample
if generation_config.prefill_chunk_size is not None:
AttributeError: 'GenerationConfig' object has no attribute 'prefill_chunk_size'
As a workaround, I had to set model.generation_config.prefill_chunk_size = None before calling generate() (see the sketch below).
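For anyone hitting the same thing, here is a minimal sketch of where I placed the workaround inside the evaluate() function of generate.py. Only the attribute assignment is the actual fix; the surrounding variable names (input_ids, generation_config, max_new_tokens) are assumptions for illustration and should match whatever the script already uses.

```python
# Newer transformers versions check generation_config.prefill_chunk_size inside
# _sample(); a config coming from an older code path (here, the vendored peft
# wrapper in LLMPruner) can lack that attribute, so add it explicitly before
# generating.
model.generation_config.prefill_chunk_size = None

generation_output = model.generate(
    input_ids=input_ids,                  # assumed: the tokenized prompt tensor
    generation_config=generation_config,  # assumed: the script's GenerationConfig
    max_new_tokens=max_new_tokens,        # assumed: generation length limit
)
```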