Description
(matrix3d) root@autodl-container-5abc4eba69-b9b8fc1d:~/autodl-fs/FYP/Matrix-3D# python code/panoramic_image_generation.py \
  --mode=i2p \
  --input_image_path "./data/image2.jpg" \
  --output_path $output_dir
Keyword arguments {'device': 'cuda:0'} are not expected by FluxFillPipeline and will be ignored.
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 46.55it/s]
Loading pipeline components...: 29%|██████████████████████████████████████ | 2/7 [00:00<00:00, 14.58it/s]torch_dtype is deprecated! Use dtype instead!
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 80.91it/s]
You set add_prefix_space. The tokenizer needs to be converted from the slow tokenizers | 0/2 [00:00<?, ?it/s]
Loading pipeline components...: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:00<00:00, 14.04it/s]
Loading LoRA weights from: /root/autodl-fs/FYP/worldgen_img2scene.safetensors
No LoRA keys associated to CLIPTextModel found with the prefix='text_encoder'. This is safe to ignore if LoRA state dict didn't originally have any CLIPTextModel related params. You can also try specifying prefix=None to resolve the warning. Otherwise, open an issue if you think it's unexpected: https://github.com/huggingface/diffusers/issues/new
torch_dtype is deprecated! Use dtype instead!
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [01:10<00:00, 17.57s/it]
Traceback (most recent call last):
File "/autodl-fs/data/FYP/Matrix-3D/code/panoramic_image_generation.py", line 118, in <module>
main(args)
File "/autodl-fs/data/FYP/Matrix-3D/code/panoramic_image_generation.py", line 67, in main
i2p_Pipeline = i2pano(device)
File "/autodl-fs/data/FYP/Matrix-3D/code/pano_init/i2p_model.py", line 34, in __init__
self.Lamma_Video = Lamma_Video(self.device)
File "/autodl-fs/data/FYP/Matrix-3D/code/pano_init/prompt/prompt.py", line 19, in __init__
self.processor = AutoProcessor.from_pretrained("/root/autodl-fs/VideoLLaMA", trust_remote_code=True)
File "/root/miniconda3/envs/matrix3d/lib/python3.10/site-packages/transformers/models/auto/processing_auto.py", line 382, in from_pretrained
processor_class = get_class_from_dynamic_module(
File "/root/miniconda3/envs/matrix3d/lib/python3.10/site-packages/transformers/dynamic_module_utils.py", line 581, in get_class_from_dynamic_module
return get_class_in_module(class_name, final_module, force_reload=force_download)
File "/root/miniconda3/envs/matrix3d/lib/python3.10/site-packages/transformers/dynamic_module_utils.py", line 276, in get_class_in_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/root/.cache/huggingface/modules/transformers_modules/VideoLLaMA/processing_videollama3.py", line 27, in <module>
from . import image_processing_videollama3
File "/root/.cache/huggingface/modules/transformers_modules/VideoLLaMA/image_processing_videollama3.py", line 36, in <module>
from transformers.image_utils import (
ImportError: cannot import name 'VideoInput' from 'transformers.image_utils' (/root/miniconda3/envs/matrix3d/lib/python3.10/site-packages/transformers/image_utils.py)
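For context: this ImportError means the VideoLLaMA3 remote code expects `VideoInput` to be exported from `transformers.image_utils`, but the installed `transformers` release no longer exposes it there (newer releases moved video-related types out of `image_utils`). The clean fix is to install the `transformers` version that Matrix-3D's requirements pin. As a stopgap, the general "re-export a moved attribute" pattern can be sketched like this — `ensure_attr` and the fallback-module choice are illustrative, not part of Matrix-3D or transformers:

```python
import importlib


def ensure_attr(module_name: str, attr: str, fallback_module: str) -> bool:
    """If `module_name` lacks `attr`, try to copy it from `fallback_module`.

    Returns True if the attribute is available on `module_name` afterwards.
    This is a generic monkey-patch sketch; for the error above one would
    attempt to re-export VideoInput into transformers.image_utils from
    wherever the installed transformers release defines it (an assumption
    that must be checked against the installed version).
    """
    mod = importlib.import_module(module_name)
    if hasattr(mod, attr):
        return True  # already present, nothing to do
    try:
        fallback = importlib.import_module(fallback_module)
    except ImportError:
        return False  # fallback location does not exist in this install
    if hasattr(fallback, attr):
        # Re-export under the old location so legacy imports keep working.
        setattr(mod, attr, getattr(fallback, attr))
        return True
    return False
```

Patching a third-party module like this should be done before the failing import runs (i.e. before `AutoProcessor.from_pretrained`), and only as a temporary workaround until the pinned `transformers` version is installed.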