Strange errors trying to run in a docker container #29

@geocybrid

Description

Hi folks,

Thanks for sharing this very interesting project!

I've been trying to run the Gradio demo in a Docker container with CUDA 12.6 and FlashAttention 2.8.3, and I'm getting a strange error:

/usr/local/lib/python3.10/dist-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
[CustomFluxPipeline] Loading FLUX Pipeline
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00,  6.38it/s]
Loading pipeline components...:  86%|█████████████████████████████████████████████████████████████████████████████████████▋              | 6/7 [00:01<00:00,  3.86it/s]You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
Loading pipeline components...: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:01<00:00,  4.82it/s]
[Quantization] Start freezing
[Quantization] Finished
Quantization time: 21.46683645248413
UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=None`.
Downloading: "https://github.com/elliottzheng/face-detection/releases/download/0.0.1/Resnet50_Final.pth" to /root/.cache/torch/hub/checkpoints/Resnet50_Final.pth
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 104M/104M [00:01<00:00, 92.2MB/s]
[FlorenceSAM] init on device cuda
Some weights of Florence2ForConditionalGeneration were not initialized from the model checkpoint at /models/checkpoints/Florence-2-large and are newly initialized: ['language_model.lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Checkpoint root does not exist.
Init new modulation adapter
[load_dit_lora] no condition lora
Traceback (most recent call last):
  File "/app/run_gradio.py", line 88, in <module>
    load_dit_lora(model, model.pipe, config, dtype, init_device, f"{ckpt_root}", is_training=False)
  File "/app/src/flux/pipeline_tools.py", line 650, in load_dit_lora
    assert is_training
AssertionError

The models are located in the /models/checkpoint directory instead of ./checkpoints, and the environment variables are updated accordingly. I can't see any other changes that could have triggered this. Any thoughts?
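
For context on where the assertion seems to come from: the log lines "Checkpoint root does not exist.", "Init new modulation adapter", and "[load_dit_lora] no condition lora" suggest the loader never finds the LoRA weights under the configured checkpoint root, falls through to the no-condition-LoRA branch, and then trips `assert is_training` because run_gradio.py calls load_dit_lora with is_training=False. As a quick sanity check from inside the container, I tried the sketch below (the CHECKPOINT_ROOT variable name is a guess on my part; substitute whatever variable run_gradio.py actually reads to build ckpt_root):

import os

# Hypothetical env var name; replace with the one the demo actually reads.
ckpt_root = os.environ.get("CHECKPOINT_ROOT", "./checkpoints")
print("checkpoint root:", ckpt_root)
print("exists:", os.path.isdir(ckpt_root))

# If the directory resolves, list what the loader would see there.
if os.path.isdir(ckpt_root):
    for name in sorted(os.listdir(ckpt_root)):
        print("  ", name)

If "exists" comes back False, the singular/plural mismatch between /models/checkpoint and ./checkpoints above would be the first thing I'd rule out.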
