[Bug] A fine-tuned checkpoint fails to load in lmdeploy; how can I fix it? #1082

Open
@jimmysue

Description

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the issue you submit lacks environment info and a minimal reproducible demo, it will be hard for us to reproduce and resolve it, which reduces the likelihood of receiving feedback.

Describe the bug

I fine-tuned the InternVL 2.5 8B model following the official tutorial, but serving the resulting checkpoint with lmdeploy fails. How can I resolve this? The error output is as follows:

The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
You are using a model of type internvl_chat to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
Traceback (most recent call last):
  File "/mnt/zjqd/ts/sjz/envs/py311/bin/lmdeploy", line 8, in <module>
    sys.exit(run())
             ^^^^^
  File "/mnt/zjqd/ts/sjz/envs/py311/lib/python3.11/site-packages/lmdeploy/cli/entrypoint.py", line 39, in run
    args.run(args)
  File "/mnt/zjqd/ts/sjz/envs/py311/lib/python3.11/site-packages/lmdeploy/cli/serve.py", line 315, in api_server
    backend = autoget_backend(args.model_path)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/zjqd/ts/sjz/envs/py311/lib/python3.11/site-packages/lmdeploy/archs.py", line 40, in autoget_backend
    turbomind_has = is_supported_turbomind(model_path)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/zjqd/ts/sjz/envs/py311/lib/python3.11/site-packages/lmdeploy/turbomind/supported_models.py", line 113, in is_supported
    llm_arch = cfg.llm_config.architectures[0]
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'dict' object has no attribute 'architectures'
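
For anyone tracing this: the crash happens while lmdeploy auto-detects a backend, at the point where supported_models.py reads cfg.llm_config.architectures[0] from the checkpoint config. Below is a minimal diagnostic sketch (my own, not from lmdeploy) that reproduces that exact attribute access on the fine-tuned checkpoint; it assumes only that transformers is installed and that the model path matches the reproduction command below.

# Diagnostic sketch: mimic the attribute access performed in
# lmdeploy/turbomind/supported_models.py on the checkpoint's config.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained(
    "internvl2_5_8b_dynamic_res_2nd_finetune_full",  # path from the repro command
    trust_remote_code=True,
)

llm_cfg = getattr(cfg, "llm_config", None)
print(type(llm_cfg))  # lmdeploy expects a config object here, not a plain dict

if isinstance(llm_cfg, dict):
    # This is the state that produces the AttributeError in the traceback above.
    print("llm_config deserialized as a dict; architectures:",
          llm_cfg.get("architectures"))
elif llm_cfg is not None:
    print("architectures:", llm_cfg.architectures)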

Reproduction

lmdeploy serve api_server internvl2_5_8b_dynamic_res_2nd_finetune_full 
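
Since the exception is raised inside autoget_backend() while probing turbomind support, one workaround worth trying (an assumption on my part, not a confirmed fix) is to skip the auto-detection by naming the backend explicitly:

lmdeploy serve api_server internvl2_5_8b_dynamic_res_2nd_finetune_full --backend pytorch

Whether the PyTorch engine then loads this fine-tuned config cleanly is untested here; if the turbomind backend is required, the checkpoint's config itself likely needs fixing instead.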

Environment

sys.platform: linux
Python: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0,1,2,3,4,5,6,7: NVIDIA H100 80GB HBM3
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
PyTorch: 2.6.0+cu124
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.5.3 (Git Hash 66f0cb9eb66affd2da3bf5f8d897376f04aae6af)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 12.4
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 90.1
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, COMMIT_SHA=2236df1770800ffea5697b11b0bb0d910b2e59e1, CUDA_VERSION=12.4, CUDNN_VERSION=9.1.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.6.0, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, 

TorchVision: 0.21.0+cu124
LMDeploy: 0.9.0+
transformers: 4.52.4
gradio: Not Found
fastapi: 0.115.12
pydantic: 2.11.7
triton: 3.2.0
NVIDIA Topology: 
        GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    NIC0    NIC1    NIC2    NIC3    NIC4    NIC5    NIC6    NIC7    NIC8    CPU Affinity      NUMA Affinity   GPU NUMA ID
GPU0     X      NV18    NV18    NV18    NV18    NV18    NV18    NV18    PXB     PXB     NODE    NODE    NODE    SYS     SYS     SYS     SYS     0-31,64-95        0               N/A
GPU1    NV18     X      NV18    NV18    NV18    NV18    NV18    NV18    PXB     PXB     NODE    NODE    NODE    SYS     SYS     SYS     SYS     0-31,64-95        0               N/A
GPU2    NV18    NV18     X      NV18    NV18    NV18    NV18    NV18    NODE    NODE    NODE    PXB     PXB     SYS     SYS     SYS     SYS     0-31,64-95        0               N/A
GPU3    NV18    NV18    NV18     X      NV18    NV18    NV18    NV18    NODE    NODE    NODE    PXB     PXB     SYS     SYS     SYS     SYS     0-31,64-95        0               N/A
GPU4    NV18    NV18    NV18    NV18     X      NV18    NV18    NV18    SYS     SYS     SYS     SYS     SYS     PXB     PXB     NODE    NODE    32-63,96-127      1               N/A
GPU5    NV18    NV18    NV18    NV18    NV18     X      NV18    NV18    SYS     SYS     SYS     SYS     SYS     PXB     PXB     NODE    NODE    32-63,96-127      1               N/A
GPU6    NV18    NV18    NV18    NV18    NV18    NV18     X      NV18    SYS     SYS     SYS     SYS     SYS     NODE    NODE    PXB     PXB     32-63,96-127      1               N/A
GPU7    NV18    NV18    NV18    NV18    NV18    NV18    NV18     X      SYS     SYS     SYS     SYS     SYS     NODE    NODE    PXB     PXB     32-63,96-127      1               N/A
NIC0    PXB     PXB     NODE    NODE    SYS     SYS     SYS     SYS      X      PXB     NODE    NODE    NODE    SYS     SYS     SYS     SYS
NIC1    PXB     PXB     NODE    NODE    SYS     SYS     SYS     SYS     PXB      X      NODE    NODE    NODE    SYS     SYS     SYS     SYS
NIC2    NODE    NODE    NODE    NODE    SYS     SYS     SYS     SYS     NODE    NODE     X      NODE    NODE    SYS     SYS     SYS     SYS
NIC3    NODE    NODE    PXB     PXB     SYS     SYS     SYS     SYS     NODE    NODE    NODE     X      PXB     SYS     SYS     SYS     SYS
NIC4    NODE    NODE    PXB     PXB     SYS     SYS     SYS     SYS     NODE    NODE    NODE    PXB      X      SYS     SYS     SYS     SYS
NIC5    SYS     SYS     SYS     SYS     PXB     PXB     NODE    NODE    SYS     SYS     SYS     SYS     SYS      X      PXB     NODE    NODE
NIC6    SYS     SYS     SYS     SYS     PXB     PXB     NODE    NODE    SYS     SYS     SYS     SYS     SYS     PXB      X      NODE    NODE
NIC7    SYS     SYS     SYS     SYS     NODE    NODE    PXB     PXB     SYS     SYS     SYS     SYS     SYS     NODE    NODE     X      PXB
NIC8    SYS     SYS     SYS     SYS     NODE    NODE    PXB     PXB     SYS     SYS     SYS     SYS     SYS     NODE    NODE    PXB      X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_2
  NIC3: mlx5_3
  NIC4: mlx5_4
  NIC5: mlx5_5
  NIC6: mlx5_6
  NIC7: mlx5_7
  NIC8: mlx5_8

Error traceback
