Description
Checklist:
- Searched related historical issues for an answer
- Read through the FAQ
- Read through the PaddleX documentation
- Confirmed that the bug has not yet been fixed in the latest version
Problem description
PaddleX v3.2.1 requires a network connection even when the model files are hosted locally. The same model file layout works with v2.1.4 (possibly other versions as well, but not 3.2.1). In an Internet-connected environment, the same Docker image and model config start up without re-downloading the model files.
Reproduction
Docker image nvidia/cuda:12.6.3-cudnn-runtime-ubuntu22.04 as the base, with the following packages installed (see the install sketch after the list):
paddleocr==3.2.0
paddlex[ocr,serving,cv]==3.2.1
paddlepaddle-gpu==3.2.0
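A minimal install sketch for the packages above (assumptions: python3/pip are already present in the base image, and the paddlepaddle-gpu wheel may need to come from PaddlePaddle's own package index rather than PyPI):
# assumes python3/pip exist in the base image; adjust the index for the GPU wheel if needed
pip install paddleocr==3.2.0 "paddlex[ocr,serving,cv]==3.2.1" paddlepaddle-gpu==3.2.0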
Launch the server by running:
paddlex --serve --pipeline /paddle/config/instance_segmentation.yaml --save_path /paddle/output --device gpu --host 0.0.0.0 --port 8093
The pipeline config file is as follows:
pipeline_name: instance_segmentation
SubModules:
  InstanceSegmentation:
    module_name: instance_segmentation
    model_name: Mask-RT-DETR-H
    model_dir: null
    batch_size: 8
    threshold: 0.5
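For reference, a variant of the same config that points model_dir at the local copy is shown below; whether this bypasses the hoster check in v3.2.1 has not been verified here, it is only the expected offline setup:
pipeline_name: instance_segmentation
SubModules:
  InstanceSegmentation:
    module_name: instance_segmentation
    model_name: Mask-RT-DETR-H
    # assumption: an explicit local path instead of null, so no hoster lookup should be needed
    model_dir: /root/.paddlex/official_models/Mask-RT-DETR-H
    batch_size: 8
    threshold: 0.5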
- Which model and dataset are you using?
Mask-RT-DETR-H, hosted locally under /root/.paddlex/official_models:
root@4cc011f6d8f5:~/.paddlex/official_models# ls
Mask-RT-DETR-H OCRNet_HRNet-W48 PP-DocLayout-L PP-LCNet_x0_25_textline_ori PP-LCNet_x1_0_doc_ori PP-OCRv4_server_rec PP-OCRv4_server_seal_det PP-OCRv5_mobile_det PP-OCRv5_mobile_rec PP-OCRv5_server_det PP-OCRv5_server_rec RT-DETR-H_layout_3cls UVDoc
- Please provide the error messages and related logs
No model hoster is available! Please check your network connection to one of the following model hosts:
HuggingFace (https://huggingface.co),
ModelScope (https://modelscope.cn),
AIStudio (https://aistudio.baidu.com), or
BOS (https://paddle-model-ecology.bj.bcebos.com).
Otherwise, only local models can be used.
No model hoster is available! Please check your network connection to one of the following model hosts:
HuggingFace (https://huggingface.co),
ModelScope (https://modelscope.cn),
AIStudio (https://aistudio.baidu.com), or
BOS (https://paddle-model-ecology.bj.bcebos.com).
Otherwise, only local models can be used.
Creating model: ('OCRNet_HRNet-W48', None)
No available model hosting platforms detected. Please check your network connection.
Creating model: ('Mask-RT-DETR-H', None)
No available model hosting platforms detected. Please check your network connection.
Traceback (most recent call last):
File "/usr/local/bin/paddlex", line 10, in
sys.exit(console_entry())
File "/usr/local/lib/python3.10/dist-packages/paddlex/main.py", line 26, in console_entry
main()
File "/usr/local/lib/python3.10/dist-packages/paddlex/paddlex_cli.py", line 481, in main
serve(
File "/usr/local/lib/python3.10/dist-packages/paddlex/paddlex_cli.py", line 380, in serve
pipeline = create_pipeline(
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/pipelines/init.py", line 166, in create_pipeline
pipeline = BasePipeline.get(pipeline_name)(
File "/usr/local/lib/python3.10/dist-packages/paddlex/utils/deps.py", line 202, in _wrapper
return old_init_func(self, *args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/pipelines/_parallel.py", line 103, in init
self._pipeline = self._create_internal_pipeline(config, self.device)
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/pipelines/_parallel.py", line 158, in _create_internal_pipeline
return self._pipeline_cls(
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/pipelines/semantic_segmentation/pipeline.py", line 60, in init
self.semantic_segmentation_model = self.create_model(
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/pipelines/base.py", line 105, in create_model
model = create_predictor(
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/models/init.py", line 69, in create_predictor
model_dir = official_models[model_name]
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/utils/official_models.py", line 582, in getitem
return self._get_model_local_path(model_name)
File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/utils/official_models.py", line 557, in _get_model_local_path
raise Exception(msg)
Exception: No available model hosting platforms detected. Please check your network connection.
Environment
- Please provide the PaddlePaddle, PaddleX, and Python version numbers
See above.
- Please provide your operating system information, e.g. Linux/Windows/macOS
Docker on Linux.
- Which CUDA/cuDNN versions are you using?
See above.