
Does the high-performance inference plugin support a self-trained document image orientation classification model? #2865

Open
1274739295 opened this issue Jan 16, 2025 · 8 comments

@1274739295

The serial number I applied for is:

[Image]

The inference code is:

from paddlex import create_model

# Load the model
model = create_model("./output/best_model/inference/",
                     use_hpip=True,
                     hpi_params={"1988-EEEA-43DD-A321"})

outputs = model.predict("./dataset/text_image_orientation/val/img_1_1386.png")

The error:

Traceback (most recent call last):
  File "/mnt/PaddleX/test_ocr.py", line 17, in <module>
    model = create_model("./output/best_model/inference/",
  File "/mnt/PaddleX/paddlex/model.py", line 29, in create_model
    return _ModelBasedInference(model, *args, **kwargs)
  File "/mnt/PaddleX/paddlex/model.py", line 57, in __init__
    self._predictor = create_predictor(*args, **kwargs)
  File "/mnt/PaddleX/paddlex/inference/models/__init__.py", line 78, in create_predictor
    return _create_hp_predictor(
  File "/mnt/PaddleX/paddlex/inference/models/__init__.py", line 50, in _create_hp_predictor
    predictor = HPPredictor.get(model_name)(
  File "image_classification.py", line 30, in paddlex_hpi.models.image_classification.ClasPredictor.__init__
  File "base.py", line 165, in paddlex_hpi.models.base.HPPredictorWithDataReader.__init__
  File "base.py", line 55, in paddlex_hpi.models.base.HPPredictor.__init__
  File "base.py", line 114, in paddlex_hpi.models.base.HPPredictor._get_hpi_config
AttributeError: 'set' object has no attribute 'get'
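
Judging from the traceback, hpi_params={"1988-EEEA-43DD-A321"} is a Python set literal, while _get_hpi_config calls .get() on it, which is what produces the 'set' object has no attribute 'get' error. The parameter presumably expects a dict; below is a minimal sketch, assuming the key is "serial_number" as in the snippets later in this thread (not verified against the PaddleX documentation):

from paddlex import create_model

# Hypothetical corrected call: hpi_params passed as a dict rather than a set.
# The "serial_number" key is an assumption based on later usage in this thread.
model = create_model("./output/best_model/inference/",
                     use_hpip=True,
                     hpi_params={"serial_number": "1988-EEEA-43DD-A321"})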

@1274739295 1274739295 changed the title "Does the high-performance inference plugin support a self-trained text image orientation classification model?" → "Does the high-performance inference plugin support a self-trained document image orientation classification model?" Jan 16, 2025
@1274739295
Author

Update: switching to the official model still produces the same error.

The inference code:

model = create_model("PP-LCNet_x1_0_doc_ori",
                     use_hpip=True,
                     hpi_params={"1988-EEEA-43DD-A321"})
outputs = model.predict("./dataset/text_image_orientation/val/img_1_1386.png")

@cuicheng01
Collaborator

cuicheng01 commented Jan 16, 2025

This usage is not supported. High-performance inference can currently only be used within a pipeline.
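
For reference, a minimal sketch of pipeline-level usage with the high-performance plugin enabled, based on the snippets later in this thread (the YAML path and serial number are the reporter's own placeholder values):

from paddlex import create_pipeline

# High-performance inference is enabled for the whole pipeline via use_hpip,
# with the license serial number passed through hpi_params.
pipeline = create_pipeline(pipeline="./my_path/OCR.yaml",
                           use_hpip=True,
                           hpi_params={"serial_number": "6D34-19B1-49BB-BC8B"})
for res in pipeline.predict("./dataset/text_image_orientation/val/img_1_1386.png"):
    res.print()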

@1274739295
Author

In that case, my document orientation classification does not use high-performance inference, but the general OCR pipeline does, and it then raises an error.

The code:

import time
from paddlex import create_model, create_pipeline

model = create_model("PP-LCNet_x1_0_doc_ori")

pipeline = create_pipeline(pipeline="./my_path/OCR.yaml",
                           # pipeline="OCR",
                           # device='gpu',
                           use_hpip=True,
                           hpi_params={"serial_number": "6D34-19B1-49BB-BC8B"})
start_time = time.time()
output = pipeline.predict("./dataset/text_image_orientation/val/img_1_1386.png")
outputs = model.predict("./dataset/text_image_orientation/val/img_1_1386.png")

for res in output:
    res.print()
for res in outputs:
    print("分类")
    res.print()

The error:
Traceback (most recent call last):
  File "/home/dell/anaconda3/envs/OCR/lib/python3.9/site-packages/fastdeploy/c_lib_wrap.py", line 165, in <module>
    from .libs.fastdeploy_main import *
ImportError: /home/dell/anaconda3/envs/OCR/lib/python3.9/site-packages/fastdeploy/libs/third_libs/paddle_inference/paddle/lib/libpaddle_inference.so: undefined symbol: _ZN3phi23FusedLayerNormInferMetaERKNS_10MetaTensorES2_S2_S2_S2_ffififfPS0_S3_S3_S3_

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/mnt/PaddleX/test_ocr.py", line 20, in <module>
    pipeline = create_pipeline(pipeline="./my_path/OCR.yaml",
  File "/mnt/PaddleX/paddlex/inference/pipelines/__init__.py", line 119, in create_pipeline
    return create_pipeline_from_config(
  File "/mnt/PaddleX/paddlex/inference/pipelines/__init__.py", line 94, in create_pipeline_from_config
    pipeline = BasePipeline.get(pipeline_name)(
  File "/mnt/PaddleX/paddlex/inference/pipelines/base.py", line 39, in patched___init__
    ret = ctx.run(init_func, self, *args, **kwargs)
  File "/mnt/PaddleX/paddlex/inference/pipelines/ocr.py", line 36, in __init__
    self._build_predictor(text_det_model, text_rec_model)
  File "/mnt/PaddleX/paddlex/inference/pipelines/ocr.py", line 43, in _build_predictor
    self.text_det_model = self._create(model=text_det_model)
  File "/mnt/PaddleX/paddlex/inference/pipelines/base.py", line 71, in _create
    return create_predictor(
  File "/mnt/PaddleX/paddlex/inference/models/__init__.py", line 78, in create_predictor
    return _create_hp_predictor(
  File "/mnt/PaddleX/paddlex/inference/models/__init__.py", line 44, in _create_hp_predictor
    from paddlex_hpi.models import HPPredictor
  File "__init__.py", line 2, in init paddlex_hpi.models.__init__
  File "anomaly_detection.py", line 4, in init paddlex_hpi.models.anomaly_detection
  File "/home/dell/anaconda3/envs/OCR/lib/python3.9/site-packages/fastdeploy/__init__.py", line 127, in <module>
    from .c_lib_wrap import (
  File "/home/dell/anaconda3/envs/OCR/lib/python3.9/site-packages/fastdeploy/c_lib_wrap.py", line 168, in <module>
    raise RuntimeError(f"FastDeploy initalized failed! Error: {e}")
RuntimeError: FastDeploy initalized failed! Error: /home/dell/anaconda3/envs/OCR/lib/python3.9/site-packages/fastdeploy/libs/third_libs/paddle_inference/paddle/lib/libpaddle_inference.so: undefined symbol: _ZN3phi23FusedLayerNormInferMetaERKNS_10MetaTensorES2_S2_S2_S2_ffififfPS0_S3_S3_S3_
What is causing this?

@cuicheng01
Collaborator

I don't quite understand what you mean. Could you explain in more detail?

@1274739295
Author

Detailed explanation: I put the document classification model and the general OCR pipeline in the same .py file. The document orientation classification is a single model that does not use high-performance inference, while the general OCR pipeline uses the high-performance inference plugin.

The full code of the .py file is as follows:

from paddlex import create_model, create_pipeline

cls_model = create_model("PP-LCNet_x1_0_doc_ori")
ocr_model = create_pipeline(pipeline="./my_path/OCR.yaml", use_hpip=True,
                            hpi_params={"serial_number": "6D34-19B1-49BB-BC8B"})
ocr_output = ocr_model.predict("./dataset/text_image_orientation/val/img_1_1386.png")
cls_output = cls_model.predict("./dataset/text_image_orientation/val/img_1_1386.png")
for res in ocr_output:
    res.print()
for res in cls_output:
    print("分类")
    res.print()

When I run this .py file, the following error is reported:

Traceback (most recent call last):
  File "/home/dell/anaconda3/envs/OCR/lib/python3.9/site-packages/fastdeploy/c_lib_wrap.py", line 165, in <module>
    from .libs.fastdeploy_main import *
ImportError: /home/dell/anaconda3/envs/OCR/lib/python3.9/site-packages/fastdeploy/libs/third_libs/paddle_inference/paddle/lib/libpaddle_inference.so: undefined symbol: _ZN3phi23FusedLayerNormInferMetaERKNS_10MetaTensorES2_S2_S2_S2_ffififfPS0_S3_S3_S3_

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/mnt/PaddleX/test_ocr.py", line 20, in <module>
    pipeline = create_pipeline(pipeline="./my_path/OCR.yaml",
  File "/mnt/PaddleX/paddlex/inference/pipelines/__init__.py", line 119, in create_pipeline
    return create_pipeline_from_config(
  File "/mnt/PaddleX/paddlex/inference/pipelines/__init__.py", line 94, in create_pipeline_from_config
    pipeline = BasePipeline.get(pipeline_name)(
  File "/mnt/PaddleX/paddlex/inference/pipelines/base.py", line 39, in patched___init__
    ret = ctx.run(init_func, self, *args, **kwargs)
  File "/mnt/PaddleX/paddlex/inference/pipelines/ocr.py", line 36, in __init__
    self._build_predictor(text_det_model, text_rec_model)
  File "/mnt/PaddleX/paddlex/inference/pipelines/ocr.py", line 43, in _build_predictor
    self.text_det_model = self._create(model=text_det_model)
  File "/mnt/PaddleX/paddlex/inference/pipelines/base.py", line 71, in _create
    return create_predictor(
  File "/mnt/PaddleX/paddlex/inference/models/__init__.py", line 78, in create_predictor
    return _create_hp_predictor(
  File "/mnt/PaddleX/paddlex/inference/models/__init__.py", line 44, in _create_hp_predictor
    from paddlex_hpi.models import HPPredictor
  File "__init__.py", line 2, in init paddlex_hpi.models.__init__
  File "anomaly_detection.py", line 4, in init paddlex_hpi.models.anomaly_detection
  File "/home/dell/anaconda3/envs/OCR/lib/python3.9/site-packages/fastdeploy/__init__.py", line 127, in <module>
    from .c_lib_wrap import (
  File "/home/dell/anaconda3/envs/OCR/lib/python3.9/site-packages/fastdeploy/c_lib_wrap.py", line 168, in <module>
    raise RuntimeError(f"FastDeploy initalized failed! Error: {e}")
RuntimeError: FastDeploy initalized failed! Error: /home/dell/anaconda3/envs/OCR/lib/python3.9/site-packages/fastdeploy/libs/third_libs/paddle_inference/paddle/lib/libpaddle_inference.so: undefined symbol: _ZN3phi23FusedLayerNormInferMetaERKNS_10MetaTensorES2_S2_S2_S2_ffififfPS0_S3_S3_S3_

@1274739295
Author

Additional note: when I comment out the document orientation classification model's inference in the .py file above and only use the general OCR pipeline's high-performance inference, inference works normally.

The full code of the .py file is as follows:

from paddlex import create_model, create_pipeline

cls_model = create_model("PP-LCNet_x1_0_doc_ori")

ocr_model = create_pipeline(pipeline="./my_path/OCR.yaml", use_hpip=True,
                            hpi_params={"serial_number": "6D34-19B1-49BB-BC8B"})
ocr_output = ocr_model.predict("./dataset/text_image_orientation/val/img_1_1386.png")

# cls_output = cls_model.predict("./dataset/text_image_orientation/val/img_1_1386.png")

for res in ocr_output:
    res.print()

# for res in cls_output:
#     print("分类")
#     res.print()

Running this .py file, inference proceeds normally.

Question: can a pipeline using high-performance inference and a single model not using high-performance inference coexist in the same project?
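
An undefined-symbol error from libpaddle_inference.so usually indicates that two incompatible Paddle runtimes end up loaded in the same process, here presumably the regular PaddlePaddle libraries pulled in by the plain create_model call and the paddle_inference build bundled with FastDeploy. One possible workaround, offered only as an assumption and not as an official recommendation, is to run the non-HPI single model in a separate process so the two runtimes never share an interpreter. A minimal sketch:

import multiprocessing as mp

def run_cls(image_path):
    # Import PaddleX only inside the child process so the non-HPI Paddle
    # runtime is never loaded alongside FastDeploy in the parent process.
    from paddlex import create_model
    cls_model = create_model("PP-LCNet_x1_0_doc_ori")
    for res in cls_model.predict(image_path):
        res.print()

if __name__ == "__main__":
    img = "./dataset/text_image_orientation/val/img_1_1386.png"
    ctx = mp.get_context("spawn")  # spawn avoids inheriting already-loaded libraries
    p = ctx.Process(target=run_cls, args=(img,))
    p.start()
    p.join()

    # The high-performance OCR pipeline then runs in the main process as before.
    from paddlex import create_pipeline
    ocr_model = create_pipeline(pipeline="./my_path/OCR.yaml", use_hpip=True,
                                hpi_params={"serial_number": "6D34-19B1-49BB-BC8B"})
    for res in ocr_model.predict(img):
        res.print()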

@1274739295
Author

Could you please take a look at my question above? Thanks.

@cuicheng01
Collaborator

Would there be any problem if neither side used high-performance inference?
