Add pre-post processing yolo_detection from onnxruntime-extensions #471

Open

ductridev opened this issue Oct 23, 2024 · 3 comments
ductridev commented Oct 23, 2024

Environment:

numpy                  2.1.2
onnx                   1.17.0
onnxruntime            1.19.2
onnxruntime_extensions 0.12.0
onnxslim               0.1.35
torch                  2.5.0
torchvision            0.20.0
ultralytics            8.3.20
ultralytics-thop       2.0.9

I'm trying to add pre/post processing to an exported ONNX model by following this example.

This is my code:

from ultralytics import YOLO
import os
from pathlib import Path
from onnxruntime_extensions.tools import add_pre_post_processing_to_model as add_ppp
import onnxruntime as ort
from onnxruntime_extensions import get_library_path

def add_pre_post_processing_to_yolo(input_model_file: Path, output_model_file: Path, num_classes: int = 80):
    """Construct the pipeline for an end2end model with pre and post processing. 
    The final model can take raw image binary as inputs and output the result in raw image file.

    Args:
        input_model_file (Path): The onnx yolo model.
        output_model_file (Path): where to save the final onnx model.
    """
    add_ppp.yolo_detection(
        input_model_file, output_model_file, "jpg", num_classes=num_classes, input_shape=(640, 640))


WEIGHT_DIR = os.getcwd() + "/id_verification/weights"

# Load the corner model
corner_model = YOLO(f"{WEIGHT_DIR}/cccd_corner.pt")

# Export the model to ONNX format
corner_model_exported_path = Path(corner_model.export(format="onnx", opset=16, simplify=True))

# Check if model exported
if not corner_model_exported_path.exists():
    raise FileNotFoundError(f"Cannot find model at {corner_model_exported_path.as_uri()}")

# Add pre-post to onnx model path
corner_e2e_model_path = corner_model_exported_path.with_suffix(suffix=".pre-post.onnx")

print("Adding pre/post processing...")
add_pre_post_processing_to_yolo(
    corner_model_exported_path, corner_e2e_model_path, num_classes=len(corner_model.names))

print("Testing exported model...")
session_options = ort.SessionOptions()
session_options.register_custom_ops_library(get_library_path())
session_options.enable_profiling = True
ort_session = ort.InferenceSession(
    corner_e2e_model_path,
    sess_options=session_options,
    providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])

But this error occurred:

onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from exported_model.onnx failed:Node (post_process_3) Op (Split) [ShapeInferenceError] Mismatch between the sum of 'split' (9) and the split dimension of the input (6)
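
For reference, the shape that the post-processing Split node complains about can be checked by printing the output shape of the plain exported model (before pre/post processing is added). A minimal sketch; "cccd_corner.onnx" is a placeholder for the actual exported file:

import onnx

# Minimal sketch: print the output shape of the plain exported YOLO model,
# i.e. the model that feeds the added post-processing steps.
# "cccd_corner.onnx" is a placeholder for the actual exported file.
model = onnx.load("cccd_corner.onnx")
for output in model.graph.output:
    dims = [d.dim_value if d.dim_value else d.dim_param
            for d in output.type.tensor_type.shape.dim]
    print(output.name, dims)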

Any ideas about this problem?

@skyler9901

I've got the same problem as you. Have you solved it?

@ductridev
Author

I had to add the pre/post processing step by myself.
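
Roughly, that means keeping the pre/post processing in plain Python around a normal InferenceSession instead of baking it into the ONNX graph. A minimal sketch, assuming the standard ultralytics YOLOv8 export layout of (1, 4 + num_classes, num_anchors) with (cx, cy, w, h) boxes; the file names and thresholds below are placeholders, not taken from this issue:

from pathlib import Path

import cv2
import numpy as np
import onnxruntime as ort


def preprocess(image_path: Path, size: int = 640):
    # Letterbox to size x size: scale the longer side to `size`, pad bottom/right,
    # convert HWC BGR uint8 -> NCHW RGB float32 in [0, 1].
    img = cv2.imread(str(image_path))
    h, w = img.shape[:2]
    scale = size / max(h, w)
    resized = cv2.resize(img, (int(w * scale), int(h * scale)))
    padded = np.zeros((size, size, 3), dtype=np.uint8)
    padded[: resized.shape[0], : resized.shape[1]] = resized
    blob = padded[:, :, ::-1].transpose(2, 0, 1)[None].astype(np.float32) / 255.0
    return np.ascontiguousarray(blob), scale


def postprocess(output, scale, conf_thres=0.25, iou_thres=0.45):
    # output: (1, 4 + num_classes, num_anchors) -> (num_anchors, 4 + num_classes)
    preds = output[0].T
    boxes, scores = preds[:, :4], preds[:, 4:]
    class_ids = scores.argmax(axis=1)
    confidences = scores.max(axis=1)
    keep = confidences > conf_thres
    boxes, class_ids, confidences = boxes[keep], class_ids[keep], confidences[keep]
    # (cx, cy, w, h) -> (x, y, w, h) in original-image coordinates for NMS.
    xywh = boxes.copy()
    xywh[:, 0] -= xywh[:, 2] / 2
    xywh[:, 1] -= xywh[:, 3] / 2
    xywh /= scale
    idx = cv2.dnn.NMSBoxes(xywh.tolist(), confidences.tolist(), conf_thres, iou_thres)
    return [(xywh[i], int(class_ids[i]), float(confidences[i])) for i in np.array(idx).flatten()]


# Placeholder paths; no custom ops library is needed for the plain exported model.
session = ort.InferenceSession("cccd_corner.onnx", providers=["CPUExecutionProvider"])
blob, scale = preprocess(Path("sample.jpg"))
raw = session.run(None, {session.get_inputs()[0].name: blob})[0]
detections = postprocess(raw, scale)
print(detections)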

@skyler9901

I know. Thank you!
