Support for YOLO-Worldv2 models alongside currently supported YOLO-like ONNX model files #281

@TimC2225

Description

Preliminary Checks

  • This issue is not a duplicate. Before opening a new issue, please search existing issues.
  • This issue is not a question, bug report, or anything other than a feature request directly related to this project.

Proposal

Support for YOLO-Worldv2 models could be added to the "Custom Object Detection with YOLO-like ONNX model file" feature. I have tried a persisting model saved with the following method provided by Ultralytics, but it detects no objects when I launch display_zed_cam.launch.py. The saved PT file was exported to ONNX through the CLI as described in the README.

from ultralytics import YOLO

# Initialize a YOLO-World model
model = YOLO("yolov8s-worldv2.pt")  # or select yolov8m/l-worldv2.pt

# Define custom classes
model.set_classes(["person", "bus"])

# Save the model with the defined offline vocabulary
model.save("custom_yolov8s.pt")

Ultralytics' documentation for YOLO-World states that models saved with the above method behave like any other pre-trained YOLOv8 model, so I am unsure whether the saved persisting YOLO-Worldv2 models should already work with the ZED ROS 2 wrapper once exported to ONNX.
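For reference, the CLI export step I used presumably looks something like the sketch below. The exact arguments in the wrapper's README may differ; the opset value here is my own assumption, not taken from the README:

```shell
# Export the saved (persisting) model to ONNX with the Ultralytics CLI.
# NOTE: opset=12 is an assumption for illustration, not a value from the README.
yolo export model=custom_yolov8s.pt format=onnx opset=12
```

This should write custom_yolov8s.onnx next to the PT file, which is the file I then point the wrapper's custom object detection configuration at.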

Use-Case

This would allow persisting YOLO-World models to be used with the ROS 2 plugin that visualizes the Object Detection results, hopefully making zero-shot detection with dynamic custom classes possible on the ZED stereo cameras.

Anything else?

No response
