Description
Preliminary Checks
- This issue is not a duplicate. Before opening a new issue, please search existing issues.
- This issue is not a question, bug report, or anything other than a feature request directly related to this project.
Proposal
Support for YOLO-Worldv2 models could be added to the "Custom Object Detection with YOLO-like ONNX model file" feature. I have tried using a persisting model saved with the following method provided by Ultralytics, but no objects are detected when launching display_zed_cam.launch.py. The saved PT file was exported to ONNX through the CLI as described in the README:
```python
from ultralytics import YOLO

# Initialize a YOLO-World model
model = YOLO("yolov8s-worldv2.pt")  # or select yolov8m/l-worldv2.pt

# Define custom classes
model.set_classes(["person", "bus"])

# Save the model with the defined offline vocabulary
model.save("custom_yolov8s.pt")
```
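For reference, this is the standard Ultralytics CLI export I used to convert the saved PT file to ONNX (the exact flags in the README may differ; `imgsz` here is my assumption, matching the wrapper's expected input size):

```shell
# Export the persisting YOLO-World model saved above to ONNX.
# Produces custom_yolov8s.onnx next to the input file.
yolo export model=custom_yolov8s.pt format=onnx imgsz=640
```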
Ultralytics' documentation for YOLO-World states that models saved with the above method behave like any other pre-trained YOLOv8 model, so I am unsure whether the exported ONNX models should already work with the ZED ROS 2 wrapper as-is.
Use-Case
This would allow persisting YOLO-World models to be used with the ROS 2 plugin that visualizes the Object Detection results, hopefully making zero-shot detection with dynamic custom classes possible on the ZED stereo cameras.
Anything else?
No response