
Deploy YOLOv8 Segmentation model #77

Open
@Flippchen

Description

Hi there,

You have a tutorial showing how to use the yolov8 library (https://clear.ml/docs/latest/docs/integrations/yolov8/) with ClearML, and you state that these models are easy to use. I have a few questions:

  1. I am using the yolov8-seg model. I have exported it to ONNX and would like to deploy it on the Triton inference server. I think this is the intended way, am I right?
  2. When I try to deploy it, the clearml-serving CLI only lets me specify one output dimension for the model. But the model has two outputs; if I pass two outputs in the CLI command, the first one gets overwritten. Is this a bug, or am I doing something wrong?
  3. If this is not possible, I have seen that I can use a custom model/Preprocess together with the ultralytics library and do the inference myself outside of Triton. In that case, is it possible to keep the model persistent, e.g. as an instance variable of the Preprocess class, or does the model get reloaded on every request?
  4. If Triton is the preferred way because yolov8 supports direct Triton inference, does that work with clearml-serving, or does the wrapper built around it prevent this?
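To make question 2 concrete, the registration command I am attempting has roughly this shape. Everything here is a placeholder sketch: the service id, endpoint name, tensor names, and shapes are assumptions (`output0`/`output1` are the output names a YOLOv8-seg ONNX export typically produces), and the repeated `--output-*` flags are exactly the part that seems to get overwritten:

```shell
# Sketch of registering the ONNX model on the Triton engine via clearml-serving.
# Endpoint name, tensor names, and sizes below are assumptions; a YOLOv8-seg
# ONNX export typically has two outputs (detections + mask prototypes).
clearml-serving --id <service_id> model add \
    --engine triton \
    --endpoint "yolov8_seg" \
    --input-name "images" --input-type float32 --input-size 3 640 640 \
    --output-name "output0" --output-type float32 --output-size -1 -1 \
    --output-name "output1" --output-type float32 --output-size -1 -1 -1
# As described above, the second --output-name appears to overwrite the
# first rather than registering a second output.
```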

Thanks in advance. Maybe you can provide an example of how to do it :)
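Regarding question 3, here is a minimal sketch of what I imagine a persistent custom `preprocess.py` could look like. It assumes clearml-serving instantiates the Preprocess class once per serving worker and reuses that instance across requests (the method names follow the custom-engine examples; doing inference via ultralytics outside Triton is an assumption):

```python
# Sketch of a custom preprocess.py for clearml-serving's custom engine.
# Assumption: clearml-serving creates one Preprocess instance per serving
# process and reuses it for every request, so a model stored on `self`
# persists between calls and is NOT reloaded each time.
from typing import Any, Optional


class Preprocess:
    def __init__(self) -> None:
        # Runs once when the endpoint worker starts.
        self._model: Optional[Any] = None

    def load(self, local_file_name: str) -> None:
        # Called once with the downloaded model file; keeping the loaded
        # model on self lets later requests reuse it.
        # ultralytics usage is an assumption (inference outside Triton).
        from ultralytics import YOLO
        self._model = YOLO(local_file_name)

    def process(self, data: Any, state: dict,
                collect_custom_statistics_fn=None) -> Any:
        # Runs per request; self._model is already loaded at this point.
        return self._model(data)
```

If the instance really is reused, `load` runs once and every `process` call hits the already-loaded model, which is the persistence behavior I am asking about.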
