Replies: 1 comment
-
@zcasler0 hello! It's great to hear you're exploring the capabilities of YOLOv8. Ideally, the model predicts bounding boxes that lie within the image domain, but depending on an object's position and the prediction process, raw predictions can extend beyond the image boundaries. In YOLOv8, predictions are typically clipped to the image boundaries during post-processing, so if you're observing boxes outside the image domain, that may be an edge case or a bug. Segmentation masks, by contrast, are inherently bound to the image domain, which is why you're seeing the expected behavior there. If you're consistently seeing this with bounding boxes, I'd recommend reviewing the post-processing steps to confirm the clipping is being applied correctly. If the problem persists, please raise an issue on the repo with details of your observations, and we'll be happy to look into it further. You can also check out the Predict mode documentation on our Docs for more insight into how predictions are handled. 😊🔍 Keep up the great work, and thank you for being part of the YOLO community!
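If you need to enforce this yourself after prediction, a minimal sketch of boundary clipping might look like the following. This is not the library's internal implementation, just an illustration of clamping `(x1, y1, x2, y2)` boxes to the image size with NumPy; the function name `clip_boxes_xyxy` and the sample values are hypothetical.

```python
import numpy as np

def clip_boxes_xyxy(boxes, img_h, img_w):
    """Clamp an (N, 4) array of (x1, y1, x2, y2) boxes to the image bounds."""
    boxes = np.asarray(boxes, dtype=float).copy()
    boxes[:, [0, 2]] = boxes[:, [0, 2]].clip(0, img_w)  # x-coords to [0, width]
    boxes[:, [1, 3]] = boxes[:, [1, 3]].clip(0, img_h)  # y-coords to [0, height]
    return boxes

# Example: a box spilling past the left and right edges of a 100x100 image
preds = np.array([[-5.0, 10.0, 120.0, 90.0]])
clipped = clip_boxes_xyxy(preds, img_h=100, img_w=100)
# clipped is [[0.0, 10.0, 100.0, 90.0]]
```

You could apply the same idea to the `boxes.xyxy` tensor of a YOLOv8 `Results` object before using the coordinates downstream.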
-
I have not been able to find an answer despite my best research, so I'm asking anyone with experience in YOLO predictions: is there any way to make the predicted boxes stay within the image domain, or to clamp a box to the first pixel at the left or right edge of the image? Segmentation does not seem to behave the same way; those predictions all stay within the image domain.