Image resizing and annotations in YOLOv8 #4457
Unanswered
junbro1016 asked this question in Q&A
Replies: 1 comment 1 reply
-
@HistoryDan check the documentation for the object detection label format: https://docs.ultralytics.com/datasets/detect/. Consistent input sizes can lead to better performance.
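For example, you can let YOLOv8 handle the resizing itself by training at a fixed input size. The snippet below is a minimal sketch; "data.yaml" is a placeholder for your own dataset configuration file.

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8 model (yolov8n.pt is the smallest variant).
model = YOLO("yolov8n.pt")

# imgsz=640 resizes (letterboxes) every image to 640x640 in the dataloader,
# so a consistent input size is used even if the source images vary in size.
# "data.yaml" is a placeholder for your dataset config.
model.train(data="data.yaml", epochs=100, imgsz=640)
```

Training this way avoids having to pre-resize the images yourself.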
-
Hello, I am a student participating in an object detection-related competition using the YOLOv8 model.
We downloaded an image dataset whose annotations were provided as COCO-format JSON files and converted them into YOLOv8 annotation text files with our own Python code. We also uploaded the original images and the COCO annotation files to the Roboflow platform to obtain a second dataset. We noticed that Roboflow resized the original images (which had varying sizes) to 640 x 640, while its annotation text files were the same as the ones we had created ourselves.
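For context, our conversion followed roughly the sketch below (the paths are simplified placeholders, and the 0-based remapping of COCO category ids is an assumption about the dataset):

```python
import json
from pathlib import Path

coco_json = Path("annotations/instances_train.json")  # placeholder path
out_dir = Path("labels/train")                         # placeholder path
out_dir.mkdir(parents=True, exist_ok=True)

coco = json.loads(coco_json.read_text())

# COCO category ids can be sparse (e.g. 1, 3, 7); map them to 0-based class indices.
cat_to_cls = {cid: i for i, cid in enumerate(sorted(c["id"] for c in coco["categories"]))}
images = {img["id"]: img for img in coco["images"]}

labels = {}  # image file name -> list of YOLO label lines
for ann in coco["annotations"]:
    img = images[ann["image_id"]]
    img_w, img_h = img["width"], img["height"]
    x_min, y_min, w, h = ann["bbox"]  # COCO: pixel [x_min, y_min, width, height]
    # YOLO: normalized [x_center, y_center, width, height]
    xc, yc = (x_min + w / 2) / img_w, (y_min + h / 2) / img_h
    line = f"{cat_to_cls[ann['category_id']]} {xc:.6f} {yc:.6f} {w / img_w:.6f} {h / img_h:.6f}"
    labels.setdefault(img["file_name"], []).append(line)

for file_name, lines in labels.items():
    # One .txt per image, with the same stem as the image file.
    (out_dir / (Path(file_name).stem + ".txt")).write_text("\n".join(lines) + "\n")
```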
We trained the model on the two datasets separately: one with the original images and our converted annotations, and the other with the resized images and annotations from Roboflow. Surprisingly, the model performed better on the dataset obtained from Roboflow.
Here are my questions:
Is resizing to 640 x 640 a factor that can improve the performance of the YOLOv8 model?
If it is: YOLOv8 represents bounding boxes in the (x_center, y_center, width, height) format. If resizing changes the aspect ratio of the image, wouldn't the existing annotations no longer align with the resized images? Could this lead to a decrease in performance?
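To make the second question concrete, here is a tiny example with made-up numbers of the coordinate bookkeeping I am unsure about:

```python
# Hypothetical box: 400x300 px centred at (960, 540) in a 1920x1080 image.
orig_w, orig_h = 1920, 1080
xc_px, yc_px, bw_px, bh_px = 960, 540, 400, 300

# YOLO label written against the original image (normalized by its size).
label = (xc_px / orig_w, yc_px / orig_h, bw_px / orig_w, bh_px / orig_h)

# After a plain stretch resize to 640x640, every x is scaled by 640/orig_w and
# every y by 640/orig_h; re-normalizing by the new 640x640 size gives the same values.
sx, sy = 640 / orig_w, 640 / orig_h
label_after = (xc_px * sx / 640, yc_px * sy / 640, bw_px * sx / 640, bh_px * sy / 640)

print(label)        # (0.5, 0.5, 0.2083..., 0.2777...)
print(label_after)  # identical, even though the box shape itself is now distorted
```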
I would like to ask for your insights. I am still a beginner and facing difficulties, so I would be extremely grateful for help from experts. Thank you very much!