Closed

Labels: ext: references, framework: pytorch, topic: object detection, topic: text detection, topic: text recognition
Description
As discussed, we need to provide means to accurately evaluate the latency of models in a given environment. This comes with several challenges and questions:
- 1 script per framework?
- 1 script per task? (text detection, object detection, text recognition)
- should we benchmark the full predictor or only the DL model? (for OCR, for instance, we don't have an end-to-end DL model, so only the predictor could be benchmarked)
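
To make the last point concrete, the difference is in what gets timed: the raw forward pass of the model versus the full predictor, which also includes pre- and post-processing. Below is a minimal sketch for text detection on the PyTorch backend; the `db_resnet50` / `detection_predictor` entry points and the input shapes are assumptions for illustration, not a prescription:

```python
import time

import numpy as np
import torch

from doctr.models import db_resnet50, detection_predictor  # assumed entry points

# Option A: DL model only, i.e. the forward pass on an already-preprocessed tensor
model = db_resnet50(pretrained=False).eval()  # weights don't matter for latency
dummy_tensor = torch.rand((1, 3, 1024, 1024))
with torch.no_grad():
    start = time.perf_counter()
    _ = model(dummy_tensor)
    model_latency = time.perf_counter() - start

# Option B: full predictor, i.e. preprocessing + forward pass + postprocessing
predictor = detection_predictor(arch="db_resnet50", pretrained=False)
dummy_page = (255 * np.random.rand(1024, 1024, 3)).astype(np.uint8)
start = time.perf_counter()
_ = predictor([dummy_page])
predictor_latency = time.perf_counter() - start

print(f"model only: {1000 * model_latency:.2f}ms | predictor: {1000 * predictor_latency:.2f}ms")
```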
I would suggest focusing on mid-level tasks (excluding OCR for now, since we don't have an end-to-end architecture) and implementing the following scripts (a rough sketch of what one could look like is given after the list):
- Text Detection benchmark script (feat: Added latency evaluation scripts for all tasks #746)
- Text Recognition benchmark script (feat: Added latency evaluation scripts for all tasks #746)
- Object Detection benchmark script (feat: Added latency evaluation scripts for all tasks #746)
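
For reference, here is one possible shape for such a script, shown for text detection on the PyTorch backend: a few warm-up iterations followed by timed runs, with aggregate statistics at the end. This is only a sketch under those assumptions, not the implementation from #746; the `detection.__dict__` arch lookup and the CLI flags are illustrative.

```python
import argparse
import time

import numpy as np
import torch

from doctr.models import detection  # assumed module layout


@torch.no_grad()
def benchmark(model: torch.nn.Module, input_shape, it: int = 100, warmup: int = 10) -> np.ndarray:
    """Run `warmup` untimed forward passes, then time `it` passes and return latencies in ms."""
    x = torch.rand((1, *input_shape))
    for _ in range(warmup):
        _ = model(x)
    timings = []
    for _ in range(it):
        start = time.perf_counter()
        _ = model(x)
        timings.append(1000 * (time.perf_counter() - start))
    return np.asarray(timings)


def main():
    parser = argparse.ArgumentParser(description="docTR latency benchmark (text detection, PyTorch)")
    parser.add_argument("arch", type=str, help="architecture to benchmark, e.g. db_resnet50")
    parser.add_argument("--size", type=int, default=1024, help="input spatial size")
    parser.add_argument("--it", type=int, default=100, help="number of timed iterations")
    args = parser.parse_args()

    # Illustrative arch lookup from the detection zoo; weights are irrelevant for latency
    model = detection.__dict__[args.arch](pretrained=False).eval()
    timings = benchmark(model, (3, args.size, args.size), it=args.it)
    print(
        f"{args.arch} ({args.size}x{args.size}): mean {timings.mean():.2f}ms, "
        f"std {timings.std():.2f}ms, p95 {np.percentile(timings, 95):.2f}ms"
    )


if __name__ == "__main__":
    main()
```

The recognition and object-detection variants would mostly differ in the model zoo they pull from and the default input shape (e.g. small fixed-size crops for recognition), so the three scripts could share the `benchmark` helper.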
What do you think @charlesmindee?