
[scripts] Adding scripts to easily evaluate the latency of models #699

Closed

Description

@fg-mindee

As discussed, we need to provide means to accurately evaluate the latency of models in a given environment. This comes with several challenges and questions:

  • 1 script per framework?
  • 1 script per task? (text detection, object detection, text recognition)
  • should we benchmark the predictor or the DL model only? (for OCR, for instance, we don't have any end-to-end DL model, so only the predictor can be benchmarked there)

I would suggest focusing on mid-level tasks (excluding OCR for now, since we don't have an end-to-end architecture) and implementing dedicated scripts for those.
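
To make the discussion concrete, here is a minimal sketch of what the timing core of such a script could look like, assuming the PyTorch backend (a TensorFlow variant would follow the same pattern). `measure_latency` and its defaults are illustrative, not an existing docTR API:

```python
import time

import numpy as np
import torch


@torch.no_grad()
def measure_latency(model, input_shape=(1, 3, 1024, 1024), warmup=10, runs=100):
    """Time forward passes of a PyTorch model and return latency stats in ms."""
    model.eval()
    device = next(model.parameters()).device
    dummy = torch.rand(input_shape, device=device)

    # Warmup runs to exclude one-time costs (lazy init, cuDNN autotuning, ...)
    for _ in range(warmup):
        model(dummy)

    timings = []
    for _ in range(runs):
        if device.type == "cuda":
            torch.cuda.synchronize()  # flush queued kernels before starting the clock
        start = time.perf_counter()
        model(dummy)
        if device.type == "cuda":
            torch.cuda.synchronize()  # ensure the forward pass has actually completed
        timings.append((time.perf_counter() - start) * 1000)

    timings = np.array(timings)
    return timings.mean(), timings.std(), np.percentile(timings, 95)
```

This could be called on e.g. a detection model such as `db_resnet50`; timing the full predictor instead would wrap the predictor call the same way, so pre- and post-processing are included in the measurement. The explicit `torch.cuda.synchronize()` calls matter on GPU: kernel launches are asynchronous, so wall-clock timing without synchronization would under-report latency.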

What do you think @charlesmindee?
