Evaluate lower onnx latency for Gdino #18

@rhysdg

Description

  • Currently ONNX inference is coming in at roughly 3x slower.
  • Inference time has been cut almost in half by switching to the TensorrtExecutionProvider, compared to plain ONNX Runtime with the CUDAExecutionProvider. Next up is an opset analysis, etc. A rough benchmarking sketch is included below.
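
For reference, a minimal latency-comparison sketch for the two execution providers. This is an assumption-laden illustration, not the project's actual benchmark: the model path `gdino.onnx`, the input name `images`, and the input shape are placeholders, and the real Grounding DINO graph also takes tokenized text inputs, so the feed would need to be adapted.

```python
import time
import numpy as np
import onnxruntime as ort

# Placeholder path; point this at the actual exported Grounding DINO model.
MODEL_PATH = "gdino.onnx"

def benchmark(providers, feed, n_warmup=5, n_runs=20):
    """Average per-run latency for a session built with the given providers."""
    sess = ort.InferenceSession(MODEL_PATH, providers=providers)
    # Warm-up runs: the TensorRT EP builds its engine on the first executions,
    # so exclude that one-time cost from the timing.
    for _ in range(n_warmup):
        sess.run(None, feed)
    start = time.perf_counter()
    for _ in range(n_runs):
        sess.run(None, feed)
    return (time.perf_counter() - start) / n_runs

if __name__ == "__main__":
    # Dummy image-only feed; input name and shape are assumptions for illustration.
    feed = {"images": np.random.rand(1, 3, 800, 800).astype(np.float32)}

    cuda_s = benchmark(["CUDAExecutionProvider"], feed)
    trt_s = benchmark(["TensorrtExecutionProvider", "CUDAExecutionProvider"], feed)
    print(f"CUDA EP:     {cuda_s * 1e3:.1f} ms/iter")
    print(f"TensorRT EP: {trt_s * 1e3:.1f} ms/iter")
```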

Labels

bug (Something isn't working)
