Hello everyone, I am currently running image-captioning inference with OFA-Huge fine-tuned on COCO over a test set of about 48k images, but it is very slow because I process one image per batch (about 1 image/sec, which means roughly 13 hours to run inference on the entire dataset). Is there a way to do batched inference on my test set while still keeping beam-search generation?
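For what it's worth, beam search decodes each sample independently, so batching does not change the generated captions; fairseq's sequence generator (which OFA builds on) already supports batched beam search. A minimal sketch of the batching loop is below. The `caption_batch` wrapper is hypothetical: you would replace its body with your model's actual batched generate call on a padded batch of image tensors.

```python
from typing import Iterable, List, Sequence


def chunked(items: Sequence, batch_size: int) -> Iterable[Sequence]:
    """Yield successive batches of at most `batch_size` items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]


def caption_batch(model, image_batch, beam_size: int = 5) -> List[str]:
    """Hypothetical wrapper: run batched beam-search generation.

    With fairseq-based OFA this would collate `image_batch` into one
    padded tensor and pass it through the task's sequence generator
    in a single forward pass, returning one caption per image.
    """
    raise NotImplementedError("replace with your model's batched generate call")


def caption_dataset(model, images, batch_size: int = 16, beam_size: int = 5) -> List[str]:
    """Caption a whole dataset in batches instead of one image at a time."""
    captions: List[str] = []
    for batch in chunked(images, batch_size):
        captions.extend(caption_batch(model, batch, beam_size=beam_size))
    return captions
```

With `batch_size=16` and a GPU that fits OFA-Huge, this kind of loop typically cuts wall-clock time by close to the batch-size factor, since most of the per-image cost is kernel-launch and decoding overhead rather than the data itself.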