Conversation

@sramshetty
Contributor

Modified CoCa generation to allow more straightforward batching:
From:

with torch.no_grad(), torch.cuda.amp.autocast():
    generated = model.generate(imgs)

To:

with torch.no_grad(), torch.cuda.amp.autocast():
    generated = model.generate(imgs, device=device, batch_size=20)

OR

with torch.no_grad(), torch.cuda.amp.autocast():
    generated = model.generate(imgs, device=device)
  • device: Required argument. The previous implementation inferred the device from the image input, which would raise an error if the model and images were on different devices. With this update, users can set generation to run on the same device as the model. Additionally, there was previously no guarantee that the image and text were on the same device when text was passed as an argument.
  • batch_size: Optional argument that iterates over the input images and texts in chunks of the given size. If `batch_size=None`, the full input size is used as the batch.
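The `batch_size` semantics above can be sketched with a small, framework-agnostic helper. This is a hypothetical illustration (the name `iter_batches` is not part of the PR); it only demonstrates the chunking behavior: slices of the given size, with `None` meaning "one batch of everything", mirroring how `generate` would iterate internally.

```python
def iter_batches(items, batch_size=None):
    """Yield successive slices of `items` of length `batch_size`.

    If batch_size is None, the whole input is yielded as a single
    batch, matching the PR's fallback of using the input size.
    """
    n = len(items)
    if batch_size is None:
        batch_size = n
    for start in range(0, n, batch_size):
        # The final slice may be shorter than batch_size.
        yield items[start:start + batch_size]


# Example: 5 inputs with batch_size=2 produce chunks of 2, 2, 1.
chunks = list(iter_batches(list(range(5)), batch_size=2))
# With batch_size=None, everything is processed at once.
single = list(iter_batches(list(range(5))))
```

In the actual `generate` call, each chunk would be moved to `device` and decoded before the results are concatenated, so peak memory scales with `batch_size` rather than the full input size.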
