Prompt caching for foundational models #2718
edmundhighcock started this conversation in Ideas
Replies: 0 comments
Hi everyone,
I'm thinking about implementing prompt caching when querying providers such as Bedrock, to reduce costs by reusing cached tokens.
Is anyone working on this? Or is this feature already available and I have just missed it?
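For context, here is a minimal sketch of how prompt caching can be requested through Bedrock's Converse API, which marks a cacheable prefix with a `cachePoint` content block. The model ID and prompt strings are illustrative placeholders, and this only builds the request payload; an actual call would go through a `boto3` `bedrock-runtime` client.

```python
# Sketch: a Converse API payload that asks Bedrock to cache the system prompt.
# The cachePoint block marks a prefix boundary; tokens before it are eligible
# for caching and can be reused (at reduced cost) on subsequent calls.

def build_converse_request(system_prompt: str, user_message: str) -> dict:
    """Build a Converse payload with a cache checkpoint after the system prompt."""
    return {
        "modelId": "anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder
        "system": [
            {"text": system_prompt},
            # Everything up to this marker becomes the cached prefix.
            {"cachePoint": {"type": "default"}},
        ],
        "messages": [
            {"role": "user", "content": [{"text": user_message}]},
        ],
    }

request = build_converse_request("You are a helpful assistant.", "Hello!")
# With boto3 this would be sent roughly as:
#   client = boto3.client("bedrock-runtime")
#   response = client.converse(**request)
# The response's "usage" section then reports cache activity
# (cacheReadInputTokens / cacheWriteInputTokens).
```

Note that prompt caching on Bedrock is only supported for certain models and regions, so availability would need to be checked per model.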