This repository has been archived by the owner on Oct 23, 2023. It is now read-only.

How to get the response as a stream of chunks, without waiting for the full response? #23

Answered by the-csaba
AndrewBPC asked this question in Q&A

Hi @AndrewBPC

You can set the stream parameter on the completions endpoint.

https://platform.openai.com/docs/api-reference/completions/create

When you specify stream=true in the request, retrieve the response with the getResponse() method rather than toModel() or toArray().

You can then access the response body's stream interface and loop over it, reading the output in chunks as they arrive.

Unfortunately, we don't have detailed documentation on this yet, but hopefully it points you in the right direction.
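
Roughly, a minimal sketch of the idea (untested, and assuming the client setup shown in the README): the model name and prompt are placeholders, and the Guzzle `'stream' => true` option is my assumption so the HTTP client doesn't buffer the whole body before you can read it.

```php
<?php

require_once 'vendor/autoload.php';

use Tectalic\OpenAi\Authentication;
use Tectalic\OpenAi\Manager;

// Assumption: 'stream' => true tells Guzzle not to buffer the response body,
// so chunks can be read as the API sends them.
$client = Manager::build(
    new \GuzzleHttp\Client(['stream' => true]),
    new Authentication(getenv('OPENAI_API_KEY'))
);

// Request a completion with streaming enabled (placeholder model/prompt).
$handler = $client->completions()->create(
    new \Tectalic\OpenAi\Models\Completions\CreateRequest([
        'model'  => 'text-davinci-003',
        'prompt' => 'Write a haiku about streaming.',
        'stream' => true,
    ])
);

// Use getResponse() (a PSR-7 response) instead of toModel()/toArray(),
// then read the body stream in chunks as they arrive.
$body = $handler->getResponse()->getBody();

while (!$body->eof()) {
    echo $body->read(1024);
    flush();
}
```

Each chunk the API sends is a server-sent event of the form `data: {...json...}`, with the stream terminated by `data: [DONE]`, so in practice you'd split on those lines and JSON-decode each payload rather than echoing raw bytes.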

Answer selected by AndrewBPC