As a reference, this is how I currently have to handle OpenAI's stream responses in my backend and send them back to the frontend:
```ts
const textStream = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [
    { role: 'system', content: SYSTEM_PROMT },
    { role: 'user', content: userText }
  ],
  stream: true
});

const encoder = new TextEncoder();

return new Response(
  new ReadableStream({
    async start(controller) {
      // Logic to handle each chunk from the original stream
      for await (const chunk of textStream) {
        // Get content from the chunk per the OpenAI API response structure
        const message = chunk.choices[0]?.delta?.content || '';
        controller.enqueue(encoder.encode(message));
      }
      // Close the stream once all chunks are processed
      controller.close();
    },
    cancel() {
      console.log('cancel and abort');
    }
  }),
  {
    headers: {
      'cache-control': 'no-cache',
      'Content-Type': 'text/event-stream'
    }
  }
);
```
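For context, here is a minimal sketch of how the frontend could consume that streamed response (the `/api/chat` route and request body shape are placeholders, not part of this issue):

```ts
// Minimal sketch (assumption): the /api/chat route and request body are placeholders.
const res = await fetch('/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ text: userText })
});

if (!res.body) throw new Error('Response has no body');

const reader = res.body.getReader();
const decoder = new TextDecoder();
let fullText = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // Append each decoded chunk as it arrives so the UI can update incrementally
  fullText += decoder.decode(value, { stream: true });
}
```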
Ideally, the API should return a ReadableStream, so all I would need to do is wrap it in a Response.
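A minimal sketch of what that could look like, assuming the library exposed a hypothetical streaming call (`client.generate` and its options are placeholders, not this library's actual API):

```ts
// Minimal sketch (assumption): `client.generate` is a hypothetical placeholder
// for whatever streaming call this library would expose. If it returned a web
// ReadableStream of encoded text, the handler would reduce to wrapping it:
const stream: ReadableStream<Uint8Array> = await client.generate({
  prompt: userText,
  stream: true
});

return new Response(stream, {
  headers: {
    'cache-control': 'no-cache',
    'Content-Type': 'text/event-stream'
  }
});
```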
WHAT?
Add streaming support for API responses.
WHY?
Streaming improves the user experience for long or slow completions, since tokens can be shown as they arrive instead of waiting for the full response.
Additional requirements
REFERENCE
OpenAI supports this with Chat Completions and the Assistants API.
Reference: https://platform.openai.com/docs/api-reference/streaming