Replies: 1 comment
-
Apparently there is an open issue on this subject which has not yet been resolved (open since September 2024). I would be grateful if anyone at Cohere or LlamaIndex could address this. See: #15843
-
I have been trying to replicate the event-driven, RAG-based workflow system recently demonstrated by LlamaIndex:
https://learn.deeplearning.ai/courses/event-driven-agentic-document-workflows/lesson/g2gfb/building-a-workflow
I am having trouble getting a response from either the OpenAI LLM or the Cohere LLM. OpenAI complains about "exceeding quota" even though only a single request is made, while Cohere appears not to have implemented the `astream_complete()` method so that it returns a coroutine [not exactly sure about this, but it looks that way]. Using Cohere:
```
llama_index.core.workflow.errors.WorkflowRuntimeError: Error in step "step_two": object async_generator cannot be used in "await" expression
```
Here's the code and the line that causes this issue:
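The failure mode itself can be reproduced with a stdlib-only sketch, with no LlamaIndex or Cohere installed. All names below (`fake_astream_complete`, `broken_step`) are stand-ins, not real API calls: the point is simply that `await`-ing an async *generator*, rather than a coroutine, raises exactly this `TypeError`.

```python
import asyncio

async def fake_astream_complete(prompt: str):
    # An async *generator*, which is what the Cohere integration appears
    # to hand back instead of a coroutine.
    for token in ("Hello", ", ", "world"):
        yield token
        await asyncio.sleep(0)

async def broken_step():
    # Mirrors `response = await llm.astream_complete(prompt)` when the
    # call returns an async generator: the `await` itself blows up.
    return await fake_astream_complete("Hi")

try:
    asyncio.run(broken_step())
except TypeError as exc:
    print(exc)  # -> object async_generator can't be used in 'await' expression
```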
It appears Cohere may not have implemented the `astream_complete()` function quite right, as it may not be returning a coroutine. This same code, when using the OpenAI LLM, complains about exceeding the quota even though only a single call is made to the LLM, albeit we do pull the response from the LLM in "delta" sections.
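Until the integration is fixed, one possible workaround is to normalize whatever the provider hands back before iterating. This is only a sketch: `consume_stream` and the two stand-in producers are illustrative names, not part of any LlamaIndex or Cohere API. It accepts either a coroutine that resolves to an async iterator (the documented `astream_complete` behaviour) or an async generator directly (what Cohere appears to return):

```python
import asyncio
import inspect

async def consume_stream(maybe_stream):
    """Drain a streaming completion whether we were given a coroutine
    resolving to an async iterator, or an async generator directly."""
    if inspect.iscoroutine(maybe_stream):
        # Documented behaviour: await the coroutine to get the iterator.
        maybe_stream = await maybe_stream
    chunks = []
    async for chunk in maybe_stream:
        # With a real LlamaIndex response you would read `chunk.delta`
        # here; these stand-ins yield plain strings.
        chunks.append(chunk)
    return chunks

# Stand-ins for the two provider behaviours:
async def as_async_gen():
    for part in ("a", "b"):
        yield part

async def as_coroutine():
    return as_async_gen()

print(asyncio.run(consume_stream(as_async_gen())))  # ['a', 'b']
print(asyncio.run(consume_stream(as_coroutine())))  # ['a', 'b']
```

Note this only papers over the symptom in the calling step; the underlying mismatch in the Cohere integration would still need a fix upstream.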