
gpt-3.5-turbo #9

Open
MyraBaba opened this issue Jul 3, 2024 · 1 comment
MyraBaba commented Jul 3, 2024

How can I use gpt-3.5-turbo instead of GPT-4?

Best

```python
server = 'openai'
model = 'gpt-3.5-turbo'
model_endpoint = None
```

MyraBaba commented Jul 3, 2024

Even the original gpt-4o gives the error below in Chainlit:

```
model
none is not an allowed value (type=type_error.none.not_allowed)
Traceback (most recent call last):
  File "/usr/local/Caskroom/miniconda/base/envs/agent_env/lib/python3.11/site-packages/chainlit/utils.py", line 40, in wrapper
    return await user_function(**params_values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tulpar/Projects/ODS/graph_websearch_agent/app/chat.py", line 194, in main
    response = await cl.make_async(chat_workflow.invoke_workflow)(message)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Caskroom/miniconda/base/envs/agent_env/lib/python3.11/site-packages/asyncer/_main.py", line 358, in wrapper
    return await anyio.to_thread.run_sync(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Caskroom/miniconda/base/envs/agent_env/lib/python3.11/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Caskroom/miniconda/base/envs/agent_env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "/usr/local/Caskroom/miniconda/base/envs/agent_env/lib/python3.11/asyncio/futures.py", line 287, in __await__
    yield self  # This tells Task to wait for completion.
    ^^^^^^^^^^
  File "/usr/local/Caskroom/miniconda/base/envs/agent_env/lib/python3.11/asyncio/tasks.py", line 349, in __wakeup
    future.result()
  File "/usr/local/Caskroom/miniconda/base/envs/agent_env/lib/python3.11/asyncio/futures.py", line 203, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "/usr/local/Caskroom/miniconda/base/envs/agent_env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tulpar/Projects/ODS/graph_websearch_agent/app/chat.py", line 61, in invoke_workflow
    for event in self.workflow.stream(dict_inputs, limit):
  File "/usr/local/Caskroom/miniconda/base/envs/agent_env/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 963, in stream
    _panic_or_proceed(done, inflight, step)
  File "/usr/local/Caskroom/miniconda/base/envs/agent_env/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 1489, in _panic_or_proceed
    raise exc
  File "/usr/local/Caskroom/miniconda/base/envs/agent_env/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Caskroom/miniconda/base/envs/agent_env/lib/python3.11/site-packages/langgraph/pregel/retry.py", line 66, in run_with_retry
    task.proc.invoke(task.input, task.config)
  File "/usr/local/Caskroom/miniconda/base/envs/agent_env/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2406, in invoke
    input = step.invoke(input, config, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Caskroom/miniconda/base/envs/agent_env/lib/python3.11/site-packages/langgraph/utils.py", line 95, in invoke
    ret = context.run(self.func, input, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tulpar/Projects/ODS/graph_websearch_agent/agent_graph/graph.py", line 47, in
    ).invoke(
      ^^^^^^^
  File "/Users/tulpar/Projects/ODS/graph_websearch_agent/agents/agents.py", line 92, in invoke
    llm = self.get_llm()
          ^^^^^^^^^^^^^^
  File "/Users/tulpar/Projects/ODS/graph_websearch_agent/agents/agents.py", line 33, in get_llm
    return get_open_ai_json(model=self.model, temperature=self.temperature) if json_model else get_open_ai(model=self.model, temperature=self.temperature)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tulpar/Projects/ODS/graph_websearch_agent/models/openai_models.py", line 18, in get_open_ai_json
    llm = ChatOpenAI(
          ^^^^^^^^^^^
  File "/usr/local/Caskroom/miniconda/base/envs/agent_env/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ChatOpenAI
model
  none is not an allowed value (type=type_error.none.not_allowed)
```
