[Bug] Azure OpenAI O1 models don’t work #4248
Comments
Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
Hey @nikhil-pandey! 👋 I'm here to help you with any bugs, questions, or contributions you have. Let's squash those bugs together while we wait for a human maintainer. The error you're encountering is due to the use of the `max_tokens` parameter, which is not supported by this model. To resolve this issue, you need to modify the request construction in the codebase to replace `max_tokens` with `max_completion_tokens`. If you haven't explicitly set a max token limit, ensure that the default configuration or any conditional logic in the code does not automatically include `max_tokens` in the request body.
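The substitution the bot describes can be sketched as a small parameter adapter. This is a minimal illustration, not LobeChat's actual code; the `ChatParams` type, the `adaptParamsForO1` name, and the model-name check are all assumptions made for the example.

```typescript
// Hypothetical sketch: o1-series models reject `max_tokens`, so map it to
// `max_completion_tokens` (or drop it entirely if it was never set).
interface ChatParams {
  model: string;
  max_tokens?: number;
  max_completion_tokens?: number;
  [key: string]: unknown;
}

function adaptParamsForO1(params: ChatParams): ChatParams {
  // Illustrative check for o1/o3-style model names; real code may
  // need a more precise test.
  if (!/^o[13]/.test(params.model)) return params;
  const { max_tokens, ...rest } = params;
  // If the caller never set a limit, simply omit the field rather than
  // sending `max_tokens` with a default value.
  return max_tokens === undefined
    ? rest
    : { ...rest, max_completion_tokens: max_tokens };
}
```

For non-o1 models the parameters pass through unchanged, so the adapter is safe to apply to every outgoing request.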
The issue persists. Has it still not been fixed?
Please fix this issue.
It looks like this is because an old version of the Azure OpenAI package is used? `"@azure/openai": "1.0.0-beta.12"`
I'm currently working on this :) #6079 |
As a workaround you can use litellm to create a proxy that behaves like OpenAI. That way LobeChat will think it is talking to OpenAI o1/o3 (mini) and it will work. |
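The litellm workaround above boils down to running a local proxy that translates OpenAI-style requests to Azure. A minimal config sketch follows; the deployment name, resource URL, API version, and environment variable are placeholders you would replace with your own values.

```yaml
# config.yaml -- litellm proxy sketch (values are placeholders)
model_list:
  - model_name: o1-mini               # name LobeChat will request
    litellm_params:
      model: azure/o1-mini            # azure/<your-deployment-name>
      api_base: https://<resource>.openai.azure.com/
      api_key: os.environ/AZURE_API_KEY
      api_version: "2024-12-01-preview"
```

Start the proxy with `litellm --config config.yaml` and point LobeChat's OpenAI provider at the proxy URL (by default `http://localhost:4000`); litellm handles the `max_tokens` to `max_completion_tokens` translation for o1-series models.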
This issue is closed. If you have any questions, you can comment and reply.
🎉 This issue has been resolved in version 1.55.1 🎉 The release is available on: Your semantic-release bot 📦🚀 |
📦 Environment
Vercel
📌 Version
Latest main / v1.21.4
💻 Operating System
macOS
🌐 Browser
Safari
🐛 Bug Description
```json
{
  "error": {
    "message": "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.",
    "type": "invalid_request_error",
    "param": "max_tokens",
    "code": "unsupported_parameter"
  },
  "endpoint": "https://***.openai.azure.com/",
  "provider": "azure"
}
```
I haven't set any max token limit. The max token field should be removed from the request body.
📷 Recurrence Steps
Use Azure OpenAI o1 models
🚦 Expected Behavior
Requests to Azure OpenAI o1 models should succeed.
📝 Additional Information