
Conversation

@Bentlybro
Member

This PR adds the latest gpt-5.1 and gpt-5.1-codex LLMs from OpenAI, and updates the price of the gpt-5-chat model.

https://platform.openai.com/docs/models/gpt-5.1
https://platform.openai.com/docs/models/gpt-5.1-codex

For code changes:

  • I have clearly listed my changes in the PR description
  • I have made a test plan
  • I have tested my changes according to the test plan:
    • Test the latest gpt-5.1 LLM
    • Test the latest gpt-5.1-codex LLM

@Bentlybro Bentlybro requested a review from a team as a code owner November 18, 2025 12:36
@Bentlybro Bentlybro requested review from 0ubbe and Swiftyos and removed request for a team November 18, 2025 12:36
@github-project-automation github-project-automation bot moved this to 🆕 Needs initial review in AutoGPT development kanban Nov 18, 2025
@netlify

netlify bot commented Nov 18, 2025

Deploy Preview for auto-gpt-docs-dev canceled.

🔨 Latest commit: f193a48
🔍 Latest deploy log: https://app.netlify.com/projects/auto-gpt-docs-dev/deploys/691c685a7e18720008c14e49

@coderabbitai

coderabbitai bot commented Nov 18, 2025

Important

Review skipped

Auto reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.



@netlify

netlify bot commented Nov 18, 2025

Deploy Preview for auto-gpt-docs canceled.

🔨 Latest commit: f193a48
🔍 Latest deploy log: https://app.netlify.com/projects/auto-gpt-docs/deploys/691c685a49cfe900083e3ea4

@qodo-merge-pro

You are nearing your monthly Qodo Merge usage quota. For more information, please visit here.

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
🧪 No relevant tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Model Enum Consistency

Verify that any downstream logic (routing, providers, feature flags) recognizes the new LlmModel.GPT5_1 and LlmModel.GPT5_1_CODEX values and that they map to valid provider/model IDs in runtime calls (a consistency-check sketch follows these focus areas).

GPT5 = "gpt-5-2025-08-07"
GPT5_1 = "gpt-5.1-2025-11-13"
GPT5_1_CODEX = "gpt-5.1-codex"
GPT5_MINI = "gpt-5-mini-2025-08-07"
GPT5_NANO = "gpt-5-nano-2025-08-07"
Cost Update Impact

Increasing GPT5_CHAT cost from 2 to 5 may affect quotas/billing; confirm product decision and update any UI hints, documentation, or alerts that display expected costs.

LlmModel.GPT5_CHAT: 5,
LlmModel.GPT41: 2,
Metadata Accuracy

Ensure the ModelMetadata token limits for GPT5_1 and GPT5_1_CODEX are correct per OpenAI docs; mismatches can cause truncation or provider-side errors.

LlmModel.GPT5: ModelMetadata("openai", 400000, 128000),
LlmModel.GPT5_1: ModelMetadata("openai", 400000, 128000),
LlmModel.GPT5_1_CODEX: ModelMetadata("openai", 400000, 128000),
LlmModel.GPT5_MINI: ModelMetadata("openai", 400000, 128000),
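
A quick way to cover the three areas above in one place is an invariant test over the enum and its lookup tables. The sketch below is illustrative only; the import path and the dict names MODEL_METADATA and MODEL_COST are assumptions and may not match what backend/blocks/llm.py actually uses.

# Hedged sketch of a consistency check: every LlmModel member should have a
# metadata entry and a cost entry, so newly added models such as GPT5_1 and
# GPT5_1_CODEX cannot be silently skipped downstream.
from backend.blocks.llm import LlmModel, MODEL_METADATA, MODEL_COST  # names assumed

def test_every_llm_model_is_registered():
    for model in LlmModel:
        assert model in MODEL_METADATA, f"no ModelMetadata entry for {model.name}"
        assert model in MODEL_COST, f"no cost entry for {model.name}"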

@deepsource-io

deepsource-io bot commented Nov 18, 2025

Here's the code health analysis summary for commits 3b34c04..f193a48. View details on DeepSource ↗.

Analysis Summary

Analyzer     Status      Link
JavaScript   ✅ Success  View Check ↗
Python       ✅ Success  View Check ↗

💡 If you’re a repository administrator, you can configure the quality gates from the settings.

# GPT-5 models
GPT5 = "gpt-5-2025-08-07"
GPT5_1 = "gpt-5.1-2025-11-13"
GPT5_1_CODEX = "gpt-5.1-codex"

Bug: The GPT5_1_CODEX model uses the incorrect OpenAI API, causing API rejection.
Severity: CRITICAL | Confidence: 1.00

🔍 Detailed Analysis

The GPT5_1_CODEX model is configured to use the OpenAI Chat Completions API (oai_client.chat.completions.create()). However, OpenAI's official documentation specifies that gpt-5.1-codex is exclusively served through the Responses API. This mismatch will cause OpenAI's API to reject requests for gpt-5.1-codex, leading to runtime crashes whenever this model is invoked by any LLM block.

💡 Suggested Fix

Implement support for OpenAI's Responses API within llm_call() for gpt-5.1-codex or remove the model if Responses API integration is not planned.
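
For illustration, a minimal sketch of what a Responses API call could look like with the openai Python SDK (v1.x). This is not the repository's llm_call() implementation; the helper name and the default token limit are made up for the example.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def call_gpt51_codex(prompt: str, max_output_tokens: int = 1024) -> str:
    # responses.create accepts a plain string (or a list of message items) as
    # `input`; output_text concatenates the text parts of the model's output.
    response = client.responses.create(
        model="gpt-5.1-codex",
        input=prompt,
        max_output_tokens=max_output_tokens,
    )
    return response.output_text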

Location: autogpt_platform/backend/backend/blocks/llm.py#L97


Reference_id: 2766327

O1_MINI = "o1-mini"
# GPT-5 models
GPT5 = "gpt-5-2025-08-07"
GPT5_1 = "gpt-5.1-2025-11-13"

Bug: The GPT5_1 model uses an invalid identifier not found in OpenAI documentation.
Severity: CRITICAL | Confidence: 1.00

🔍 Detailed Analysis

The model identifier gpt-5.1-2025-11-13 assigned to GPT5_1 is not listed in OpenAI's official API documentation for GPT-5.1 models. When oai_client.chat.completions.create() is called with this identifier, OpenAI's API will reject the request with a 'model not found' error, preventing the GPT5_1 model from being used.

💡 Suggested Fix

Update the GPT5_1 model identifier to a valid one from OpenAI's official documentation, such as gpt-5.1 or gpt-5.1-chat-latest.
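
One way to settle which identifier the API actually serves is to query the Models endpoint rather than guess. A small hedged sketch using the openai Python SDK; the candidate ids are just the ones discussed in this thread.

from openai import OpenAI, NotFoundError

client = OpenAI()

def model_exists(model_id: str) -> bool:
    # models.retrieve raises NotFoundError for identifiers the API does not serve.
    try:
        client.models.retrieve(model_id)
        return True
    except NotFoundError:
        return False

for candidate in ("gpt-5.1", "gpt-5.1-chat-latest", "gpt-5.1-2025-11-13"):
    print(candidate, model_exists(candidate))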

Location: autogpt_platform/backend/backend/blocks/llm.py#L96


Reference_id: 2766327

