
fix(agents-api): remove continue_ field + filter out name field for non openai models #1530


Merged
merged 3 commits into dev from x/groq-models on Jul 11, 2025

Conversation

Contributor

@Ahmad-mtos Ahmad-mtos commented Jul 11, 2025

User description

  • fix(agents-api): remove continue_ field + filter out name field for non openai models
  • docs: update docs

PR Type

Bug fix, Enhancement


Description

  • Remove deprecated continue_ field from message models

  • Filter out name field for non-OpenAI models

  • Update documentation and schema definitions

  • Add new chat input properties for tool handling
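
The new chat input properties are boolean flags. Below is a minimal sketch of a chat request carrying them, assuming they sit at the top level of the chat input next to messages; the placement and the semantics noted in the comments are assumptions based on the PR description, not confirmed by the schemas.

  # Hypothetical chat request illustrating the new tool-handling flags.
  chat_request = {
      "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
      "auto_run_tools": True,  # assumed: execute requested tool calls automatically
      "recall_tools": True,    # assumed: include tool calls/results when recalling history
  }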


Changes diagram

flowchart LR
  A["Message Models"] --> B["Remove continue_ field"]
  A --> C["Filter name field"]
  C --> D["Non-OpenAI models only"]
  E["Chat Input"] --> F["Add auto_run_tools"]
  E --> G["Add recall_tools"]
  H["Documentation"] --> I["Update examples"]

Changes walkthrough 📝

Relevant files
Bug fix
7 files
prompt_step.py
Filter name field for non-OpenAI models                                   
+11/-0   
Chat.py
Remove continue_ field from Delta and Message                       
+0/-8     
Tasks.py
Remove continue_ field from PromptItem                                     
+0/-4     
Chat.py
Remove continue_ field from Delta and Message                       
+0/-8     
Tasks.py
Remove continue_ field from PromptItem                                     
+0/-4     
models.tsp
Remove continue field from ChatMLMessage                                 
+0/-4     
openapi-1.0.0.yaml
Remove continue field from OpenAPI schema                               
+0/-12   
Documentation
2 files
chat.mdx
Remove continue_ field from documentation                               
+0/-1     
render.mdx
Remove continue_ field from documentation                               
+0/-1     
Configuration changes
1 file
litellm-config.yaml
Update Groq model comment                                                               
+1/-1     
Enhancement
2 files
create_agent_request.json
Add auto_run_tools and recall_tools properties                     
+11/-0   
create_task_request.json
Add auto_run_tools and recall_tools properties                     
+11/-0   

Need help?
  • Type /help how to ... in the comments thread for any questions about Qodo Merge usage.
  • Check out the documentation for more information.

    claude bot commented Jul 11, 2025

    Claude finished @Ahmad-mtos's task


    PR Review Complete ✅

    I've reviewed this pull request for code quality, potential bugs, performance, security, and test coverage.

    Summary

    This PR removes the deprecated continue_ field from message models and adds proper field filtering for non-OpenAI models. The changes are well-structured and address compatibility issues with different LLM providers.

    Code Quality & Best Practices ✅

    • Consistent Implementation: Changes are applied consistently across both agents-api and integrations-service autogenerated files
    • Proper Field Filtering: The logic correctly identifies non-OpenAI models and filters out incompatible fields
    • Clean Code: The filtering logic is readable and follows the existing patterns in the codebase
    • Documentation Updates: Documentation files are properly updated to reflect the removed field

    Potential Issues & Bugs ⚠️

    1. Hardcoded Model Detection (prompt_step.py:52)

      if get_api_key_env_var_name(agent_model) != "OPENAI_API_KEY":
      • This logic depends on the litellm-config.yaml configuration
      • Could break if model configurations change without updating this code
      • Recommendation: Consider using a more explicit provider detection mechanism
    2. Groq Model Detection (prompt_step.py:172)

      is_groq_model = agent_model.lower().startswith("llama-3.1")
      • This is fragile - not all Groq models start with "llama-3.1"
      • Recommendation: Use a more robust provider detection method (see the sketch after this list)
    3. Duplicate Filtering Logic

      • The name field is filtered twice: once for all non-OpenAI models (lines 52-60) and again for Groq models (lines 174-181)
      • This could lead to unnecessary processing
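
    As a rough illustration of the provider-detection suggestion in items 1 and 2, the provider could be derived from the litellm model mapping instead of from API-key names or model-name prefixes. This is only a sketch: get_model_provider and model_map are hypothetical, and it assumes the config maps each model_name to a provider-prefixed litellm model string such as "groq/llama-3.1-70b-versatile".

      def get_model_provider(agent_model: str, model_map: dict[str, str]) -> str:
          """Return the provider for a configured model, e.g. 'openai' or 'groq'."""
          litellm_model = model_map.get(agent_model, agent_model)
          if "/" in litellm_model:
              provider, _, _ = litellm_model.partition("/")
              return provider
          # Un-prefixed model strings are treated as OpenAI models here (an assumption).
          return "openai"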

    Performance Implications ✅

    • Minimal Impact: The filtering operations are O(n) where n is the number of messages, which is typically small
    • List Comprehensions: Efficient implementation using list comprehensions
    • No Memory Leaks: No new allocations that could cause memory issues

    Security Concerns ✅

    • No Security Issues: The changes only filter out fields and don't introduce new attack vectors
    • Data Integrity: Filtering is done safely without modifying the original data structure unexpectedly

    Test Coverage ⚠️

    • Missing Tests: No tests were added for the new filtering logic
    • Existing Tests: The changes to autogenerated files don't break existing tests (they're based on TypeSpec)
    • Recommendation: Add tests to verify field filtering works correctly for different model types (a minimal sketch follows below)
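
    A minimal sketch of what such tests could look like, using a local stand-in for the filtering currently inlined in prompt_step.py; the helper name and test layout are assumptions, not taken from the codebase.

      def _drop_name_field(prompt: list[dict]) -> list[dict]:
          # Stand-in for the list-comprehension filtering in prompt_step.py.
          return [{k: v for k, v in message.items() if k != "name"} for message in prompt]

      def test_name_field_is_dropped():
          prompt = [{"role": "user", "name": "alice", "content": "hi"}]
          assert _drop_name_field(prompt) == [{"role": "user", "content": "hi"}]

      def test_messages_without_name_are_untouched():
          prompt = [{"role": "user", "content": "hi"}]
          assert _drop_name_field(prompt) == prompt

      def test_empty_prompt_is_a_no_op():
          assert _drop_name_field([]) == []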

    Schema & Documentation ✅

    • Schema Consistency: The auto_run_tools and recall_tools fields are properly added to relevant schemas
    • Documentation Updated: References to continue_ field are correctly removed from documentation
    • TypeSpec Changes: The source-of-truth changes in models.tsp are properly reflected in generated code

    Specific Recommendations

    1. Refactor Model Detection (prompt_step.py:52 & prompt_step.py:172)

      # Instead of hardcoded checks, extract provider detection into a helper:
      provider = get_model_provider(agent_model)  # hypothetical helper returning e.g. "openai", "groq"
      if provider != "openai":
          # Drop fields that non-OpenAI providers reject
          prompt = [{k: v for k, v in message.items() if k != "name"} for message in prompt]
    2. Add Unit Tests

      • Test field filtering for different model types
      • Test edge cases (empty messages, missing fields)
      • Test provider detection logic
    3. Consider Consolidation

      • The field filtering logic could be consolidated into a single helper function (see the sketch after this list)
      • This would reduce duplication and improve maintainability
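
    One possible shape for that consolidation, reflecting the current behaviour of dropping only the name field for non-OpenAI providers (the helper name and the provider argument are assumptions):

      def strip_unsupported_message_fields(prompt: list[dict], provider: str) -> list[dict]:
          """Drop message fields the target provider does not accept, in a single pass."""
          if provider == "openai":
              return prompt
          # Currently only the `name` field needs stripping for non-OpenAI providers.
          return [{k: v for k, v in message.items() if k != "name"} for message in prompt]

    prompt_step.py could then call this helper once, right after provider detection, instead of filtering the prompt in two separate places.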

    Overall Assessment

    This is a good, clean PR that addresses real compatibility issues with different LLM providers. The implementation is mostly sound, but would benefit from more robust provider detection and additional test coverage.

    Recommendation: ✅ Approve with minor suggestions - The changes are beneficial and the identified issues are minor improvements rather than blocking problems.


    Contributor

    github-actions bot commented Jul 11, 2025

    Claude finished @Ahmad-mtos's task


    Changelog Bot Analysis Complete

    Acting as the changelog bot to review PR changes and create changelog entries if needed.

    Todo List:

    • Read diff.patch file to understand changes
    • Determine if changes are user-visible
    • Create/update changelog file if needed
    • Update home-summary.html (only if PR is on main) - Skipped: PR targets dev branch
    • Stage, commit, and push changes - Committed locally, push requires rebase

    Summary

    User-Visible Changes Detected: Yes - significant API changes found

    Changes Analyzed:

    • Breaking Change: Removed continue_ field from Delta, Message, and PromptItem API models
    • Enhancement: Added filtering for name field on non-OpenAI models in prompt processing
    • Feature: Added auto_run_tools and recall_tools fields to agent and task creation schemas
    • Documentation: Updated chat and render documentation to remove deprecated field references

    Actions Taken:

    1. ✅ Created changelog entry: changelog/dev/2025-07-11.md
    2. ✅ Staged and committed the changelog file locally
    3. ⚠️ Push requires rebase due to remote changes

    Note: The changelog entry has been created and committed locally. A rebase with the remote branch is needed before pushing. The changelog properly documents the API changes that will affect users of the Julep platform.

    Contributor

    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
    🧪 No relevant tests
    🔒 No security concerns identified
    ⚡ Recommended focus areas for review

    Logic Error

    The filtering logic for non-OpenAI models removes the 'name' field from all messages in the prompt, but this could break functionality for models that do support the 'name' field. The condition should be more specific about which models actually don't support this field.

    if get_api_key_env_var_name(agent_model) != "OPENAI_API_KEY":
        prompt = [
            {
                k: v
                for k, v in message.items()
                if k != 'name'
            }
            for message in prompt
        ]
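
    One way to make the condition more specific is an explicit allowlist of providers verified to accept the name field, assuming a provider-detection helper such as the hypothetical get_model_provider sketched earlier in this thread is available:

    # Providers verified to accept the `name` message field (contents are an assumption).
    NAME_FIELD_PROVIDERS = {"openai"}

    if get_model_provider(agent_model) not in NAME_FIELD_PROVIDERS:
        prompt = [
            {k: v for k, v in message.items() if k != "name"}
            for message in prompt
        ]
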
    Configuration Issue

    The comment indicates that 'llama-3.1-70b-versatile' has been decommissioned, but the actual model configuration still uses this decommissioned model name. This could cause runtime failures when the model is called.

    - model_name: "llama-3.1-70b" # `llama-3.1-70b-versatile` has been decommissioned and is no longer supported
      litellm_params:
        model: "groq/llama-3.1-70b-versatile"
        tags: ["paid"]
        api_key: os.environ/GROQ_API_KEY

    Contributor

    PR Code Suggestions ✨

    No code suggestions found for the PR.

    Contributor

    @ellipsis-dev ellipsis-dev bot left a comment


    Important

    Looks good to me! 👍

    Reviewed everything up to bc8431d in 1 minute and 47 seconds.
    • Reviewed 279 lines of code in 12 files
    • Skipped 0 files when reviewing.
    • Skipped posting 12 draft comments. View those below.
    • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
    1. documentation/advanced/chat.mdx:55
    • Draft comment:
      Removal of the 'continue_' field from the chat documentation is consistent with the backend changes. Ensure that all usage examples and client references have been updated accordingly.
    • Reason this comment was not posted:
      Confidence changes required: 0% <= threshold 50% None
    2. documentation/advanced/render.mdx:53
    • Draft comment:
      The 'continue_' field removal in the render documentation is clear and in line with the API change. Confirm that the example remains accurate without this field.
    • Reason this comment was not posted:
      Confidence changes required: 0% <= threshold 50% None
    3. src/agents-api/agents_api/activities/task_steps/prompt_step.py:49
    • Draft comment:
      The new filtering block removes the 'name' field for non-OpenAI models using a list comprehension. This approach is concise; however, consider ensuring that 'prompt' is always a list of dictionaries and that its usage downstream remains valid. You might also consider naming the filtered output differently if the original value is needed later.
    • Reason this comment was not posted:
      Confidence changes required: 33% <= threshold 50% None
    4. src/agents-api/agents_api/autogen/Chat.py:303
    • Draft comment:
      The 'continue_' field has been removed from both the Delta and Message models in Chat.py. This is a breaking change for clients expecting that field. Ensure that all consumers of the API are updated accordingly.
    • Reason this comment was not posted:
      Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is pointing out a breaking change due to the removal of a field in the models. It suggests ensuring that all consumers of the API are updated accordingly, which is a form of asking the author to ensure something. This violates the rule against asking the author to ensure behavior is intended or tested.
    5. src/agents-api/agents_api/autogen/Tasks.py:884
    • Draft comment:
      The 'continue_' field has been removed from the PromptItem model in Tasks. Make sure that any systems relying on this field are adjusted for this breaking change.
    • Reason this comment was not posted:
      Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is asking the PR author to ensure that systems relying on the removed field are adjusted, which violates the rule against asking the author to ensure compatibility or test changes. It doesn't provide a specific suggestion or ask for confirmation of intent, which would be acceptable.
    6. src/integrations-service/integrations/autogen/Chat.py:285
    • Draft comment:
      The 'continue_' field removal from both Delta and Message models in the integrations service Chat module is consistent with changes elsewhere. Confirm that integration tests and client applications are updated accordingly.
    • Reason this comment was not posted:
      Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is asking the PR author to confirm that integration tests and client applications are updated, which violates the rule against asking for confirmation or ensuring testing. It does not provide a specific code suggestion or point out a specific issue with the code.
    7. src/integrations-service/integrations/autogen/Tasks.py:850
    • Draft comment:
      The 'continue_' field has been removed from the PromptItem model in the integrations service Tasks module. Ensure that consumers are prepared for this removal.
    • Reason this comment was not posted:
      Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is asking the author to ensure that consumers are prepared for the removal of a field, which violates the rule against asking the author to ensure something. It does not provide a specific suggestion or ask for confirmation of intention.
    8. src/llm-proxy/litellm-config.yaml:179
    • Draft comment:
      The updated comment clarifies that 'llama-3.1-70b-versatile' has been decommissioned. However, note that 'litellm_params' still reference 'groq/llama-3.1-70b-versatile'. Verify that this discrepancy is intentional or update the parameter accordingly.
    • Reason this comment was not posted:
      Comment was on unchanged code.
    9. src/schemas/create_agent_request.json:2383
    • Draft comment:
      New boolean fields 'auto_run_tools' and 'recall_tools' have been added, and 'tool_request' has been added to the event enum. Ensure that these additions are documented and that any existing clients are backwards-compatible with these schema changes.
    • Reason this comment was not posted:
      Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is asking the PR author to ensure documentation and backward compatibility, which falls under the rule of not asking the author to ensure or verify things. It doesn't provide a specific suggestion or point out a specific issue with the code.
    10. src/schemas/create_task_request.json:2383
    • Draft comment:
      Similar to the agent schema, new fields 'auto_run_tools' and 'recall_tools' and the new event type 'tool_request' have been introduced. Confirm that documentation and client handling are updated to match these schema changes.
    • Reason this comment was not posted:
      Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is asking the PR author to confirm that documentation and client handling are updated to match schema changes. This falls under the category of asking the author to ensure something is done, which is against the rules.
    11. src/typespec/entries/models.tsp:81
    • Draft comment:
      The removal of the 'continue' property from the ChatMLMessage model in the Typespec file is in line with the updated API. Ensure that any dependent consumers of this model are informed of the change.
    • Reason this comment was not posted:
      Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative and suggests ensuring that dependent consumers are informed, which violates the rules against making purely informative comments and asking the author to ensure something.
    12. src/typespec/tsp-output/@typespec/openapi3/openapi-1.0.0.yaml:2822
    • Draft comment:
      The 'continue' property has been removed from several schema definitions in the OpenAPI spec. This aligns with the backend changes, but ensure that all API consumers update their references to this field.
    • Reason this comment was not posted:
      Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is asking the PR author to ensure that all API consumers update their references to a removed field. This falls under the category of asking the author to ensure something is done, which violates the rules.

    Workflow ID: wflow_RbCz2OMgxaP7RaxI

    You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

    @Ahmad-mtos Ahmad-mtos merged commit 29c18f3 into dev Jul 11, 2025
    7 checks passed
    @Ahmad-mtos Ahmad-mtos deleted the x/groq-models branch July 11, 2025 12:38