Enable Distinct LLMs for Different Roles in Inline Assist and Conversational Tasks #27075
erik-balfe started this discussion in LLMs and Zed Agent
Check for existing issues
Describe the feature
Current Challenges:
The main chat involves complex conversations and reasoning, while 'Inline Assist' and the LLM-driven editor that applies changes in the 'Workflows' feature serve different purposes. The current approach of selecting a single model for everything does not optimize performance for these distinct tasks.
With the introduction of OpenAI's new O1 models, which are designed for reasoning rather than code generation, the need for specialization is increasingly pressing. Simple coding tasks performed through 'Inline Assist' would benefit from efficient, lower-cost models that keep responses fast without unnecessary overhead.
Proposed Solution:
Model Selection Configuration:
Enable users to choose distinct LLMs for 'Inline Assist' and the main chat feature via a configuration file, allowing for tailored performance based on task requirements.
Sync Flag Option:
Introduce an option to synchronize or separate the models used for 'Inline Assist' and main chat, giving users more control over their experience.
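To make the proposal concrete, here is a sketch of how the per-role model choice and the sync flag might look in the user's settings file. The key names (`inline_assist_model`, `sync_models`) and model identifiers are hypothetical, chosen for illustration only; they are not part of Zed's current configuration schema.

```jsonc
{
  "assistant": {
    // Model used for the main chat: complex conversations and reasoning.
    "default_model": {
      "provider": "openai",
      "model": "o1-preview" // hypothetical value
    },
    // Cheaper, faster model used for Inline Assist and Workflows edits.
    "inline_assist_model": {
      "provider": "openai",
      "model": "gpt-4o-mini" // hypothetical value
    },
    // When true, Inline Assist ignores the key above and follows the
    // main chat model, preserving today's single-model behavior.
    "sync_models": false
  }
}
```

With `sync_models` defaulting to true, existing users would see no change in behavior unless they opt in to a separate Inline Assist model.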
Rationale:
Existing Concerns:
Notably, a search for existing issues turned up two similar threads:
Those discussions raise the idea of using a different model for Inline Assist, but they do not capture the broader need for multiple specialized roles. This suggestion extends beyond today's models to accommodate future developments in LLM capabilities, advocating for at least two distinct roles as the foundation for optimal integration within the project.
If applicable, add mockups / screenshots to help present your vision of the feature
No response