Add model selection within utterances. #169


Merged
merged 6 commits into from
Mar 21, 2025

Conversation

wolfmanstout
Copy link
Collaborator

Enhances Talon commands to allow specifying a model directly within the utterance. For example: 'four o mini explain this' will use the gpt-4o-mini model.

  • Updated model.talon-list to include four o and four o mini models. I only included OpenAI models by default because it would be a significant maintenance effort to support every model supported by simonw/llm.
  • Modified send_request and supporting functions to accept and use model parameter
  • Updated all Talon command files to pass the model parameter
  • Fixed parameter ordering to ensure optional params come after required ones
  • Deprecated openai_model setting in favor of model_default_model

Note: I tested every command here. I fixed some issues with the "model find" command. I noticed that the Cursorless commands are broken, and there are some bugs in the beta "blend" commands. I compared to the baseline and these issues were already present. I can file separate bugs for those, since they seem to run a bit deeper.
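To illustrate the mechanism described above, here is a rough sketch of what an utterance-level model list and command might look like in Talon's own syntax. The file names follow the PR's `model.talon-list`, but the spoken forms, action name, and prompt are illustrative, not the exact contents of this PR:

```
# model.talon-list (illustrative sketch)
list: user.model
-
model: ""
four o: gpt-4o
four o mini: gpt-4o-mini

# in a .talon command file (hypothetical action name):
# {user.model} explain this: user.gpt_apply_prompt("explain", model)
```

The empty value for the plain word "model" lets existing commands keep working unchanged, while the named variants capture a specific model string that gets passed through to the request.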

wolfmanstout and others added 2 commits March 4, 2025 22:14
@C-Loftus
Copy link
Owner

C-Loftus commented Mar 5, 2025

Thanks for this PR! Just FYI, it may be a couple of days until I can properly review and test locally; I'm busy the next few days, so feel free to ping me if I forget.

Copy link
Owner

@C-Loftus C-Loftus left a comment


Thank you for the PR, and thank you for your patience while I was away traveling. I made a few general comments, but it all looks good. Once we're aligned we can merge.


- def gpt_generate_shell(text_to_process: str) -> str:
+ def gpt_generate_shell(text_to_process: str, model: str) -> str:
Copy link
Owner


Totally good with me either way, but I'm curious about the rationale for passing model into each function instead of just reading the setting, since it's global state anyway. I suppose this is better if we ever set up tests with mocking?

Copy link
Collaborator Author


I have to pass in the model in order to support models provided within the utterance. With this change, anywhere that you can say "model" you can now also name a specific model to use. The global state will always be the same, but this parameter passes along the list value from the utterance.
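A minimal Python sketch of the pattern being discussed, with hypothetical names (the real functions and setting names live in the repo): each action receives a `model` argument carrying the value captured from the utterance, and falls back to the default-model setting only when the speaker said the plain word "model" (captured as an empty string).

```python
# Illustrative sketch only: names are hypothetical stand-ins for the
# repo's actual setting (model_default_model) and request functions.

DEFAULT_MODEL = "gpt-4o"  # stand-in for reading the model_default_model setting


def resolve_model(model: str) -> str:
    """Return the model named in the utterance, or the default if none was given."""
    return model if model else DEFAULT_MODEL


def send_request(prompt: str, model: str = "") -> dict:
    """Build a request dict; `model` is the (possibly empty) utterance capture."""
    return {"prompt": prompt, "model": resolve_model(model)}
```

This keeps the global setting as the single source of the default while letting the per-utterance capture override it, and it makes the functions easy to test without touching global state.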

@wolfmanstout
Copy link
Collaborator Author

I've addressed all of your comments.

@wolfmanstout wolfmanstout requested a review from C-Loftus March 21, 2025 05:11
@C-Loftus
Copy link
Owner

Looks good, thanks!

@C-Loftus C-Loftus merged commit 82a3e11 into C-Loftus:main Mar 21, 2025
5 checks passed
@wolfmanstout wolfmanstout deleted the model-selection-in-utterance branch March 22, 2025 05:48