Add model selection within utterances. #169
Conversation
Enhances Talon commands to allow specifying a model directly within the utterance. For example, "four o mini explain this" will use the gpt-4o-mini model.

- Updated model.talon-list to include four o and four o mini models
- Modified send_request and supporting functions to accept and use a model parameter
- Updated all Talon command files to pass the model parameter
- Fixed parameter ordering to ensure optional params come after required ones
- Deprecated the openai_model setting in favor of model_default_model
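As a sketch of what the model.talon-list additions might look like (spoken forms on the left, model identifiers on the right; these exact entries are assumptions, not copied from the PR):

```
list: user.model
-
four o: gpt-4o
four o mini: gpt-4o-mini
```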
Thanks for this PR. Just FYI, it may be a couple of days until I properly review/test it locally; I'm busy the next few days. Feel free to ping me if I forget.
Thank you for the PR and thank you for your patience while I was away w/ travel. Made a few general comments, but it all looks good. Once aligned we can merge.
```diff
-def gpt_generate_shell(text_to_process: str) -> str:
+def gpt_generate_shell(text_to_process: str, model: str) -> str:
```
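A minimal sketch (not the repo's actual code) of how the spoken model name can be threaded through to the request while still falling back to the configured default. Here `DEFAULT_MODEL` stands in for the `model_default_model` setting, and the function bodies are illustrative only:

```python
# DEFAULT_MODEL stands in for the model_default_model setting (assumption).
DEFAULT_MODEL = "gpt-4o"

def resolve_model(model: str) -> str:
    """Use the model spoken in the utterance, or fall back to the default."""
    return model or DEFAULT_MODEL

def send_request(prompt: str, model: str = "") -> dict:
    # Optional `model` comes after the required `prompt`, matching the
    # parameter-ordering fix described in the PR.
    return {"model": resolve_model(model), "prompt": prompt}
```

With this shape, a command that captures a model from the utterance passes it along, and a command that doesn't simply omits the argument.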
Totally good with me either way, but curious about the rationale for passing model into each function instead of just reading the setting, since it is global state anyway. I suppose this is better if we ever set up tests with mocking?
I have to pass in the model in order to support models provided within the utterance. With this change, anywhere that you can say "model" you can now also name a specific model to use. The global state will always be the same, but this parameter passes along the list value from the utterance.
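To illustrate the mechanism, in .talon syntax a `{user.model}` capture binds the matched list value to a `model` variable, which the command can hand to the action; this is a hypothetical sketch and `user.gpt_apply_prompt` is an assumed action name, not necessarily the repo's:

```
{user.model} explain this:
    user.gpt_apply_prompt("explain", model)
```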
I've addressed all of your comments.

Looks good, thanks!
Note: I tested every command here. I fixed some issues with the "model find" command. I noticed that the Cursorless commands are broken, and there are some bugs in the beta "blend" commands. I compared to the baseline and these issues were already present. I can file separate bugs for those, since they seem to run a bit deeper.