Improve Mistral completions #85
Conversation
Force-pushed from 7d9884f to 3efe92b
Indeed, I'm mostly testing with that model.

Also, since we ship a couple of providers by default, maybe it could make sense to also ship a set of "good defaults", to have the best experience out of the box.
Just pushed a small change to add the note about compatible models for completions:

[video: jupyterlite-ai-mistral-completion-models.mp4]

Also opened #98 for having separate models for chat and completions.
Thanks @jtpio, this looks good.
```ts
// Strip Markdown code fences from the model response.
const content = choice.message.content
  .replace(CODE_BLOCK_START_REGEX, '')
  .replace(CODE_BLOCK_END_REGEX, '');
return {
```
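For reference, a minimal sketch of what such fence-stripping regexes could look like; the actual `CODE_BLOCK_START_REGEX` and `CODE_BLOCK_END_REGEX` in the PR may differ:

```ts
// Hypothetical patterns, for illustration only.
// Matches an opening fence such as ```python at the start of the response.
const CODE_BLOCK_START_REGEX = /^```\w*\n?/;
// Matches the closing ``` fence at the end of the response.
const CODE_BLOCK_END_REGEX = /\n?```$/;

const raw = '```python\ndf = pd.read_csv("data.csv")\n```';
const content = raw
  .replace(CODE_BLOCK_START_REGEX, '')
  .replace(CODE_BLOCK_END_REGEX, '');
// content === 'df = pd.read_csv("data.csv")'
```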
Is this still necessary when using it with the `codestral` model?
I think we could keep it, just in case?
Also wondering if, instead of writing a completer from scratch for each provider, there could be some common logic shared between all the providers, so that each provider would only supply its "fetch" method.
> there could be some common logic shared between all the providers
With this I mean sharing utility and post-processing logic, such as cleaning up responses or even caching some results if possible.
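A rough sketch of what that could look like; the names `BaseCompleter` and `fetchCompletion` are made up here for illustration:

```ts
// Hypothetical shared base class; names are illustrative only.
abstract class BaseCompleter {
  // Each provider only implements the raw request.
  protected abstract fetchCompletion(
    prompt: string,
    suffix: string
  ): Promise<string>;

  // Shared post-processing (and potentially caching) lives here.
  async complete(prompt: string, suffix: string): Promise<string> {
    const raw = await this.fetchCompletion(prompt, suffix);
    return raw.replace(/^```\w*\n?/, '').replace(/\n?```$/, '');
  }
}

class MistralCompleter extends BaseCompleter {
  protected async fetchCompletion(
    prompt: string,
    suffix: string
  ): Promise<string> {
    // Provider-specific call, e.g. to the Mistral completion API.
    return '';
  }
}
```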
```ts
// Abort any in-flight completion request before starting a new one.
this._controller.abort();
this._controller = new AbortController();

const response = await this._completer.completionWithRetry(
```
Probably for a future PR, but we should handle the status code 400 returned when the model is not compatible with completions.
At the very least it could be logged to the JupyterLab console.
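A minimal sketch of what that could look like; the wrapper name, the `completionWithRetry` signature, and the error shape (a `status` field on the thrown error) are simplified assumptions:

```ts
// Illustrative only; the real error shape thrown by the client may differ.
async function fetchWithErrorHandling(
  completer: { completionWithRetry: (prompt: string) => Promise<string> },
  prompt: string
): Promise<string | null> {
  try {
    return await completer.completionWithRetry(prompt);
  } catch (error: unknown) {
    const status = (error as { status?: number })?.status;
    if (status === 400) {
      // Surface incompatible-model errors in the JupyterLab console.
      console.error(
        'Completion request failed with 400: the selected model may not support completions.',
        error
      );
      return null;
    }
    throw error;
  }
}
```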
Let's merge this one since it greatly improves completions with MistralAI. Follow-up issues:
Trying out a couple of things to improve the inline suggestions.
Using `MistralAI` directly (instead of `ChatMistralAI`) gives access to the completion API and allows providing more input parameters such as the `prompt` and `suffix`.
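For reference, a minimal sketch of a raw call to Mistral's fill-in-the-middle endpoint showing the `prompt` and `suffix` parameters; the model name, key handling, and response shape are assumptions based on Mistral's public API docs:

```ts
// Sketch of Mistral's FIM completion endpoint; error handling omitted.
const apiKey = '<MISTRAL_API_KEY>'; // placeholder, supplied by the user

const response = await fetch('https://api.mistral.ai/v1/fim/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${apiKey}`
  },
  body: JSON.stringify({
    model: 'codestral-latest', // a completion-compatible model
    prompt: 'df = pd.',        // text before the cursor
    suffix: ''                 // text after the cursor
  })
});
const data = await response.json();
// The completion text is expected in data.choices[0].message.content.
```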
`df = pd.` would then autocomplete to `df = pd..read_csv()` (with the two dots), maybe because of the delay or the inline completer not getting the response for the correct request.

[video: jupyterlite-ai-mistral-completer.mp4]