Improve Mistral completions #85


Merged
merged 4 commits into jupyterlite:main from better-mistral-completions
Jun 23, 2025

Conversation

jtpio
Member

@jtpio jtpio commented May 24, 2025

Trying out a couple of things to improve the inline suggestions.

  • Using MistralAI directly (instead of ChatMistralAI) gives direct access to the completion API and allows providing more input parameters, such as the prompt and suffix.
  • The use of the throttler seems to be causing the "character duplication" issue. For example, typing df = pd. would autocomplete to df = pd..read_csv() (with two dots), possibly because of the delay, or because the inline completer receives the response for the wrong request.
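The first point can be sketched as follows: the FIM (fill-in-the-middle) completion endpoint takes the code before the cursor as a prompt and the code after it as a suffix, rather than a flat chat message list. The helper below is illustrative only (it is not code from this PR, and the exact request shape depends on the Mistral client in use):

```typescript
// Hypothetical sketch of a FIM completion request body. The field names
// mirror Mistral's fill-in-the-middle completion API, but treat them as
// assumptions rather than the PR's actual implementation.
interface FimRequest {
  model: string;
  prompt: string; // code before the cursor
  suffix: string; // code after the cursor
  max_tokens?: number;
}

// Illustrative helper: build the request from the editor state around
// the cursor. `codestral-latest` is one model known to support FIM.
function buildFimRequest(before: string, after: string): FimRequest {
  return {
    model: 'codestral-latest',
    prompt: before,
    suffix: after,
    max_tokens: 64
  };
}
```

Because the request carries both sides of the cursor, the model can return only the missing middle, which avoids the chat-style round trip through ChatMistralAI.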
(video: jupyterlite-ai-mistral-completer.mp4)

@jtpio jtpio added the enhancement New feature or request label May 24, 2025
@brichet
Collaborator

brichet commented May 26, 2025

We should consider having two models: one for the chat and one for completion.
It looks like the FIM API is not available for all models.

(image attachment)

@jtpio jtpio force-pushed the better-mistral-completions branch from 7d9884f to 3efe92b Compare June 17, 2025 13:32
@jtpio
Member Author

jtpio commented Jun 17, 2025

Looks like the FIM API is not available for all models.

Indeed, I'm mostly testing with codestral-latest.

Also, since we ship a couple of providers by default, it could make sense to also ship a set of "good defaults", so that @jupyterlite/ai gives the best experience out of the box. For example, that could mean choosing a default model for each provider that we know works well. Of course, they would still be customizable via the settings.
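One way the "good defaults" idea could look in code: a small per-provider lookup consulted when the user has not picked a model. The provider names and model ids below are illustrative assumptions, not the defaults @jupyterlite/ai actually ships:

```typescript
// Hypothetical per-provider default completion models. Only entries we
// know work well with the FIM completion API would be listed here.
const DEFAULT_COMPLETION_MODELS: Record<string, string> = {
  MistralAI: 'codestral-latest'
  // other providers would get their own known-good completion model
};

// Return the known-good default for a provider, or a caller-supplied
// fallback when no default has been curated yet.
function defaultModelFor(provider: string, fallback: string): string {
  return DEFAULT_COMPLETION_MODELS[provider] ?? fallback;
}
```

The settings layer would still let users override the lookup result, keeping the defaults purely advisory.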

@jtpio
Member Author

jtpio commented Jun 17, 2025

Just pushed a small change to add the note about compatible models for completions:

(video: jupyterlite-ai-mistral-completion-models.mp4)

Also opened #98 for having separate models for chat and completions.

@jtpio jtpio marked this pull request as ready for review June 17, 2025 15:36
@jtpio jtpio requested a review from brichet June 17, 2025 15:39
Collaborator

@brichet brichet left a comment


Thanks @jtpio, this looks good.

Comment on lines +39 to +42
const content = choice.message.content
.replace(CODE_BLOCK_START_REGEX, '')
.replace(CODE_BLOCK_END_REGEX, '');
return {
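The quoted post-processing strips markdown code fences from the model's response before inserting it into the editor. A self-contained sketch of that logic, with hypothetical regex definitions (the actual CODE_BLOCK_START_REGEX and CODE_BLOCK_END_REGEX in the PR may differ):

```typescript
// Illustrative regexes: drop a leading ```lang fence and a trailing ```
// fence, in case the model wraps its completion in a markdown block.
const CODE_BLOCK_START_REGEX = /^```\w*\n?/;
const CODE_BLOCK_END_REGEX = /\n?```\s*$/;

function stripFences(content: string): string {
  return content
    .replace(CODE_BLOCK_START_REGEX, '')
    .replace(CODE_BLOCK_END_REGEX, '');
}
```

Plain completions without fences pass through unchanged, which is why keeping the stripping as a safety net is cheap.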
Collaborator


Is this still necessary when using it with the codestral model?

Member Author


I think we could keep it, just in case?

Also wondering if instead of writing a completer from scratch for all providers, there could be some common logic shared between all the providers, and then each provider would only provide their "fetch" method.

Member Author


there could be some common logic shared between all the providers

With this I mean sharing utility and post-processing logic, such as cleaning up responses or even caching some results if possible.
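That shared-completer idea could be sketched as a base class owning the post-processing and caching, with each provider supplying only its raw fetch. Everything below is a hypothetical design sketch; the class and method names are not from the PR:

```typescript
// Hypothetical base class: common post-processing (fence stripping) and
// result caching live here, so a provider only implements fetchCompletion.
abstract class BaseCompleter {
  private _cache = new Map<string, string>();

  // Provider-specific call to that provider's completion API.
  protected abstract fetchCompletion(
    prefix: string,
    suffix: string
  ): Promise<string>;

  async complete(prefix: string, suffix: string): Promise<string> {
    const key = `${prefix}\u0000${suffix}`;
    const cached = this._cache.get(key);
    if (cached !== undefined) {
      return cached; // cache hit: skip the network round trip
    }
    const raw = await this.fetchCompletion(prefix, suffix);
    // Shared cleanup: strip markdown fences the model may add.
    const cleaned = raw.replace(/^```\w*\n?/, '').replace(/\n?```\s*$/, '');
    this._cache.set(key, cleaned);
    return cleaned;
  }
}
```

A MistralAI completer would then subclass this and implement only the FIM request, while OpenAI-style providers would implement their own fetch against the same interface.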

this._controller.abort();
this._controller = new AbortController();

const response = await this._completer.completionWithRetry(
Collaborator


Probably for a future PR, but we should handle the 400 status code returned when the model is not compatible.
At a minimum, it could be logged to the JupyterLab console.
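A minimal sketch of that future error handling, assuming the rejected call exposes an HTTP status on the error object (the real error shape depends on the Mistral client, so treat the `status` check as an assumption):

```typescript
// Hypothetical wrapper around a completion call: swallow a 400 from an
// incompatible model with a console warning, re-throw everything else.
async function safeComplete(
  fetchCompletion: () => Promise<string>
): Promise<string | null> {
  try {
    return await fetchCompletion();
  } catch (error: any) {
    if (error?.status === 400) {
      console.warn(
        'Completion request rejected (400): the selected model may not support the FIM completion API.'
      );
      return null; // no suggestion, but the editor keeps working
    }
    throw error;
  }
}
```

Returning null lets the inline completer simply show no suggestion instead of surfacing a raw provider error to the user.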

@brichet
Collaborator

brichet commented Jun 23, 2025

Let's merge this one, since it greatly improves completion with MistralAI.

Follow up issues:

@brichet brichet merged commit 9826a7b into jupyterlite:main Jun 23, 2025
9 checks passed
@jtpio jtpio deleted the better-mistral-completions branch June 23, 2025 12:37