Add docs how to configure Docker Model Runner as the AI backend #2218
Conversation
Walkthrough

A new section was added to the AI presets documentation, providing instructions and an example JSON configuration for connecting to a Docker Model Runner instance as a local AI model provider. The documentation update clarifies that local LLM support now includes both Ollama and Docker, not just Ollama. No changes were made to any code or exported/public entities; all modifications are limited to documentation content. The new section includes a link to the official Docker Model Runner documentation and is positioned between the existing Ollama and Azure OpenAI preset sections.
Actionable comments posted: 0
🧹 Nitpick comments (2)
docs/docs/ai-presets.mdx (1)
111-112: Add clarifying comma

Minor punctuation tweak for readability.

```diff
-Note: The `ai:apitoken` is required but can be any value as the Docker model runner ignores it.
+Note: The `ai:apitoken` is required, but it can be any value because the Docker Model Runner ignores it.
```

docs/docs/faq.mdx (1)
11-12: Smooth out pronoun usage

Using “them” instead of “these” reads more naturally.

```diff
-The recommended way to configure these is through AI presets, which let you set up and easily switch between different providers and models.
+The recommended way to configure them is through AI presets, which let you set up and easily switch between different providers and models.
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
docs/docs/ai-presets.mdx (1 hunks)
docs/docs/faq.mdx (1 hunks)
🧰 Additional context used
🪛 LanguageTool
docs/docs/faq.mdx
[grammar] ~11-~11: The verb ‘is’ is singular. Did you mean: “this is” or “these are”?
Context: ...exity. The recommended way to configure these is through AI presets, which let you set u...
(SINGULAR_VERB_AFTER_THESE_OR_THOSE)
docs/docs/ai-presets.mdx
[uncategorized] ~95-~95: Possible missing comma found.
Context: ...u have Docker Desktop or running Docker CE it can run LLMs the Docker Model Runner...
(AI_HYDRA_LEO_MISSING_COMMA)
[uncategorized] ~95-~95: Possible missing comma found.
Context: ...el Runner. To connect to a Docker Model Runner use a preset similar to the following: ...
(AI_HYDRA_LEO_MISSING_COMMA)
[uncategorized] ~111-~111: Possible missing comma found.
Context: ...``` Note: The `ai:apitoken` is required but can be any value as the Docker model ru...
(AI_HYDRA_LEO_MISSING_COMMA)
[uncategorized] ~111-~111: Possible missing comma found.
Context: ...ai:apitoken` is required but can be any value as the Docker model runner ignores it. ...
(AI_HYDRA_LEO_MISSING_COMMA)
🔇 Additional comments (2)
docs/docs/ai-presets.mdx (2)
95-97: Fix awkward wording & add missing comma

The introductory sentence is hard to parse and is missing a comma after “Docker CE”.

```diff
-If you have Docker Desktop or running Docker CE it can run LLMs the Docker Model Runner. To connect to a Docker Model Runner use a preset similar to the following:
+If you have Docker Desktop installed, or are running Docker CE, you can run LLMs with the Docker Model Runner. To connect to a Docker Model Runner, use a preset similar to the following:
```
103-106: Double-check default base URL / engine path

The example mixes a Gemma image with the `llama.cpp` engine path. This may confuse users unless Gemma is actually exposed through that engine. Please verify the correct endpoint/port combination for Docker Model Runner before publishing.
This PR adds configuration documentation for Docker Model Runner integration, enabling WaveTerm users to connect to AI models running locally with privacy-first processing.
It's a docs-only change, and Wave does work on my machine with the provided configuration values:
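For context, a preset for this kind of setup might look roughly like the sketch below. This is not the exact configuration from the PR; the `ai@docker` preset name is arbitrary, and the base URL, model name, and token value are assumptions based on Docker Model Runner's documented defaults (host TCP access on port 12434, OpenAI-compatible endpoints under the `llama.cpp` engine path, and models published under the `ai/` namespace):

```json
{
  "ai@docker": {
    "display:name": "Docker Model Runner",
    "ai:baseurl": "http://localhost:12434/engines/llama.cpp/v1",
    "ai:model": "ai/gemma3",
    "ai:apitoken": "anything"
  }
}
```

As the review notes, the `ai:apitoken` value is required by the client but ignored by the Docker Model Runner, so any placeholder string works.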