
Commit a29d7e6

Fix find & replace mistakes
1 parent 7be8875 commit a29d7e6

File tree: 1 file changed (+3 −3 lines)

docs/partials/_aider-providers.mdx

Lines changed: 3 additions & 3 deletions

@@ -75,7 +75,7 @@ export OLLAMA_API_BASE=http://localhost:8989/ollama
 :::note
 
 To persist this setting, add it to your shell profile (e.g., `~/.bashrc` or
-`~/.zshrc`) or use one of's other
+`~/.zshrc`) or use one of aider's other
 [supported configuration methods](https://aider.chat/docs/config/api-keys.html).
 
 :::
@@ -96,7 +96,7 @@ Restart your shell after running `setx`.
 </TabItem>
 </Tabs>
 
-Then run:
+Then run aider:
 
 ```bash
 aider --model ollama_chat/<MODEL_NAME>
@@ -114,7 +114,7 @@ CPU cores and 16GB of RAM. If you have more compute resources available, our
 experimentation shows that larger models do yield better results.
 
 For more information, see the
-[ docs for connecting to Ollama](https://aider.chat/docs/llms/ollama.html).
+[aider docs for connecting to Ollama](https://aider.chat/docs/llms/ollama.html).
 
 </TabItem>
 </Tabs>
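
Taken together, the corrected passage describes a short setup flow: export `OLLAMA_API_BASE`, persist it in your shell profile, then start aider against an Ollama-served model. A minimal sketch of that flow, assuming a zsh profile and a placeholder model name (the commit itself does not name a specific model):

```bash
# Point aider at the local Ollama endpoint shown in the hunk context above.
export OLLAMA_API_BASE=http://localhost:8989/ollama

# Persist the setting for future sessions (~/.zshrc assumed; use ~/.bashrc for bash).
echo 'export OLLAMA_API_BASE=http://localhost:8989/ollama' >> ~/.zshrc

# Run aider against a model served by Ollama. "qwen2.5-coder:7b" is only a
# placeholder; substitute whatever model you have pulled locally.
aider --model ollama_chat/qwen2.5-coder:7b
```

On Windows, the equivalent persistence step would use `setx OLLAMA_API_BASE "http://localhost:8989/ollama"` and a new shell, in line with the "Restart your shell after running `setx`" context in the second hunk.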
