
ollama 403, what should I do? #174

Open
zero617 opened this issue Aug 7, 2024 · 5 comments

Comments


zero617 commented Aug 7, 2024

logs:
PS C:\Users\60461> ollama serve
2024/08/07 14:40:41 routes.go:1011: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST: OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS: OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:D:\Software\scoop\apps\ollama_cderv\current\ollama_runners OLLAMA_TMPDIR:]"
time=2024-08-07T14:40:41.359+08:00 level=INFO source=images.go:740 msg="total blobs: 5"
time=2024-08-07T14:40:41.360+08:00 level=INFO source=images.go:747 msg="total unused blobs removed: 0"
time=2024-08-07T14:40:41.360+08:00 level=INFO source=routes.go:1057 msg="Listening on 127.0.0.1:11434 (version 0.1.42)"
time=2024-08-07T14:40:41.361+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11.3 rocm_v5.7 cpu]"
time=2024-08-07T14:40:41.532+08:00 level=INFO source=types.go:71 msg="inference compute" id=GPU-5392dfca-71f5-39f0-f508-c97cd317dd59 library=cuda compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4060 Ti" total="16.0 GiB" available="14.9 GiB"
[GIN] 2024/08/07 - 14:40:50 | 403 | 0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/08/07 - 14:40:51 | 403 | 0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/08/07 - 14:40:52 | 403 | 0s | 127.0.0.1 | POST "/api/generate"

fishjar (Owner) commented Aug 13, 2024

Just add an environment variable when starting ollama: OLLAMA_ORIGINS=*

See here for a detailed explanation: https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-allow-additional-web-origins-to-access-ollama
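Since the log above is from PowerShell on Windows, here is a minimal sketch of one way the variable might be set there (an illustration, not something confirmed in this thread; it assumes the scoop-installed ollama binary is on PATH, as the log suggests):

# PowerShell: set the variable for the current session only, then start the server
$env:OLLAMA_ORIGINS = "*"
ollama serve

# or persist it for the user account; new terminals (and the tray app, after a restart) pick it up
setx OLLAMA_ORIGINS "*"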


Derkida commented Oct 12, 2024

> Just add an environment variable when starting ollama: OLLAMA_ORIGINS=*
>
> See here for a detailed explanation: https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-allow-additional-web-origins-to-access-ollama

I added the environment variable and still get 403.
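One possible explanation (an assumption, not confirmed in this thread): on Windows the Ollama tray app keeps a server running in the background, so a variable set in one shell, or set with setx after the app was already started, never reaches the process that actually answers requests. A quick check is to stop the background instance, start the server by hand, and read the "server config env" line it prints (the same line visible at the top of the log above):

# stop the tray/background instance first, then:
$env:OLLAMA_ORIGINS = "*"
ollama serve
# the first INFO line should now show * inside the OLLAMA_ORIGINS:[...] entry of the env map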


X-Bird commented Nov 9, 2024

When starting from your local command line, use this command:
OLLAMA_ORIGINS="*" ollama serve

Then, to test whether it works, open another console and run this command:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt":"Why is the sky blue?"
}'

If that test passes, the server side is fine.

One of the fields in the kiss-translator extension settings asks for the Model name; it has to match exactly, i.e. including the version tag.
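The 403 in the original log comes from the origin check: the browser extension sends an Origin header that the server rejects unless OLLAMA_ORIGINS allows it. A rough way to reproduce that check from the console (the extension ID below is a made-up placeholder; with the default origins shown in the log above this should return 403, and with OLLAMA_ORIGINS=* it should succeed):

curl -i http://localhost:11434/api/generate \
  -H "Origin: chrome-extension://aaaabbbbccccddddeeeeffffgggghhhh" \
  -d '{ "model": "llama3.2", "prompt": "Why is the sky blue?" }'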


adrianzhang commented Nov 22, 2024

> When starting from your local command line, use this command: OLLAMA_ORIGINS="*" ollama serve
>
> Then, to test whether it works, open another console and run this command:
>
> curl http://localhost:11434/api/generate -d '{
>   "model": "llama3.2",
>   "prompt":"Why is the sky blue?"
> }'
>
> If that test passes, the server side is fine.
>
> One of the fields in the kiss-translator extension settings asks for the Model name; it has to match exactly, i.e. including the version tag.

The test always errors out. The messages all look like this:
{"model":"aya:latest","created_at":"2024-11-22T08:36:55.085113053Z","response":",","done":false}

The model itself is fine: connecting to /v1/chat/completions from various ChatGPT-compatible UIs and asking it to translate works every time. And the request format I use is exactly the one from the KISS Ollama settings, just with {{from}}, {{to}} and {{text}} replaced by concrete values.
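For reference, lines like {"response":",","done":false} are the normal streamed chunks of /api/generate rather than error messages. A small variant of the test that turns streaming off and returns a single JSON object (stream is a documented /api/generate option; aya:latest is taken from the message above):

curl http://localhost:11434/api/generate -d '{
  "model": "aya:latest",
  "prompt": "Why is the sky blue?",
  "stream": false
}'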

@adrianzhang

> When starting from your local command line, use this command: OLLAMA_ORIGINS="*" ollama serve
> […]
> One of the fields in the kiss-translator extension settings asks for the Model name; it has to match exactly, i.e. including the version tag.

I found another issue:
#112
Writing the Ollama-related settings into the ChatGPT section works.
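As a sketch of why that workaround can work: Ollama also exposes an OpenAI-compatible endpoint, so ChatGPT-style settings can point straight at it (the model name and prompt below are illustrative only):

curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "aya:latest",
    "messages": [
      { "role": "user", "content": "Translate to zh-CN: Why is the sky blue?" }
    ]
  }'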
