GitHub Security Lab (GHSL) Vulnerability Report, AnythingLLM: GHSL-2025-056
The GitHub Security Lab team has identified a potential security vulnerability in AnythingLLM.
Summary
If AnythingLLM is configured to use Ollama with an authentication token, this token could be exposed in plain text to unauthenticated users at the /api/setup-complete endpoint.
Project
AnythingLLM
Tested Version
v1.7.8
Details
Ollama token leak in systemSettings.js (GHSL-2025-056)
AnythingLLM has an endpoint, /api/setup-complete, that requires no credentials to use, even when the main AnythingLLM instance is protected with authentication. This endpoint reveals some system information about the instance but masks most of the sensitive values. However, if AnythingLLM is set up to use Ollama with an authentication token, the token is not masked because of the error on line 475.
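The flawed pattern can be illustrated with a minimal sketch (hypothetical and simplified; the OllamaLLMAuthToken name follows the report, while OpenAiKey and all values are illustrative). Neighboring secrets are reduced to a boolean presence flag before serialization, but the Ollama token is passed through verbatim:

```javascript
// Minimal sketch of the flawed serialization pattern (hypothetical,
// simplified illustration; not the actual systemSettings.js code).
process.env.OPEN_AI_KEY = "sk-example"; // illustrative values
process.env.OLLAMA_AUTH_TOKEN = "ollama-secret";

function currentSettings() {
  return {
    // Correctly masked: the response only reveals whether a key is set.
    OpenAiKey: !!process.env.OPEN_AI_KEY,
    // Vulnerable pattern: the raw token value reaches the response body.
    OllamaLLMAuthToken: process.env.OLLAMA_AUTH_TOKEN ?? null,
  };
}

console.log(JSON.stringify(currentSettings()));
```

Because the settings object is serialized into the unauthenticated response, the raw token string ends up in the response body.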
Proof of concept
curl localhost:3001/api/setup-complete | grep OllamaLLMAuthToken
Vulnerable code location
systemSettings.js, line 475
As a result, this token can be exposed even to unauthenticated users.
Impact
Ollama token leakage on AnythingLLM grants complete access to the Ollama instance. Since Ollama offers an API for configuring the models, a potential attacker could modify the model's template or system prompt to change the model's behavior. This would enable attackers to hijack conversations of other users, invoke any tools or MCP servers utilized by AnythingLLM, and potentially access documents uploaded to AnythingLLM by other users.
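To make the impact concrete, here is a sketch of how a leaked token could be replayed against the Ollama HTTP API. The host, port, endpoint path, and bearer-token scheme are assumptions about the target deployment, not details from the report:

```javascript
// Hypothetical sketch: constructing an authenticated request to the
// Ollama API using a token recovered from /api/setup-complete.
// The address and the bearer scheme are assumptions.
function ollamaRequest(path, leakedToken) {
  return {
    url: `http://ollama.internal:11434${path}`, // assumed reachable address
    headers: { Authorization: `Bearer ${leakedToken}` },
  };
}

// e.g. enumerate the available models before tampering with a
// model's template or system prompt:
const req = ollamaRequest("/api/tags", "ollama-secret");
console.log(req.url, req.headers.Authorization);
```

With such a request helper, an attacker who can reach the Ollama instance needs nothing beyond the leaked token to act with its full privileges.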
Remediation
Conceal the Ollama token in systemSettings.js, similarly to the other tokens:
- OllamaLLMAuthToken: process.env.OLLAMA_AUTH_TOKEN ?? null,
+ OllamaLLMAuthToken: !!process.env.OLLAMA_AUTH_TOKEN,
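After this fix, the endpoint reports only whether a token is configured. A minimal sketch of the corrected behavior (illustrative, not the actual file):

```javascript
// Corrected pattern (illustrative): the double negation coerces the env
// value to a boolean, so the response reveals presence, not the secret.
process.env.OLLAMA_AUTH_TOKEN = "ollama-secret"; // illustrative value

const settings = {
  OllamaLLMAuthToken: !!process.env.OLLAMA_AUTH_TOKEN,
};

console.log(JSON.stringify(settings)); // the raw token no longer appears
```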
CWE
- CWE-200: Exposure of Sensitive Information to an Unauthorized Actor
Credit
This issue was discovered and reported by GHSL team member @artsploit (Michael Stepankin).
Contact
You can contact the GHSL team at [email protected]; please include a reference to GHSL-2025-056 in any communication regarding this issue.
We are committed to working with you to help resolve this issue. In this report you will find everything you need to effectively coordinate a resolution of this issue with the GHSL team.
If at any point you have concerns or questions about this process, please do not hesitate to reach out to us at [email protected] (please include GHSL-2025-056 as a reference). See also this blog post written by GitHub's Advisory Curation team, which explains what CVEs and advisories are, why they are important for tracking vulnerabilities and keeping downstream users informed, the CVE assignment process, and how they are used to keep open source software secure.
Disclosure Policy
This report is subject to a 90-day disclosure deadline, as described in more detail in our coordinated disclosure policy.