# Model Context Protocol Server for Uyuni Server API

## Tools

- `get_list_of_active_systems`
- `get_cpu_of_a_system`
- `get_all_systems_cpu_info`
- `check_system_updates`
- `check_all_systems_for_updates`
- `schedule_apply_pending_updates_to_system`
- `schedule_apply_specific_update`
- `get_systems_needing_security_update_for_cve`
- `get_systems_needing_reboot`
- `schedule_system_reboot`
- `cancel_action`
- `list_all_scheduled_actions`
## Usage with an MCP client

You need `uv` installed. See https://docs.astral.sh/uv

Once you have `uv`, install the dependencies with:

```
uv sync
```
You need to create the `.venv/credentials` file with content like this:

```
UYUNI_SERVER=192.168.1.124:8443
UYUNI_USER=admin
UYUNI_PASS=admin
```

Replace the values with ones that make sense for you.
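The steps above can be scripted. A minimal sketch, assuming the server address and credentials shown are placeholders you will replace with your own:

```shell
# Create the credentials file the server reads via --env-file.
# All three values below are placeholders -- substitute your own.
mkdir -p .venv
cat > .venv/credentials <<'EOF'
UYUNI_SERVER=192.168.1.124:8443
UYUNI_USER=admin
UYUNI_PASS=admin
EOF
chmod 600 .venv/credentials   # keep the password readable only by you
```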
Then, you can use this command with an MCP client:

```
uv run --env-file=.venv/credentials --directory PATH OF THIS CHECKOUT mcp-server-uyuni
```
For debugging, you can run the MCP Inspector:

```
npx @modelcontextprotocol/inspector uv run --env-file=.venv/credentials --directory PATH OF THIS CHECKOUT mcp-server-uyuni
```
## Usage with Open WebUI

Open WebUI is an extensible, feature-rich, and user-friendly self-hosted AI platform designed to operate entirely offline. It supports various LLM runners like Ollama and OpenAI-compatible APIs, with a built-in inference engine for RAG, making it a powerful AI deployment solution. More at https://docs.openwebui.com/
You need `uv` installed. See https://docs.astral.sh/uv

Start Open WebUI v0.6.10 (for MCP support we need a version >= 0.6.7):

```
uv tool run open-webui@0.6.10 serve
```
Configure the OpenAI API URL by following these instructions: https://docs.openwebui.com/getting-started/quick-start/starting-with-openai

For Gemini, use the URL https://generativelanguage.googleapis.com/v1beta/openai and get the API token from Google AI Studio: https://aistudio.google.com/
You need to create the `.venv/credentials` file with content like this:

```
UYUNI_SERVER=192.168.1.124:8443
UYUNI_USER=admin
UYUNI_PASS=admin
```

Then, you need a `config.json` with content like this. Replace the values with ones that make sense for you:
```json
{
  "mcpServers": {
    "mcp-server-uyuni": {
      "command": "uv",
      "args": ["run", "--env-file=PATH OF THIS CHECKOUT/.venv/credentials", "--directory", "PATH OF THIS CHECKOUT", "mcp-server-uyuni"]
    }
  }
}
```
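A malformed `config.json` (a missing quote is easy to introduce) will make the proxy fail at startup, so it can help to sanity-check the file before running it. A sketch, writing a sample config (the `/path/to/checkout` paths are placeholders) and validating it with Python's standard-library JSON tool:

```shell
# Write a sample config (paths are placeholders) and check it parses as JSON.
cat > config.json <<'EOF'
{
  "mcpServers": {
    "mcp-server-uyuni": {
      "command": "uv",
      "args": ["run", "--env-file=/path/to/checkout/.venv/credentials",
               "--directory", "/path/to/checkout", "mcp-server-uyuni"]
    }
  }
}
EOF
python3 -m json.tool config.json > /dev/null && echo "config.json is valid JSON"
```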
Then, you can start the Model Context Protocol to OpenAPI proxy server:

```
uvx mcpo --port 9000 --config ./config.json
```

Then you can add the tool to Open WebUI. See https://docs.openwebui.com/openapi-servers/open-webui#step-2-connect-tool-server-in-open-webui .

Note that the URL should be http://localhost:9000/mcp-server-uyuni, as explained in https://docs.openwebui.com/openapi-servers/open-webui#-optional-using-a-config-file-with-mcpo
## Usage with Docker

Build the image:

```
docker build -t mcp-server-uyuni .
```

Then, instead of `uv run ...`, you can use

```
docker run -i --rm --env-file .venv/credentials mcp-server-uyuni
```

in any of the MCP client configurations explained above.
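For example, an mcpo `config.json` entry that launches the container instead of `uv run` might look like this (a sketch; the image name matches the `docker build` command above, and the credentials path is a placeholder):

```json
{
  "mcpServers": {
    "mcp-server-uyuni": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "--env-file", "/path/to/checkout/.venv/credentials", "mcp-server-uyuni"]
    }
  }
}
```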
## Release process

To create a new release for `mcp-server-uyuni`, follow these steps:

1. **Update Documentation (`README.md`):**
   - Ensure the list of available tools under the "## Tools" section is current and reflects all implemented tools in `srv/mcp-server-uyuni/server.py`.
   - Review and update any screenshots in the `docs/` directory and their references in this `README.md` to reflect the latest UI or functionality, if necessary.
   - Verify all usage instructions and examples are still accurate.
2. **Update Manual Test Cases (`TEST_CASES.md`):**
   - Refer to the "How to Update for a New Tag/Release" section within `TEST_CASES.md`.
   - Add a new status column for the upcoming release version (e.g., `Status (vX.Y.Z)`).
   - Execute all relevant manual test cases against the code to be released.
   - Record the `Pass`, `Fail`, `Blocked`, or `N/A` status for each test case in the new version column.
3. **Commit Changes:** Commit all the updates to `README.md`, `TEST_CASES.md`, and any other changed files.
4. **Update the version in `pyproject.toml`:** Use semantic versioning to set the new version.
5. **Update `CHANGELOG.md`:**
   - Generate the changelog using `conventional-changelog-cli`. If you don't have it installed globally, you can use `npx`.
   - The command to generate the changelog using the `conventionalcommits` preset and output it to `CHANGELOG.md` (prepending the new changes) is:

     ```
     npx conventional-changelog-cli -p conventionalcommits -i CHANGELOG.md -s
     ```

   - Review the generated `CHANGELOG.md` for accuracy and formatting.
   - Commit the updated `CHANGELOG.md`.
6. **Create Git Tag:** Create a new Git tag for the release (e.g., `git tag vX.Y.Z`). Follow semantic versioning rules.
7. **Push Changes and Tags:** Push your commits (including the changelog update) and the new tag to the repository (e.g., `git push && git push --tags`).
8. **Automated Build and Push:** Pushing the tag to GitHub automatically triggers the "Docker Publish" GitHub Action. This action builds the Docker image and pushes it to the GitHub Container Registry (`ghcr.io`) with tags for the specific version (e.g., `v0.3.0`) and major.minor (e.g., `v0.3`). Pushing to `main` will update the `latest` tag.
9. **Test the container:** Pull the newly published image from `ghcr.io` and run the tests in `TEST_CASES.md` against it:

   ```
   docker run -i --rm --env-file .venv/credentials ghcr.io/uyuni-project/mcp-server-uyuni:VERSION
   ```

   (Replace `VERSION` with the new tag.)
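Since both the Git tag and the `pyproject.toml` version must follow semantic versioning, a quick shell check of a proposed tag can catch typos before anything is pushed. A sketch (`v0.3.1` is just an example value):

```shell
# Verify a proposed release tag matches the vMAJOR.MINOR.PATCH pattern
# before running `git tag`.
TAG="v0.3.1"
if echo "$TAG" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+$'; then
  echo "$TAG is a valid semver tag"
else
  echo "$TAG is not a valid semver tag" >&2
  exit 1
fi
```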
## License

MIT