Scalability and HA roadmap for large Uyuni deployments #11497
Replies: 3 comments
Hey @dzhamil-dzham A single Uyuni server should be able to manage several thousand systems, given proper hardware and tuning. More information here.
We are moving in the direction of splitting the large container, but you should not expect any significant changes that would affect scalability in the near future.
Not in the foreseeable future. That is a complex topic for Uyuni, since a large amount of data would need to be synchronized between each server (even with microservices). For large-scale setups we have a solution based on a Hub architecture. More information here.
Currently, yes, rely on custom HA/workarounds.
As stated before, we are moving in the direction of the Hub architecture, and rely on recurrent actions to interact with the machines. In the mid term, we are also planning to add more capabilities to recurrent actions, such as what to do when a minion is down or an action fails.
I hope I have answered your questions. If anything is not clear, or if you need more details, let me know.
@rjmateus thanks a lot for the detailed answers and for sharing the current direction of the project; this is very helpful for those of us planning larger deployments. We would like to clarify a few practical points to better understand what to expect in the near future:
- Container split in upcoming releases
- Shared external database usage
- Simultaneous active instances

Understanding these points would really help the community decide whether to wait for upstream improvements or invest in custom HA orchestration now. Thanks again for your work on the project and for taking the time to clarify the roadmap.
No. In 2026 we are focusing on having the Uyuni server running on Kubernetes. Then we will revisit the topic of splitting the server container. Don't expect significant changes on this topic in 2026.
Two active Uyuni servers accessing the same database can lead to catastrophic results. You can configure a standby server and replicate the database, but Uyuni also keeps data on the file system that must be persisted to provide a recovery mechanism. Example: Salt keys.
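To make the standby idea concrete, here is a rough sketch of PostgreSQL streaming replication. This is not official Uyuni guidance: with the containerized server the database lives inside the server container's volumes, so the host names, the `replicator` user, and the data directory below are all assumptions you would need to adapt.

```shell
# --- On the primary, in postgresql.conf (assumed values) ---
# wal_level = replica
# max_wal_senders = 5

# --- On the primary, in pg_hba.conf: let the standby connect for replication ---
# host  replication  replicator  <standby-ip>/32  scram-sha-256

# --- On the standby: take a base backup of the primary and start in standby mode.
# -R writes standby.signal and primary_conninfo so the server starts as a replica.
pg_basebackup -h primary.example.com -U replicator \
    -D /var/lib/pgsql/data -R -P
systemctl start postgresql
```

The database replica alone is not enough, as noted above: the file-system data (notably the Salt keys) still needs its own copy to the standby host.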
It can lead to inconsistent states: since Uyuni also keeps data on the file system, the database and the file system can end up out of sync.
At this stage, yes.
The best approach could be a standby database with replication plus periodic backups of the file system. The main part is the Salt volumes. Other data, such as channel metadata, is recreated on a daily basis from the information in the database. With a standby server, if the main server goes down you would be able to recover in a few minutes. Typically people patch during a maintenance window and don't need Uyuni to have 99.9% availability.
You're welcome. We like to engage with our community :)
Hello Uyuni team,
I am currently running Uyuni 2025.10 in containers, and I noticed that the official server image still bundles multiple services into a single container. This works for small environments but raises concerns for large-scale deployments (thousands of managed systems).
Could you clarify:
Are there any plans to move towards a microservice architecture where each container runs a single service/process?
Will future releases support production-grade scalability and HA (e.g., multiple Uyuni server instances with proper load balancing, failover, and distributed workers)?
If not, should large-scale deployments rely on custom HA/workarounds, or is there an official path planned?
Understanding your roadmap here would help the community plan large deployments without resorting to fragile “scripts + manual failover” approaches.
Thank you in advance for any guidance.