KV Store Usage
Manticore uses Consul's KV store to determine the desired state of all sdl_core and HMI containers, similar to how job files in Nomad describe the desired state of tasks and task groups. The reason for this separation is that Manticore cannot simply run these containers as soon as a request comes in; it has to respect memory and size constraints. At some point there will not be enough resources in the cluster of machines, and the question of which users should get cores and HMIs arises. To keep things orderly when there is no more room, a waiting list is implemented.
Requests that Manticore receives are not processed immediately. Instead, whichever Manticore instance is sent the request stores it as a key-value pair under /manticore/requests/data/<id>, where the id identifies the user and the value describes what type of containers to run for that user. Manticore watches the KV store for changes and sees the new request when it is added. All functionality around the waiting list, the allocation data list, and the HAProxy configuration changes whenever the contents of /manticore/requests/data change. This makes it easy to know what the state of the application should be.
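As a rough illustration, a request could be submitted through Consul's HTTP KV API; the user id and request body below are made up for the example, and the sketch assumes Node 18+ with the global fetch API:

// Hypothetical sketch: store a user's request under /manticore/requests/data/<id>.
// The id "23465" and the request body are examples only.
const consulAddress = "http://localhost:8500";

async function storeRequest(id, requestBody) {
    // PUT /v1/kv/<key> writes the stringified JSON as the value of the key
    const res = await fetch(`${consulAddress}/v1/kv/manticore/requests/data/${id}`, {
        method: "PUT",
        body: JSON.stringify(requestBody)
    });
    return res.json(); // Consul returns true when the write succeeds
}

storeRequest("23465", { hmiType: "generic" });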
You may notice a /manticore/requests/filler value existing in the KV store, as well as in other locations. When the KVs of a directory such as /manticore/requests/ are empty, no KV updates are triggered, so one element must always exist. The filler value solves that problem.
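For example, the filler key can be created with a single write (the value itself does not matter); the Consul address here is an assumption:

// Hypothetical sketch: ensure a filler key exists so that watches on the directory
// always have at least one entry to report.
await fetch("http://localhost:8500/v1/kv/manticore/requests/filler", {
    method: "PUT",
    body: "filler"
});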
Once Manticore gets updated information about the user requests, it checks the data in the waiting list located at /manticore/waiting/data, which is stored as stringified JSON. That data is then updated against the requests: IDs that appear in the requests but not in the waiting list are added to the end of the waiting list, and IDs that appear in the waiting list but not in the requests are removed (a sketch of this update appears after the example below). Here is an example of the data in the waiting list:
{
  "23465": {
    "queue": 0,
    "claimed": true
  },
  "21389": {
    "queue": 2,
    "claimed": false
  }
}
The queue property for a user ID indicates the position of that user in the waiting list. There is no guarantee that the queue numbers will be sequential, only that they are unique across all users so an order can be determined. The claimed property is true if Manticore had enough resources to allocate a core and HMI container for that user. Technically, all users are in the waiting list, but a user is actually waiting only if their claimed property is false. Once someone stops using their containers, their ID is removed from the request list, and consequently from the waiting list, leaving room for more users to claim containers.
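The update step described above might look roughly like the following; the function name and the shape of the data are assumptions based on the example, not the actual implementation:

// Hypothetical sketch: reconcile the waiting list with the current set of request IDs.
// `waiting` has the shape shown in the example above; `requestIds` is the list of ids
// found under /manticore/requests/data.
function syncWaitingList(waiting, requestIds) {
    // Remove users whose requests no longer exist
    for (const id of Object.keys(waiting)) {
        if (!requestIds.includes(id)) {
            delete waiting[id];
        }
    }
    // Append new request IDs to the end of the queue with unique queue numbers
    const queueNumbers = Object.values(waiting).map(user => user.queue);
    let nextQueue = queueNumbers.length > 0 ? Math.max(...queueNumbers) + 1 : 0;
    for (const id of requestIds) {
        if (!waiting[id]) {
            waiting[id] = { queue: nextQueue++, claimed: false };
        }
    }
    return waiting;
}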
Once the HMI and core containers are running, we have all the information we need to tell the user where those containers are. This information is difficult to compose, since multiple HTTP API requests are needed to retrieve the address and port information of these services. So, once Manticore gathers the information, it stores it in the KV store for future reference, as a key in /manticore/allocations/data/<id> whose value is stringified JSON of the address information. Once the watch that is looking at that allocation store is triggered, it has up-to-date knowledge of all the locations of these containers. This is vital for the construction of the HAProxy configuration file so that routing can be handled correctly.
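As a rough sketch, the allocation data can be read back with a recursive GET against Consul's HTTP API, which returns values base64 encoded; the decoding shown here is simply the standard way to recover the stringified JSON:

// Hypothetical sketch: read every allocation entry back from the KV store.
// Consul's HTTP API returns values base64 encoded, so each one is decoded
// before the stringified JSON is parsed.
async function getAllocations() {
    const res = await fetch("http://localhost:8500/v1/kv/manticore/allocations/data?recurse");
    const entries = await res.json(); // [{ Key, Value, ModifyIndex, ... }, ...]
    const allocations = {};
    for (const entry of entries) {
        if (entry.Key.endsWith("/filler")) continue; // skip a filler entry if present
        const id = entry.Key.split("/").pop();
        allocations[id] = JSON.parse(Buffer.from(entry.Value, "base64").toString());
    }
    return allocations;
}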
consul-template cannot simply take the information in /manticore/allocations/data and make a useful configuration file out of it; Manticore needs to transform the data into something that consul-template can parse easily. The proxy folder's purpose is to do just that. Once all the information is digested, the output is stored in the KV store again, but in a different location: /haproxy/data/. The information in /haproxy/data is strictly for routing users to the locations of the HMI and sdl_core containers.
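A minimal sketch of that transformation step, assuming made-up field names for the address information and a made-up key layout under /haproxy/data/:

// Hypothetical sketch: flatten allocation data into simple keys under /haproxy/data/
// that a consul-template template can iterate over. The key layout and field names
// here are assumptions, not the format Manticore actually writes.
async function publishHaproxyData(allocations) {
    for (const [id, info] of Object.entries(allocations)) {
        await fetch(`http://localhost:8500/v1/kv/haproxy/data/${id}`, {
            method: "PUT",
            body: JSON.stringify({
                hmi: `${info.hmiAddress}:${info.hmiPort}`,
                core: `${info.coreAddress}:${info.corePort}`
            })
        });
    }
}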
HAProxy also needs to know where the Manticore web applications are. This information is curated from a watch that finds all running Manticore services and stores the information in /haproxy/webAppAddresses. Additionally, the domain name and main port that HAProxy will listen on need to be defined; these are determined by environment variables passed in and are stored in /haproxy/domainName and /haproxy/mainPort.
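For example, these environment-driven values could be written to the KV store as follows; the environment variable names are assumptions, not necessarily the ones Manticore reads:

// Hypothetical sketch: publish the HAProxy domain name and listening port, taken from
// environment variables, into the KV store. DOMAIN_NAME and HAPROXY_HTTP_PORT are
// example names only.
async function publishHaproxyConfig() {
    await fetch("http://localhost:8500/v1/kv/haproxy/domainName", {
        method: "PUT",
        body: process.env.DOMAIN_NAME
    });
    await fetch("http://localhost:8500/v1/kv/haproxy/mainPort", {
        method: "PUT",
        body: process.env.HAPROXY_HTTP_PORT
    });
}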