v1.1.0
This release adds leader election via the --leader-elect flag, along with several supporting options.
Leader election stores its lock in a ConfigMap, and election events can be viewed with kubectl describe configmap <configmapname>.
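Assuming the default lock name and namespace set by the new flags (escalator-leader-elect in kube-system), the lock and its recent election events can be inspected like so:

```shell
# Inspect the leader-election lock ConfigMap and its election events.
# Name and namespace here are the documented defaults; adjust if you
# changed --leader-elect-config-name or --leader-elect-config-namespace.
kubectl describe configmap escalator-leader-elect --namespace kube-system
```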
Config changes for Leader election
If this flag is enabled, the Pod that Escalator runs in will need a POD_NAME environment variable, exposed via the Downward API, so that the leader annotations and election events carry the name of the pod. If POD_NAME is not set, a UUID will be used instead. Check the Deployment or AWS Deployment sample configs for details. Escalator's ClusterRole will also need updates; see Escalator RBAC for the details.
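The Downward API wiring described above can be sketched as the following Deployment fragment (the container name is illustrative; only POD_NAME and the fieldPath are required by Escalator):

```yaml
# Illustrative Deployment fragment: expose the pod's own name to
# Escalator via the Downward API so leader annotations and events
# carry the pod name rather than a generated UUID.
spec:
  template:
    spec:
      containers:
      - name: escalator          # container name is illustrative
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
```

See the Deployment or AWS Deployment sample configs for the complete manifests.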
New command-line parameters
The new parameters (from Command line options):
--leader-elect
Enable leader election behaviour. Note that Escalator uses a ConfigMap for the leader lock, not an Endpoints resource.
--leader-elect-lease-duration
Sets how long a non-leader will wait before it attempts to acquire leadership. Measured against the time of the last observed acknowledgement.
--leader-elect-renew-deadline
Sets how long an acting leader will retry refreshing leadership before giving up.
--leader-elect-retry-period
Sets how long clients will wait between attempts of any leader-election action.
--leader-elect-config-namespace
Sets the namespace where the ConfigMap used for locking will be created or looked up. Defaults to kube-system.
--leader-elect-config-name
Sets the name of the ConfigMap used for locking. Defaults to escalator-leader-elect.
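Putting the flags together, an invocation might look like the following sketch (the duration values are illustrative examples, not the defaults; check Command line options for the accepted formats):

```shell
# Illustrative invocation enabling leader election with example timings.
escalator \
  --leader-elect \
  --leader-elect-lease-duration=15s \
  --leader-elect-renew-deadline=10s \
  --leader-elect-retry-period=2s \
  --leader-elect-config-namespace=kube-system \
  --leader-elect-config-name=escalator-leader-elect
```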
Changelog
- Added leader election (#56)
- Added tests for k8s code.
- Updated deployment YAML files with required extra parameters.
Docker Image
atlassian/escalator:v1.1.0