Commit 6d1d9c6

Document docker image reconfiguration (#255)

Needed for people who need to bring their own registries

pages/deployments/sandboxing.md

The deployment-operator and scaffolds repos were both designed to be forked or vendored. Once you've decided on a strategy for both, you can configure them as repositories in your console, then go to https://{your-plural-console}/cd/settings/repositories and choose to rewire the relevant repos as needed. You can also directly modify the url and authorization information for https://github.com/pluralsh/deployment-operator.git and the other repos if you'd like to.

To reconfigure a self-managed repo for compatibilities and deprecations, you'll need to fork or vendor https://github.com/pluralsh/console, then configure the `GITHUB_RAW_URL` env var to point to the new location. The current default is https://raw.githubusercontent.com/pluralsh/console. This will then be appended with the branch + path (e.g. `${GITHUB_RAW_URL}/master/static/compatibilities`) to fetch the relevant data for both UIs.

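As a quick sanity check after rewiring, the final fetch URL can be assembled by hand; a minimal sketch, assuming a hypothetical fork under `your-org`:

```shell
# hypothetical fork location; substitute your own org/repo
GITHUB_RAW_URL="https://raw.githubusercontent.com/your-org/console"

# the console appends branch + path to this base URL
echo "${GITHUB_RAW_URL}/master/static/compatibilities"
```

Fetching that URL (e.g. with curl) from inside your network is an easy way to confirm the mirror is reachable before wiring it into the console.
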
## Customizing Docker Registries

Lots of enterprises have strict requirements around the docker registries they use, or pull-through caches that whitelist a limited set of registries. We currently publish our images to Docker Hub, GCR, and our own registry, dkr.plural.sh. We are also adding quay.io in the near future for orgs that integrate with it. The important images for setting up your own instance are:

- pluralsh/console
- pluralsh/kas
- pluralsh/deployment-controller
- pluralsh/deployment-operator
- pluralsh/agentk

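If you need to mirror these images yourself, a pull/tag/push loop is usually enough. The sketch below only prints the commands (a dry run), and the registry host is a placeholder:

```shell
REGISTRY="your.enterprise.registry"  # placeholder for your internal registry

for image in pluralsh/console pluralsh/kas pluralsh/deployment-controller \
             pluralsh/deployment-operator pluralsh/agentk; do
  # printed rather than executed so this stays a dry run; drop the echoes to mirror for real
  echo "docker pull docker.io/${image}"
  echo "docker tag docker.io/${image} ${REGISTRY}/${image}"
  echo "docker push ${REGISTRY}/${image}"
done
```

Tools like `crane` or a pull-through cache can also perform this copy without a local docker daemon.
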
The first three are configured in the main console chart and are installed once in your management cluster. The latter two are needed for your deployment agent pods and require a bit more advanced configuration to manage in bulk.

A starter values file for configuring images for your console in the management cluster would be:
```yaml
# configure the main console image
image:
  repository: your.enterprise.registry/pluralsh/console
  tag: 0.8.7 # only if you want to pin a tag (not recommended as it's set by the chart already)

# configure the console operator image
controller:
  controllerManager:
    manager:
      image:
        repository: your.enterprise.registry/pluralsh/console

# configure the kas image
kas:
  image:
    repository: your.enterprise.registry/pluralsh/kas
```

And for the agent it would be:

```yaml
# configure the main agent image
image:
  repository: your.enterprise.registry/pluralsh/deployment-operator

# configure agentk (if this image isn't pullable, kubernetes dashboarding functionality will break, but deployments can still proceed)
agentk:
  image:
    repository: your.enterprise.registry/pluralsh/agentk
```
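Before rolling new agent values out, a quick grep can confirm every `repository:` points at your mirror; a self-contained sketch (the sample file and registry host are placeholders):

```shell
# write a sample values file so the check below is self-contained
cat > /tmp/agent-values-sample.yaml <<'EOF'
image:
  repository: your.enterprise.registry/pluralsh/deployment-operator
agentk:
  image:
    repository: your.enterprise.registry/pluralsh/agentk
EOF

# flag any repository line that does NOT point at the mirror
if grep -E '^ *repository:' /tmp/agent-values-sample.yaml | grep -qv 'your\.enterprise\.registry'; then
  echo "found non-mirrored images"
else
  echo "all images mirrored"
fi
```
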

For more advanced configuration, we recommend consulting the charts directly; they're both open source at https://github.com/pluralsh/console and https://github.com/pluralsh/deployment-operator.

## Configuring Agent Helm Values

As noted above, the main console deployment is pretty easy to configure, but the agents need special handling since they must be configured in bulk. We provide a number of utilities to make reconfiguration scalable.

First, you'll want to use our agent settings to configure your helm updates for agents globally, done at `/cd/settings/agents`. You should see a screen like the following that allows you to edit the helm values for agent charts managed through Plural:

![](/assets/deployments/agent-update.png)

When you're installing an agent on a new cluster, you'll want to specify your custom values so agent pods can properly bootstrap as well. You have two main options: installing via the CLI or via Terraform. To configure custom values when using the CLI, there's a `--values` flag that can point to a yaml file with your custom values, e.g. something like:

```bash
plural cd clusters bootstrap --name my-new-cluster --values ./agent-values.yaml
```

This will merge those values into the chart, and you can use the example yaml above to jumpstart writing the exact spec you need.

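The merge is map-by-map: keys you set override the chart defaults, while everything you omit keeps its default. For example (the default repository and tag shown here are illustrative, not the chart's actual values):

```yaml
# chart default (illustrative):
#   image:
#     repository: docker.io/pluralsh/deployment-operator
#     tag: v1
#
# your agent-values.yaml:
image:
  repository: your.enterprise.registry/pluralsh/deployment-operator
#
# effective values after the merge:
#   image:
#     repository: your.enterprise.registry/pluralsh/deployment-operator
#     tag: v1   # untouched defaults survive
```
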
For Terraform, our provider also supports passing custom values, like the following for EKS:

```tf
data "aws_eks_cluster" "cluster" {
  name = var.cluster_name
}

data "aws_eks_cluster_auth" "cluster" {
  name = var.cluster_name
}

# store agent values in an adjacent file for the purposes of this example
data "local_file" "agent_values" {
  filename = "${path.module}/../helm-values/agent.yaml"
}

# this creates the cluster in our api, then performs a helm install with the agent chart in one tf resource
resource "plural_cluster" "my-cluster" {
  handle = "my-cluster"
  name   = var.cluster_name
  tags   = var.tags

  # can also be passed as a raw string instead of using the file data source
  helm_values = data.local_file.agent_values.content

  kubeconfig = {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}
```