Commit 95a5700

Add Cluster Docs
Also some more improvements for the sandboxing documentation
1 parent e03999b commit 95a5700

File tree

4 files changed, +169 -20 lines changed

pages/getting-started/advanced-config/sandboxing.md

Lines changed: 62 additions & 10 deletions
@@ -73,21 +73,60 @@ This is a suitable replacement if you're ok with some data staleness and don't h
Lots of enterprises have strict requirements around the docker registries they use, or pull-through caches that whitelist a limited set of registries. The important images for setting up your own instance are:

Plural-maintained images:

Management Cluster:

- ghcr.io/pluralsh/console
- ghcr.io/pluralsh/kas
- ghcr.io/pluralsh/deployment-controller
- ghcr.io/pluralsh/git-server (optional, if you want to use our vendored git server)

Agent:

- ghcr.io/pluralsh/agentk
- ghcr.io/pluralsh/deployment-operator

Third-party images used by our chart (these are often already vendored in an enterprise environment):

- ghcr.io/pluralsh/registry/bitnami/redis:7.4.2-debian-12-r5
- ghcr.io/pluralsh/registry/nginx:stable-alpine3.20-slim (can be any nginx image; ours is not customized)
- docker.io/kubernetesui/dashboard-api (also available via `ghcr.io/pluralsh/registry/kubernetesui/dashboard-api`)

If you want to deterministically extract the images from our charts, you can use yq, like so:

```sh
git clone https://github.com/pluralsh/console.git
cd console
helm template charts/console | yq '..|.image? | select(.)' | sort -u

git clone https://github.com/pluralsh/deployment-operator.git
cd deployment-operator
helm template charts/deployment-operator | yq '..|.image? | select(.)' | sort -u
```
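Once extracted, mirroring the list into an internal registry is a short scripting exercise. A minimal sketch, with the caveat that the registry name, image set, and `docker`-based workflow are placeholders for illustration, not an official Plural tool:

```shell
#!/bin/sh
# Emit (or run) the docker commands needed to mirror an upstream image into
# an internal registry. DRY_RUN=true only prints the commands.
INTERNAL=your.enterprise.registry
DRY_RUN=true

mirror() {
  image=$1
  target="$INTERNAL/${image#ghcr.io/}"   # e.g. your.enterprise.registry/pluralsh/console
  for cmd in "docker pull $image" "docker tag $image $target" "docker push $target"; do
    if [ "$DRY_RUN" = true ]; then echo "$cmd"; else $cmd; fi
  done
}

for image in ghcr.io/pluralsh/console ghcr.io/pluralsh/kas \
             ghcr.io/pluralsh/deployment-controller \
             ghcr.io/pluralsh/agentk ghcr.io/pluralsh/deployment-operator; do
  mirror "$image"
done
```

In practice you'd pin explicit tags (the images follow semver) and feed the list straight from the yq extraction above.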
If you plan to use Stacks, Sentinels, or our async coding agent harness, a few other images are used by our deployment-operator. They all follow the same versioning as the deployment-operator, although they sometimes have tags parameterized by the tool used:

* ghcr.io/pluralsh/harness
* ghcr.io/pluralsh/sentinel-harness
* ghcr.io/pluralsh/agent-harness

You can see them all [here](https://github.com/orgs/pluralsh/packages?repo_name=deployment-operator).

All of these products support bring-your-own-image, but if you configure a pull-through cache for these images or vendor them consistently, you can have Plural auto-wire them against an internal registry with the following CRD:

```yaml
apiVersion: deployments.plural.sh/v1alpha1
kind: AgentConfiguration
metadata:
  name: global
  namespace: plrl-deploy-operator
spec:
  baseRegistryURL: your.enterprise.registry
```

See more about this resource [here](/overview/agent-api-reference#agentconfigurationspec).
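For intuition, the effect of `baseRegistryURL` is roughly a host rewrite on each image reference. A sketch of that assumed rewrite (an illustration only, not the operator's actual code; see the linked CRD reference for the authoritative behavior):

```shell
#!/bin/sh
# Assumed behavior: swap the upstream registry host for baseRegistryURL,
# keeping the repository path intact.
BASE=your.enterprise.registry

rewrite() {
  # drop everything up to the first "/" (the upstream host), prepend BASE
  echo "$BASE/${1#*/}"
}

rewrite ghcr.io/pluralsh/agentk   # -> your.enterprise.registry/pluralsh/agentk
```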
{% callout severity="info" %}
All of these images follow semver, and are also published to `gcr.io` and `docker.io` for convenience, in the event that either of those is eligible for internal pull-through caches. The redis instance is not meaningfully customized, and any bitnami or equivalent redis container image can theoretically work there.
{% /callout %}

To configure your *management cluster* helm values, use the following template:

```yaml
# configure main console image
```

@@ -103,9 +142,22 @@ kas:

```yaml
kas:
  image:
    repository: your.enterprise.registry/pluralsh/kas

redis:
  registry: your.enterprise.registry
  repository: redis

# if you need to enable the internal git server
gitServer:
  repository: your.enterprise.registry/git-server

dashboard:
  api:
    image:
      repository: your.enterprise.registry/kubernetesui/dashboard
```

And for the *agent* it would be:

```yaml
# configure main agent
```
Lines changed: 92 additions & 0 deletions
@@ -0,0 +1,92 @@
---
title: Add A Cluster
description: How To Add An Existing Cluster to Your Plural Instance
---

Adding a new cluster to Plural is simple: it's just a matter of installing our agent onto any end cluster, and usually follows one of two paths:

1. Leverage our CLI, which wraps a full install, including registering the cluster with your Plural API and helm-installing the agent on the cluster
2. Use our terraform provider to wrap this whole process as Infrastructure as Code

Both are fully supported and execute equivalent code under the hood. If you set up your install with `plural up`, we've already wrapped a ton of fully functional GitOps workflows for you, and those are usually more featureful than doing this manually. If you want to read more about them, feel free to look at the guide here: [Create a Workload Cluster](/getting-started/how-to-use/workload-cluster).

{% callout severity="info" %}
We strongly recommend leveraging an IaC-based pattern, since it'll allow you to export terraform state into Plural for re-use and maximizes reproducibility.
{% /callout %}

## Onboard a cluster with our CLI

To add a new cluster, simply run the following with a valid kubeconfig set up locally:

```sh
plural cd clusters bootstrap --name {your-cluster-name} --tag {tag}={value} --tag {tag2}={value2}
```
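Because the bootstrap command takes names and tags as plain flags, it composes well with shell when onboarding several clusters at once. A hypothetical sketch (the cluster names and the tier tag are invented for illustration; the commands are echoed rather than executed):

```shell
#!/bin/sh
# Build the bootstrap command for a cluster, deriving a "tier" tag from the
# cluster name (e.g. prod-blue -> tier=prod).
bootstrap_cmd() {
  cluster=$1
  tier=${cluster%%-*}
  echo "plural cd clusters bootstrap --name $cluster --tag tier=$tier"
}

for cluster in dev-blue staging-blue prod-blue; do
  bootstrap_cmd "$cluster"
done
```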
To see all CLI options, feel free to use:

```sh
plural cd clusters bootstrap --help
```

If you need to reinstall our agent for any reason, just use:

```sh
plural cd clusters reinstall @{cluster-handle}
```

{% callout severity="info" %}
The `@` character is required, as it allows our CLI to differentiate names from IDs.

You should also address the cluster by handle in the event its name is not unique in your system.
{% /callout %}

## Onboard a cluster with our Terraform Provider

Here is a basic terraform snippet that shows how you can use our Terraform provider to install our agent:

```terraform
resource "plural_cluster" "this" {
  handle = var.cluster
  name   = var.cluster
  tags = {
    fleet = var.fleet
    tier  = var.tier
  }

  # metadata attaching useful cluster-level state in Plural to use for service templating
  metadata = jsonencode({
    tier = var.tier
    iam = {
      load_balancer      = module.addons.gitops_metadata.aws_load_balancer_controller_iam_role_arn
      cluster_autoscaler = module.addons.gitops_metadata.cluster_autoscaler_iam_role_arn
      external_dns       = module.externaldns_irsa_role.iam_role_arn
      cert_manager       = module.externaldns_irsa_role.iam_role_arn
    }

    vpc_id = local.vpc.vpc_id
    region = var.region
  })

  # direct kubeconfig for this cluster
  kubeconfig = {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}

# optionally, the kubeconfig can be specified at the provider level
provider "plural" {
  kubeconfig = {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}
```

This makes it easy to wrap Plural setup in existing IaC codebases and ensure full repeatability.

The metadata block is important as well, since it drives our helm + yaml templating experience within Plural CD. You can see some guides around that [here](/plural-features/continuous-deployment/service-templating).
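To make the templating connection concrete, here is a toy substitution in the same spirit. This is NOT Plural's liquid engine: the placeholder syntax and the tier-to-replica mapping are invented purely for illustration of how a value from cluster metadata can drive a rendered manifest.

```shell
#!/bin/sh
# Toy stand-in for metadata-driven templating: pick a value derived from
# cluster metadata (here, the tier) and substitute it into a manifest
# placeholder.
render() {
  tier=$1
  case "$tier" in
    prod) replicas=3 ;;
    *)    replicas=1 ;;
  esac
  echo "replicas: {{ replicas }}" | sed "s/{{ replicas }}/$replicas/"
}

render prod   # -> replicas: 3
```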
Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
../service-templating

src/routing/docs-structure.ts

Lines changed: 14 additions & 10 deletions
@@ -36,6 +36,10 @@ export const docsStructure: DocSection[] = [
       path: 'plural-cloud',
       title: 'Host Your Plural Console with Plural Cloud',
     },
+    {
+      path: 'add-a-cluster',
+      title: 'Add A Cluster To Plural',
+    },
   ],
 },
 {
@@ -86,6 +90,16 @@ export const docsStructure: DocSection[] = [
       path: 'resource-application-logic',
       title: 'Resource Application Logic',
     },
+    {
+      path: 'service-templating',
+      title: 'Service templating',
+      sections: [
+        {
+          path: 'supporting-liquid-filters',
+          title: 'Supporting Liquid Filters',
+        },
+      ],
+    },
     { path: 'lua', title: 'Dynamic Helm Configuration with Lua Scripts' },
     { path: 'global-service', title: 'Global services' },
     {
@@ -197,16 +211,6 @@ export const docsStructure: DocSection[] = [
     { path: 'filters', title: 'Liquid Filters in PR Automation' },
   ],
 },
-{
-  path: 'service-templating',
-  title: 'Service templating',
-  sections: [
-    {
-      path: 'supporting-liquid-filters',
-      title: 'Supporting Liquid Filters',
-    },
-  ],
-},
 {
   path: 'projects-and-multi-tenancy',
   title: 'Projects and multi-tenancy',
