Merged
26 changes: 15 additions & 11 deletions generated/routes.json
@@ -13,7 +13,7 @@
},
"/overview/management-api-reference": {
"relPath": "/overview/management-api-reference.md",
"lastmod": "2025-11-18T00:53:40.000Z"
"lastmod": "2025-11-21T08:11:25.000Z"
},
"/overview/agent-api-reference": {
"relPath": "/overview/agent-api-reference.md",
@@ -39,6 +39,10 @@
"relPath": "/getting-started/first-steps/plural-cloud.md",
"lastmod": "2025-07-14T15:36:50.000Z"
},
"/getting-started/first-steps/add-a-cluster": {
"relPath": "/getting-started/first-steps/add-a-cluster.md",
"lastmod": "2025-11-21T22:23:51.000Z"
},
"/getting-started/how-to-use": {
"relPath": "/getting-started/how-to-use/index.md",
"lastmod": "2025-03-12T14:59:41.000Z"
@@ -89,7 +93,7 @@
},
"/getting-started/advanced-config/sandboxing": {
"relPath": "/getting-started/advanced-config/sandboxing.md",
"lastmod": "2025-05-14T21:43:40.000Z"
"lastmod": "2025-11-21T22:23:51.000Z"
},
"/getting-started/advanced-config/network-configuration": {
"relPath": "/getting-started/advanced-config/network-configuration.md",
@@ -123,6 +127,14 @@
"relPath": "/plural-features/continuous-deployment/resource-application-logic.md",
"lastmod": "2025-10-15T14:09:53.000Z"
},
"/plural-features/continuous-deployment/service-templating": {
"relPath": "/plural-features/continuous-deployment/service-templating/index.md",
"lastmod": "2025-11-21T22:26:15.213Z"
},
"/plural-features/continuous-deployment/service-templating/supporting-liquid-filters": {
"relPath": "/plural-features/continuous-deployment/service-templating/supporting-liquid-filters.md",
"lastmod": "2025-11-21T22:26:15.230Z"
},
"/plural-features/continuous-deployment/lua": {
"relPath": "/plural-features/continuous-deployment/lua.md",
"lastmod": "2025-07-15T12:44:58.000Z"
@@ -315,14 +327,6 @@
"relPath": "/plural-features/pr-automation/filters.md",
"lastmod": "2025-05-19T07:10:18.000Z"
},
"/plural-features/service-templating": {
"relPath": "/plural-features/service-templating/index.md",
"lastmod": "2025-03-12T14:59:41.000Z"
},
"/plural-features/service-templating/supporting-liquid-filters": {
"relPath": "/plural-features/service-templating/supporting-liquid-filters.md",
"lastmod": "2025-06-10T07:40:44.000Z"
},
"/plural-features/projects-and-multi-tenancy": {
"relPath": "/plural-features/projects-and-multi-tenancy/index.md",
"lastmod": "2025-05-15T21:02:36.000Z"
@@ -469,7 +473,7 @@
},
"/deployments/sandboxing": {
"relPath": "/getting-started/advanced-config/sandboxing.md",
"lastmod": "2025-05-14T21:43:40.000Z"
"lastmod": "2025-11-21T22:23:51.000Z"
},
"/deployments/network-configuration": {
"relPath": "/getting-started/advanced-config/network-configuration.md",
145 changes: 106 additions & 39 deletions pages/getting-started/advanced-config/sandboxing.md
@@ -38,27 +38,6 @@ To set it up, you need to configure a few env vars as well, in particular:

To simplify permission management, you can also configure specific emails to be made admins automatically via another env var: `CONSOLE_ADMIN_EMAILS`. It should be a comma-separated list; on login, we'll provision each listed user as an admin within your Plural console instance. We recommend setting this only for a small set of users, then using group bindings for permissions from then on.
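As an illustrative sketch, this could be wired through the chart's `extraEnv` list shown elsewhere in this file (the email addresses are placeholders):

```yaml
# illustrative values overlay; the admin emails are placeholders
extraEnv:
  - name: CONSOLE_ADMIN_EMAILS
    value: "alice@example.com,bob@example.com"
```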

## Self-Host Git Repos

If your enterprise cannot allow external communication to GitHub, we can provide a fully self-hosted git server built on [soft-serve](https://github.com/charmbracelet/soft-serve), with the required Plural repos pre-cloned at a version compatible with your console instance. It can be easily enabled via helm with the following values:

```yaml
extraEnv:
- name: CONSOLE_DEPLOY_OPERATOR_URL
  value: http://git-server.plrl-console:23232/deployment-operator.git # uses the git server for deployment operator updates
- name: CONSOLE_ARITIFACTS_URL
  value: http://git-server.plrl-console:23232/scaffolds.git # uses the git server for our default service catalog setup artifacts
gitServer:
  enabled: true
```

We publish a new version of this every release, so you will simply need to ensure it's vendored and ready to pull on each helm upgrade. Many organizations already have a standard way to vendor docker images, and since this is deployed as a fully self-contained container image, you can repurpose that process to manage the necessary git repositories as well.

If you want to vendor the repositories entirely, the upstream repos are here:

- https://github.com/pluralsh/deployment-operator
- https://github.com/pluralsh/scaffolds

## Sandboxed Compatibility Tables

We also bundle the compatibility and deprecation data in our docker images, and you can disable live polling github by setting the env var:
@@ -73,21 +52,72 @@ This is a suitable replacement if you're ok with some data staleness and don't h

Many enterprises have strict requirements around the docker registries they use, or pull-through caches that whitelist a limited set of registries. The important images for setting up your own instance are:

Plural-maintained images:

```
Management Cluster:
- ghcr.io/pluralsh/console
- ghcr.io/pluralsh/kas
- ghcr.io/pluralsh/deployment-controller
- ghcr.io/pluralsh/deployment-operator
- ghcr.io/pluralsh/agentk
- ghcr.io/pluralsh/git-server (optional if you want to use our vendored git server)
```

```
Agent:
- ghcr.io/pluralsh/agentk
- ghcr.io/pluralsh/deployment-operator
```

```
Third party images used by our chart (these are often already vendored in an enterprise environment):
- ghcr.io/pluralsh/registry/bitnami/redis:7.4.2-debian-12-r5
- ghcr.io/pluralsh/registry/nginx:stable-alpine3.20-slim (can be any nginx image, ours is not customized)
- docker.io/kubernetesui/dashboard-api - this is also available via `ghcr.io/pluralsh/registry/kubernetesui/dashboard-api`
```

If you want to deterministically extract the images from our charts, you can use `yq`, like so:

```sh
git clone https://github.com/pluralsh/console.git
cd console
helm template charts/console | yq '..|.image? | select(.)' | sort -u

git clone https://github.com/pluralsh/deployment-operator.git
cd deployment-operator
helm template charts/deployment-operator | yq '..|.image? | select(.)' | sort -u
```
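Once extracted, that list can be mirrored into an internal registry. A minimal sketch, assuming the standard docker CLI is available — the `INTERNAL` host and the exact image set are placeholders for your environment:

```shell
# Print the docker commands needed to mirror one image into an internal
# registry; $INTERNAL and the image names below are placeholders.
mirror_cmd() {
  img="$1"
  target="$INTERNAL/${img#ghcr.io/}"   # swap the ghcr.io prefix for your registry
  printf 'docker pull %s && docker tag %s %s && docker push %s\n' \
    "$img" "$img" "$target" "$target"
}

INTERNAL=your.enterprise.registry
mirror_cmd ghcr.io/pluralsh/console
mirror_cmd ghcr.io/pluralsh/deployment-operator
```

Piping the `helm template | yq` output into a loop over `mirror_cmd` gives a repeatable vendoring script you can run on each upgrade.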

If you plan to utilize Stacks, Sentinels, or our async coding agent harness, there are a few other images utilized by our deployment-operator:

```
- ghcr.io/pluralsh/harness
- ghcr.io/pluralsh/sentinel-harness
- ghcr.io/pluralsh/agent-harness
```

You can see them all [here](https://github.com/orgs/pluralsh/packages?repo_name=deployment-operator).

All of these support bring-your-own-image in the product experience, but if you configure a pull-through cache for these images or vendor them consistently, you can have Plural auto-wire them against an internal registry with the following CRD:

```yaml
apiVersion: deployments.plural.sh/v1alpha1
kind: AgentConfiguration
metadata:
  name: global
  namespace: plrl-deploy-operator
spec:
  baseRegistryURL: your.enterprise.registry
```

See more about this resource [here](/overview/agent-api-reference#agentconfigurationspec).
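To illustrate the effect of `baseRegistryURL`, here is a rough shell sketch of the rewrite — illustrative only, as the real rewriting happens inside the deployment operator and the exact semantics may differ:

```shell
# Illustrative only: given a base registry override, swap the original
# registry host for the override, keeping the repository path.
rewrite_image() {
  base="$1"; img="$2"
  echo "$base/${img#*/}"   # drop everything up to the first slash (the host)
}

rewrite_image your.enterprise.registry ghcr.io/pluralsh/harness
# prints your.enterprise.registry/pluralsh/harness
```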

{% callout severity="info" %}
All of these images follow semver, and are also published to `gcr.io` and `docker.io` for convenience, in the event that either of those is eligible for internal pull-through caches. The redis instance is not meaningfully customized, and any bitnami or equivalent redis container image can theoretically work there.
{% /callout %}

The first three are configured in the main console chart and installed once in your management cluster; the latter two are needed for your deployment agent pod and require a bit more advanced configuration to manage in bulk.
## Docker Repository Overrides for Your Management Cluster

A starter values file for configuring images for your console in the management cluster would be:
For the main Plural helm chart (https://pluralsh.github.io/console), which configures your *management cluster*, you'll want to use the following yaml overlay:

```yaml
# configure main console image
Expand All @@ -103,24 +133,48 @@ kas:

  image:
    repository: your.enterprise.registry/pluralsh/kas
```

And for the agent it would be:
redis:
  registry: your.enterprise.registry
  repository: redis

```yaml
# configure main agent
image:
  repository: your.enterprise.registry/pluralsh/deployment-operator
# if you need to enable the internal git server
gitServer:
  repository: your.enterprise.registry/git-server

# configure agentk (if this isn't pullable kubernetes dashboarding functionality will break but deployments can still proceed)
agentk:
  image:
    repository: your.enterprise.registry/pluralsh/agentk
dashboard:
  api:
    image:
      repository: your.enterprise.registry/kubernetesui/dashboard
```

Agent helm configuration is covered a few sections below.

For more advanced configuration, we definitely recommend consulting the charts directly; they're both open source at https://github.com/pluralsh/console and https://github.com/pluralsh/deployment-operator.

## Disable cert-manager based TLS

### Self-Host Git Repos (management cluster)

If your enterprise cannot allow external communication to GitHub, we can provide a fully self-hosted git server built on [soft-serve](https://github.com/charmbracelet/soft-serve), with the required Plural repos pre-cloned at a version compatible with your console instance. It can be easily enabled via helm with the following values:

```yaml
extraEnv:
- name: CONSOLE_DEPLOY_OPERATOR_URL
  value: http://git-server.plrl-console:23232/deployment-operator.git # uses the git server for deployment operator updates
- name: CONSOLE_ARITIFACTS_URL
  value: http://git-server.plrl-console:23232/scaffolds.git # uses the git server for our default service catalog setup artifacts
gitServer:
  enabled: true
```

We publish a new version of this every release, so you will simply need to ensure it's vendored and ready to pull on each helm upgrade. Many organizations already have a standard way to vendor docker images, and since this is deployed as a fully self-contained container image, you can repurpose that process to manage the necessary git repositories as well.

If you want to vendor the repositories entirely, the upstream repos are here:

- https://github.com/pluralsh/deployment-operator
- https://github.com/pluralsh/scaffolds

### Disable cert-manager based TLS (management cluster)

Our chart defaults to including TLS reconciled by cert-manager, but if you use a cloud-integrated cert management tool like AWS Certificate Manager, it is unnecessary and could cause double encryption. Disabling it is a simple values override, done with:

Expand All @@ -137,14 +191,27 @@ kas:
enabled: false
```

## Configuring Agent Helm Values
## Configuring Agent Helm Values (Workload Clusters)

Like we said, the main console deployment is pretty easy to configure, but the agents need to be handled specially since they need to be configured in bulk. We provide a number of utilities to make reconfiguration scalable.
Agent configuration must be handled specially, since agents need to be configured in bulk. We provide a number of utilities to make reconfiguration scalable.

First, you'll first want to use our agent settings to configure your helm updates for agents globally, done at `/cd/settings/agents`. You should see a screen like the following that allows you to edit the helm values for agent charts managed through Plural:
First, you'll want to use our agent settings to configure your helm updates for agents globally, done at `{your-console-fqdn}/cd/settings/agents`. You should see a screen like the following that allows you to edit the helm values for agent charts managed through Plural:

![](/assets/deployments/agent-update.png)

This is the most relevant yaml blob:

```yaml
# configure main agent
image:
  repository: your.enterprise.registry/pluralsh/deployment-operator

# configure agentk (if this isn't pullable kubernetes dashboarding functionality will break but deployments can still proceed)
agentk:
  image:
    repository: your.enterprise.registry/pluralsh/agentk
```

This can also be set via CRD using:

```yaml
Expand Down
109 changes: 109 additions & 0 deletions pages/getting-started/first-steps/add-a-cluster.md
@@ -0,0 +1,109 @@
---
title: Add A Cluster
description: How To Add An Existing Cluster to Your Plural Instance
---

Adding a new cluster to Plural is very simple: it's just a matter of installing our agent onto the cluster, and usually follows one of two paths:

1. Leverage our CLI, which wraps a full install, including registering with your Plural API and helm installing the agent on the cluster
2. Use our terraform provider to wrap this whole process as Infrastructure as Code

Both are fully supported and execute equivalent code under the hood. If you set up your install with `plural up`, we've already wrapped a ton of fully functional GitOps workflows for you, and those are usually more featureful than doing this manually. If you want to read more about them, see the guide here: [Create a Workload Cluster](/getting-started/how-to-use/workload-cluster).

{% callout severity="info" %}
We strongly recommend leveraging an IaC-based pattern, since it allows you to export terraform state into Plural for re-use and maximizes reproducibility.
{% /callout %}


## Onboard a cluster with our CLI

To add a new cluster, simply run the following with a valid kubeconfig set up locally:

```sh
plural cd clusters bootstrap --name {your-cluster-name} --tag {tag}={value} --tag {tag2}={value2}
```

To see all CLI options, feel free to use:

```sh
plural cd clusters bootstrap --help
```

If you need to reinstall our agent for any reason, just use:

```sh
plural cd clusters reinstall @{cluster-handle}
```

{% callout severity="info" %}
The `@` character is required, as it allows our CLI to differentiate names from IDs.

You should also address the cluster by handle in the event its name is not unique in your system.
{% /callout %}

## Onboard a cluster with our Terraform Provider

Here is a basic terraform snippet that shows how you can use our Terraform provider to install our agent:

```terraform
resource "plural_cluster" "this" {
  handle = var.cluster
  name   = var.cluster
  tags = {
    fleet = var.fleet
    tier  = var.tier
  }

  # metadata attaching useful cluster-level state in Plural to use for service templating
  metadata = jsonencode({
    tier = var.tier
    iam = {
      load_balancer      = module.addons.gitops_metadata.aws_load_balancer_controller_iam_role_arn
      cluster_autoscaler = module.addons.gitops_metadata.cluster_autoscaler_iam_role_arn
      external_dns       = module.externaldns_irsa_role.iam_role_arn
      cert_manager       = module.externaldns_irsa_role.iam_role_arn
    }

    vpc_id = local.vpc.vpc_id
    region = var.region
  })

  # direct kubeconfig for this cluster
  kubeconfig = {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}

# optionally can specify kubeconfig at the provider level

provider "plural" {
  kubeconfig = {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}
```

This makes it easy to wrap Plural setup in existing IaC codebases and ensure full repeatability.

The metadata block is also important, as it drives our helm + yaml templating experience within Plural CD. You can see some guides around that [here](/plural-features/continuous-deployment/service-templating).
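As a hypothetical sketch of how that metadata could be consumed — the field paths mirror the terraform example above, and the exact templating syntax is covered in the linked guide:

```yaml
# hypothetical values fragment for a templated service; assumes the
# metadata block from the terraform example above
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: {{ cluster.metadata.iam.external_dns }}
```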

## Next Steps

Once onboarded, you'll get a few main workflows connected to your cluster:

* GitOps Continuous Deployment - learn more [here](/plural-features/continuous-deployment)
* Kubernetes Dashboarding - learn more [here](/plural-features/kubernetes-dashboard)
* Plural AI - learn more [here](/plural-features/plural-ai)
* Plural Flows - learn more [here](/plural-features/flows)

If you want a robust, repeatable, and scalable way to provision clusters, or other forms of cloud infrastructure, we definitely recommend looking into [Stacks](/plural-features/stacks-iac-management).

And if you want everything working out of the box, we'd recommend using `plural up` and going through the [How To Guide](/getting-started/how-to-use) we've constructed, which leverages a lot of the GitOps templates built into that experience. This covers:

1. Kubernetes Fleet Provisioning
2. Managing a runtime of Kubernetes add-ons
3. Deploying microservices to k8s and managing them as Flows