Merge pull request #111 from kubermatic/terraform-rest-provider
KKP User Cluster Management with Terraform REST API and Cluster CRDs
toschneck authored May 28, 2024
2 parents c510d3a + 24b9466 commit 082c7c5
Showing 31 changed files with 1,693 additions and 7 deletions.
6 changes: 5 additions & 1 deletion README.md
@@ -9,6 +9,11 @@ Dedicated components for customer purposes.

| Name | Description |
|-----------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **[Overview: Manage Clusters via API/Cluster CRD with GitOps Tooling](components/api/README.md)** | |
| [api/cluster-management-by-api](components/api/cluster-management-by-api) | Bash-based management scripts to specify your KKP cluster by API for CI/CD or GitOps purposes, see [Cluster Provisioning by API via Bash/Curl](components/api/cluster-management-by-api/README.md). |
| [api/terraform-kkp-cluster-provider](components/api/terraform-kkp-cluster-provider) | Terraform-based management of KKP user clusters for GitOps, see [KKP Terraform REST Provider](components/api/terraform-kkp-cluster-provider/README.md). |
| [api/cluster-management-by-crds](components/api/cluster-management-by-crds) | Management of KKP user clusters via `Cluster` or `ClusterTemplate` objects as `.yaml` files for GitOps, see [Cluster management for KKP with Cluster CRDs](components/api/cluster-management-by-crds/README.md). |
| | |
| [certificates/self-signed-ca](components/certificates/self-signed-ca) | How to create and manage a self-signed CA in KKP |
| [controllers/aws-private-ip-enforce-controller](components/controllers/aws-private-ip-enforce-controller) | Enforces the `assignPublicIP: false` flag on all user cluster machine deployments |
| [controllers/component-override-controller](components/controllers/component-override-controller) | This bash-controller watches over Cluster objects and controls part of the spec.componentOverride. |
@@ -23,7 +28,6 @@ Dedicated components for customer purposes.
| [vm-images/packer-ubuntu1804-vsphere-template](./components/vm-images/packer-ubuntu1804-vsphere-template) | A Packer template to customize an Ubuntu 18.04 cloud image on vSphere |
| [s3/s3-syncer-aws-cli](./components/s3/s3-syncer-aws-cli) | s3-syncer CronJob based on the `aws s3` CLI to sync two different S3 locations, as well as Azure (via the MinIO Azure Gateway) |
| [s3/s3-dbdump-syncer](./components/s3/s3-dbdump-syncer) | s3-syncer-based CronJob that creates a DB dump of a PostgreSQL database and syncs it via the `aws s3` CLI to a target S3 location. |
| [api/cluster-management-by-api](components/api/cluster-management-by-api) | Bash based management scripts to specify your KKP cluster by API for CI/CD or GitOPs purposes. |
| [vmware-exporter](components/vmware-exporter) | Helm chart for VMware Exporter and Dashboard for Prometheus and Grafana for monitoring of vSphere environments in the KKP MLA stack. |
| [nutanix-exporter](components/nutanix-exporter) | Helm chart for [nutanix-exporter](https://github.com/claranet/nutnix-exporter) - exporter for Prometheus that can be used for monitoring of Nutanix-based environments. |

23 changes: 23 additions & 0 deletions components/api/README.md
@@ -0,0 +1,23 @@
# Overview: Manage Clusters via API/Cluster CRD with GitOps Tooling

## Cluster Provisioning by API via Bash/Curl

See [Cluster Provisioning by API via Bash/Curl](./cluster-management-by-api/README.md).

![KKP REST-API via Bash/Curl Architecture Overview](.assets/kkp-rest-api-bash-arch.drawio.png)

## KKP Terraform REST Provider

See [KKP Terraform REST Provider](./terraform-kkp-cluster-provider/README.md).

![KKP REST-API Terraform Provider Architecture Overview](.assets/kkp-rest-api-terraform-provider-arch.png)

## Cluster management for KKP with Cluster CRDs

See [Cluster management for KKP with Cluster CRDs](./cluster-management-by-crds/README.md).

![KKP Cluster Apply via CRD Architecture Overview](.assets/kkp-cluster-apply-via-crd-arch.png)

---

> Image Source: local [kkp-rest-API-Terraform-Cluster-CRD-Architecture-Drawing.drawio.xml](.assets/kkp-rest-API-Terraform-Cluster-CRD-Architecture-Drawing.drawio.xml) or [Google Drive](https://drive.google.com/file/d/1G8-AerEndAkR17ON4DOIrOAb_-OxEVnH/view?usp=sharing)
28 changes: 22 additions & 6 deletions components/api/cluster-management-by-api/README.md
@@ -1,6 +1,22 @@
# Cluster Provisioning by API
# Cluster Provisioning by API via Bash/Curl

In the following repo you find an example how to manage your KKP user clusters by using the KKP API. For easy start how the scripts works, take a look into the example [`example-run-env`](./example-run-env) folder.
In the following folders, you find an example of how to manage your KKP user clusters by using the KKP API.

## Architecture

When using the given example inside any GitOps tooling, the workflow is as follows:

![KKP REST-API via Bash/Curl Architecture Overview](../.assets/kkp-rest-api-bash-arch.drawio.png)
> Image Source: local [kkp-rest-API-Terraform-Cluster-CRD-Architecture-Drawing.drawio.xml](../.assets/kkp-rest-API-Terraform-Cluster-CRD-Architecture-Drawing.drawio.xml) or [Google Drive](https://drive.google.com/file/d/1G8-AerEndAkR17ON4DOIrOAb_-OxEVnH/view?usp=sharing)
1) Use an authentication token provided by the [KKP Service Accounts](https://docs.kubermatic.com/kubermatic/main/architecture/concept/kkp-concepts/service-account/using-service-account/)
2) Talk to the [KKP REST API](https://docs.kubermatic.com/kubermatic/main/references/rest-api-reference/) with the given payload, which has been rendered beforehand (here, by the provided Bash scripts)
3) The Kubermatic API transforms the JSON payload into a [Cluster](https://docs.kubermatic.com/kubermatic/main/references/crds/#cluster) object and applies it against the matching Seed Cluster Kubernetes API endpoint.
4) The Seed Controller Manager uses the [ClusterSpec](https://docs.kubermatic.com/kubermatic/main/references/crds/#clusterspec) and creates the necessary specs for the control plane creation of a [KKP user cluster](https://docs.kubermatic.com/kubermatic/main/architecture/#user-cluster).
5) The containerized control plane objects spin up (Deployments & StatefulSets), and the Seed Controller Manager creates the necessary external cloud provider resources (e.g., a security group at the external cloud).
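
As a minimal sketch of steps 1 and 2 (hedged: the endpoint `kkp.example.com`, the project ID, and `cluster-payload.json` are placeholders, not part of this repo), the interaction with the REST API looks roughly like:

```bash
# Placeholder endpoint, token, and project ID; adjust to your KKP installation.
export KKP_API_URL="https://kkp.example.com"
export KKP_TOKEN="<service-account-token>"
export PROJECT_ID="<project-id>"

# Verify the token: list the projects visible to the service account
curl -sS -H "Authorization: Bearer ${KKP_TOKEN}" "${KKP_API_URL}/api/v2/projects"

# Create a cluster from a pre-rendered JSON payload (placeholder file name)
curl -sS -X POST \
  -H "Authorization: Bearer ${KKP_TOKEN}" \
  -H "Content-Type: application/json" \
  -d @cluster-payload.json \
  "${KKP_API_URL}/api/v2/projects/${PROJECT_ID}/clusters"
```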

## Example

For an easy start on how the scripts work, take a look at the example [`example-run-env`](./example-run-env) folder.
```bash
cd example-run-env
make help
@@ -38,17 +54,17 @@ make help:

Examples of API Usage:
- E2E Tests: https://github.com/kubermatic/kubermatic/blob/master/pkg/test/e2e/utils/client.go#L454
- Terraform Provider: https://github.com/kubermatic/terraform-provider-kubermatic/blob/master/kubermatic/resource_cluster.go
- CLI `kkpctl`
- [Terraform REST API Provider](../terraform-kkp-cluster-provider/README.md)
- [Kubermatic Go library](https://github.com/kubermatic/go-kubermatic)
- Terraform Provider: https://github.com/kubermatic/terraform-provider-kubermatic/blob/master/kubermatic/resource_cluster.go
- CLI `kkpctl`
- [Blog: KKPCTL: The Command Line Tool for Kubermatic Kubernetes Platform](https://www.kubermatic.com/blog/kkpctl-the-command-line-tool-for-kubermatic-kubernetes-platform/)
- [Github: cedi/kkpctl](https://github.com/cedi/kkpctl)


## Planned Improvements
- https://github.com/kubermatic/kubermatic/issues/6414
- Service Account API v2 (personalized access)
- Manage User Cluster by Cluster Objects (end user facing)
- Feature Complete Terraform provider

## Declarative Stable API Objects
- JSON: Every supported call from the KKP REST API, see the [REST-API Reference](https://docs.kubermatic.com/kubermatic/master/references/rest_api_reference/)
1 change: 1 addition & 0 deletions components/api/cluster-management-by-crds/.gitignore
@@ -0,0 +1 @@
*kubeconfig
113 changes: 113 additions & 0 deletions components/api/cluster-management-by-crds/README.md
@@ -0,0 +1,113 @@
# Cluster management for KKP with Cluster CRDs

In this example we show how to manage clusters by using something like `kubectl apply -f cluster.yaml`, giving you declarative management of `Cluster` objects.

***IMPORTANT NOTE:***

**Currently, it's required to use the `kubectl` command against the responsible seed cluster, which does** ***NOT SUPPORT MULTI-TENANT*** **access. In newer KKP versions, we try to solve these restrictions!**

For a more GitOps-style declarative way with multi-tenancy, check the [KKP REST API](https://docs.kubermatic.com/kubermatic/main/references/rest-api-reference/) and the [terraform-kkp-cluster-provider](../terraform-kkp-cluster-provider/README.md).

## Architecture

When using the given example inside any GitOps tooling, the workflow is as follows:

![KKP Cluster Apply via CRD Architecture Overview](../.assets/kkp-cluster-apply-via-crd-arch.png)
> Image Source: local [kkp-rest-API-Terraform-Cluster-CRD-Architecture-Drawing.drawio.xml](../.assets/kkp-rest-API-Terraform-Cluster-CRD-Architecture-Drawing.drawio.xml) or [Google Drive](https://drive.google.com/file/d/1G8-AerEndAkR17ON4DOIrOAb_-OxEVnH/view?usp=sharing)
1) Use `kubectl` with a generated service account authentication token (or personalized account) within a regular `kubeconfig`. The service account should at least have access to the target seed and the required `Cluster` or `ClusterTemplate` objects you want to manage.
2) The applied [Cluster](https://docs.kubermatic.com/kubermatic/main/references/crds/#cluster) object gets verified and persistently stored by the matching Seed Cluster Kubernetes API endpoint.
3) The Seed Controller Manager uses the [ClusterSpec](https://docs.kubermatic.com/kubermatic/main/references/crds/#clusterspec) and creates the necessary specs for the control plane creation of a [KKP user cluster](https://docs.kubermatic.com/kubermatic/main/architecture/#user-cluster).
4) The containerized control plane objects spin up (Deployments & StatefulSets), and the Seed Controller Manager creates the necessary external cloud provider resources (e.g., a security group at the external cloud).
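
Before applying anything, a quick sanity check helps to verify that the kubeconfig from step 1 can actually manage the relevant objects on the seed (a sketch; your RBAC setup may differ):

```bash
# assumes the service-account kubeconfig for the seed cluster from step 1
export KUBECONFIG=seed-cluster-kubeconfig

# the KKP CRDs live in the kubermatic.k8c.io API group
kubectl auth can-i create clusters.kubermatic.k8c.io
kubectl auth can-i list clustertemplates.kubermatic.k8c.io
```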

## Cluster CRD

In KKP you can manage your clusters with a `Cluster` object. Note that this object lives in the seed cluster, to which you need access. In addition to the `Cluster` object, you will also need a `MachineDeployment` to create nodes. This can be done via the `kubermatic.io/initial-machinedeployment-request` annotation or as a separate step after the cluster is created.
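
For example, to check whether an existing cluster still carries the initial MachineDeployment request (placeholder cluster ID; the dots in the annotation key must be escaped in JSONPath):

```bash
# placeholder cluster ID; note the escaped dots in the annotation key
kubectl get cluster xxx-crd-cluster-id \
  -o jsonpath='{.metadata.annotations.kubermatic\.io/initial-machinedeployment-request}'
```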

More information about the specs can be found at:
* [KKP Docs > Kubermatic CRD Reference](https://docs.kubermatic.com/kubermatic/main/references/crds/)
* [`Cluster`](https://docs.kubermatic.com/kubermatic/main/references/crds/#cluster)
* [MachineController > MachineDeployment Examples](https://github.com/kubermatic/machine-controller/tree/main/examples)

### Example: Apply a Cluster

**Note: This expects that the values of the [`./cluster/*.yaml`](./cluster) files have been adjusted.**
```bash
# connect to target seed
export KUBECONFIG=seed-cluster-kubeconfig

# add credentials as secret to kubermatic namespace
kubectl apply -f cluster/00_secret.credentials.example.yaml

# create cluster with initial machine deployment
kubectl apply -f cluster/10_cluster.spec.vsphere.example.yaml

#... check status of cluster creation
kubectl get cluster xxx-crd-cluster-id

# extract kubeconfig secret from cluster namespace
kubectl get cluster xxx-crd-cluster-id -o yaml | grep namespaceName
kubectl get secrets/admin-kubeconfig -n cluster-xxx-crd-cluster-id --template={{.data.kubeconfig}} | base64 -d > cluster/cluster-xxx-crd-cluster-id-kubeconfig

# now connect and check if you get access
export KUBECONFIG=cluster/cluster-xxx-crd-cluster-id-kubeconfig
kubectl get pods -A
# after some provisioning time you should also see machines
kubectl get md,ms,ma,node -A

# As an example of machine management, add an extra node
# (or, if non-initial machines get applied, manage nodes only via the machinedeployment.yaml and remove the 'kubermatic.io/initial-machinedeployment-request' annotation)
kubectl apply -f cluster/20_machinedeployment.spec.vsphere.example.yaml
# now you should see additional MachineDeployments created
kubectl get md,ms,ma,node -A
```
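
To follow the provisioning progress, one option is to watch the `Cluster` object and inspect its status (a sketch; the exact status fields vary between KKP versions, so check `kubectl explain cluster.status` first):

```bash
export KUBECONFIG=seed-cluster-kubeconfig
# watch the cluster object itself
watch kubectl get cluster xxx-crd-cluster-id
# or inspect the status (requires jq)
kubectl get cluster xxx-crd-cluster-id -o json | jq .status
```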
If you want to delete the cluster, it's enough to delete it via the ID:
```bash
export KUBECONFIG=seed-cluster-kubeconfig
kubectl delete cluster xxx-crd-cluster-id

### will also work
kubectl delete -f cluster/10_cluster.spec.vsphere.example.yaml
```
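
Cluster deletion is asynchronous (finalizers first clean up the cloud provider resources), so in CI/CD pipelines a small polling loop, sketched here with the placeholder ID from above, can be used to wait until the object is really gone:

```bash
# poll until the Cluster object (including its finalizers) is gone
while kubectl get cluster xxx-crd-cluster-id >/dev/null 2>&1; do
  echo "waiting for cluster deletion..."
  sleep 10
done
```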

### Workflow to create `cluster.yaml`
1. Create Cluster via UI
2. Extract the Cluster values and remove the server-managed metadata (this uses the `kubectl-neat` plugin; see the install note after this list):
```bash
export KUBECONFIG=seed-cluster-kubeconfig
mkdir -p my-cluster/.original
kubectl get cluster xxxxxxx -o yaml > my-cluster/.original/mycluster.spec.original.yaml
kubectl get cluster xxxxxxx -o yaml | kubectl-neat > my-cluster/mycluster.spec.yaml
```
3. Check the diff between the two YAML files and compare it with the given example diffs:
```bash
#changes of the example
diff cluster/10_cluster.spec.vsphere.example.yaml cluster/.original/cluster.spec.original.yaml

#diff between your new and the example spec
diff cluster/10_cluster.spec.vsphere.example.yaml my-cluster/mycluster.spec.yaml
```
4. Ensure the parameters match. In particular, you need to ensure matching values for:
   * Project ID: Secrets, Labels, MachineDeployments
* Cloud Provider Credentials and Specs
* vsphere: folder path
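
The `kubectl-neat` step in point 2 assumes the plugin is installed; it is typically available via [krew](https://krew.sigs.k8s.io/) (hedged: install paths and invocation may differ on your system):

```bash
# install the neat plugin via krew; afterwards it is callable as `kubectl neat`
kubectl krew install neat
kubectl get cluster xxxxxxx -o yaml | kubectl neat > my-cluster/mycluster.spec.yaml
```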


## ClusterTemplate Management

Another option is to manage the [`ClusterTemplate`](https://docs.kubermatic.com/kubermatic/main/references/crds/#clustertemplate) object. Here, a non-initialized template gets created, and a separate instance object creates a copy of it. **BUT** any change to the `ClusterTemplate` will **NOT** get applied to the instance.
```bash
# connect to target seed
export KUBECONFIG=seed-cluster-kubeconfig

# add credential preset as secret to kubermatic namespace
kubectl apply -f cluster/00_secret.credentials.example.yaml

# create the template
kubectl apply -f clustertemplate/clustertemplate.vsphere-cluster-mla.yaml

# create cluster as a "copy" of the template
kubectl apply -f clustertemplate/clustertemplateinstance.vsphere.example.yaml

# check the created instances
kubectl get clustertemplate,clustertemplateinstance,cluster
```
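
As a hedged aside (the field may differ between KKP versions, so verify with `kubectl explain clustertemplateinstance.spec`): a `ClusterTemplateInstance` carries a replica count, so creating additional copies of the template could look like:

```bash
# hypothetical instance name; spec.replicas controls how many clusters
# are created from the referenced template
kubectl patch clustertemplateinstance <instance-name> \
  --type merge -p '{"spec":{"replicas":2}}'
```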