diff --git a/README.md b/README.md new file mode 100644 index 0000000..d9f45e1 --- /dev/null +++ b/README.md @@ -0,0 +1,7 @@ +# koperator-docs +Documentation for Koperator - the operator for managing Apache Kafka on Kubernetes + +This repository contains the documentation for the [Koperator](https://github.com/banzaicloud/koperator). + +- The _master_ branch contains the public documentation of Koperator, published at https://banzaicloud.com/docs/supertubes/kafka-operator/ +- The _supertubes-integration_ branch contains the Koperator docs for Calisti. The different branches are needed because of the different release schedules of Koperator and Calisti, and the path/URL differences between the public Koperator docs and Calisti. The Calisti docs are published at [https://docs.calisti.app/sdm/koperator/](https://docs.calisti.app/sdm/koperator/). diff --git a/docs/_index.md b/docs/_index.md index 517c6a5..0bc09c5 100644 --- a/docs/_index.md +++ b/docs/_index.md @@ -1,17 +1,20 @@ --- -title: Kafka operator -img: /docs/supertubes/kafka-operator/img/kafka-operator-arch.png +title: Koperator +img: /docs/koperator-docs/img/kafka-operator-arch.png weight: 700 +aliases: + - /sdm/koperator/features/ cascade: module: kafka-operator - githubEditUrl: "https://github.com/banzaicloud/kafka-operator-docs/edit/master/docs/" + githubEditUrl: "https://github.com/banzaicloud/koperator-docs/edit/master/docs/" + operatorName: "Koperator" --- -The Banzai Cloud Kafka operator is a Kubernetes operator to automate provisioning, management, autoscaling and operations of [Apache Kafka](https://kafka.apache.org) clusters deployed to K8s. +The {{< kafka-operator >}} (formerly called Banzai Cloud Kafka Operator) is a Kubernetes operator to automate the provisioning, management, autoscaling, and operations of [Apache Kafka](https://kafka.apache.org) clusters deployed to Kubernetes.
## Overview -[Apache Kafka](https://kafka.apache.org) is an open-source distributed streaming platform, and some of the main features of the **Kafka-operator** are: +[Apache Kafka](https://kafka.apache.org) is an open-source distributed streaming platform, and some of the main features of the **{{< kafka-operator >}}** are: - the provisioning of secure and production-ready Kafka clusters - **fine grained** broker configuration support @@ -23,17 +26,114 @@ The Banzai Cloud Kafka operator is a Kubernetes operator to automate provisionin - graceful rolling upgrade - advanced topic and user management via CRD -![Kafka-operator architecture](./img/kafka-operator-arch.png) +![{{< kafka-operator >}} architecture](/sdm/koperator/img/kafka-operator-arch.png) ->We took a different approach to what's out there - we believe for a good reason - please read on to understand more about our [design motivations](features/) and some of the [scenarios](scenarios/) which were driving us to create the Banzai Cloud Kafka operator. - -{{% include-headless "doc/kafka-operator-supertubes-intro.md" %}} +{{% include-headless "kafka-operator-supertubes-intro.md" "sdm" %}} ## Motivation -At [Banzai Cloud](https://banzaicloud.com) we are building a Kubernetes distribution, [PKE](/products/pke/), and a hybrid-cloud container management platform, [Pipeline](/products/pipeline/), that operate Kafka clusters (among other types) for our customers. Apache Kafka predates Kubernetes and was designed mostly for `static` on-premise environments. State management, node identity, failover, etc all come part and parcel with Kafka, so making it work properly on Kubernetes and on an underlying dynamic environment can be a challenge. +Apache Kafka predates Kubernetes and was designed mostly for `static` on-premise environments. State management, node identity, failover, etc. all come part and parcel with Kafka, so making it work properly on Kubernetes and in an underlying dynamic environment can be a challenge.
+ +There are already several approaches to operating Apache Kafka on Kubernetes; however, we did not find them appropriate for use in a highly dynamic environment, nor capable of meeting our customers' needs. At the same time, there is substantial interest within the Kafka community for a solution which enables Kafka on Kubernetes, both in the open source and closed source space. +>We took a different approach to what's out there - we believe for a good reason - please read on to understand more about our [design motivations](#features) and some of the [scenarios](scenarios/) which were driving us to create the {{< kafka-operator >}}. + +Finally, our motivation is to build an open source solution and a community which drives the innovation and features of this operator. We are long-term contributors and active community members of both Apache Kafka and Kubernetes, and we hope to recreate a similar community around this operator. + +## Koperator features {#features} + +### Design motivations +Kafka is a stateful application. The first piece of the puzzle is the Broker, which is a simple server capable of creating/forming a cluster with other Brokers. Every Broker has its own **unique** configuration which differs slightly from all others - the most relevant of which is the ***unique broker ID***. + +All Kafka on Kubernetes operators use [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) to create a Kafka Cluster. Just to quickly recap from the K8s docs: + +>StatefulSet manages the deployment and scaling of a set of Pods, and provide guarantees about their ordering and uniqueness. Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains sticky identities for each of its Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that is maintained across any rescheduling.
+ +How does this look from the perspective of Apache Kafka? + +With StatefulSet we get: + +- unique Broker IDs generated during Pod startup +- networking between brokers with headless services +- unique Persistent Volumes for Brokers + +Using StatefulSet we **lose:** + +- the ability to modify the configuration of unique Brokers +- the ability to remove a specific Broker from a cluster (StatefulSet always removes the most recently created Broker) +- the ability to use multiple, different Persistent Volumes for each Broker + +{{< kafka-operator >}} uses `simple` Pods, ConfigMaps, and PersistentVolumeClaims, instead of StatefulSet. Using these resources allows us to build an Operator which is better suited to managing Apache Kafka. + +With the {{< kafka-operator >}} you can: + +- modify the configuration of unique Brokers +- remove specific Brokers from clusters +- use multiple Persistent Volumes for each Broker + +## Features + +### Fine Grained Broker Configuration Support + +We needed to be able to react to events in a fine-grained way for each Broker - and not in the limited way StatefulSet does (which, for example, removes the most recently created Brokers). Some of the available solutions try to overcome these deficits by placing scripts inside the container to generate configurations at runtime, whereas the {{< kafka-operator >}}'s configurations are deterministically placed in specific ConfigMaps. + +### Graceful Kafka Cluster Scaling with the help of our CruiseControlOperation custom resource + +We know how to operate Apache Kafka at scale (we are contributors and have been operating Kafka on Kubernetes for years now). We believe, however, that LinkedIn has even more experience than we do. To scale Kafka clusters both up and down gracefully, we integrated LinkedIn's [Cruise-Control](https://github.com/linkedin/cruise-control) to do the hard work for us. We already have good defaults (i.e. plugins) that react to events, but we also allow our users to write their own.
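The scaling flow above is driven declaratively from the `KafkaCluster` custom resource: adding or removing an entry in the broker list makes the operator reconcile the cluster and invoke Cruise Control for a graceful rebalance. A minimal sketch, assuming an existing three-broker cluster named `kafka` and the `default` broker config group used in the sample CRs (broker IDs and names are illustrative):

```yaml
# Sketch of a graceful upscale: appending broker id 3 to a three-broker cluster.
# On apply, the operator starts the new broker and lets Cruise Control move
# partitions onto it; deleting an entry instead triggers a graceful drain
# before the broker is removed.
apiVersion: kafka.banzaicloud.io/v1beta1
kind: KafkaCluster
metadata:
  name: kafka
  namespace: kafka
spec:
  brokers:
    - id: 0
      brokerConfigGroup: "default"
    - id: 1
      brokerConfigGroup: "default"
    - id: 2
      brokerConfigGroup: "default"
    - id: 3                         # newly added broker
      brokerConfigGroup: "default"
```

Applying such an edit (for example with `kubectl apply`) is, in effect, what the alert-driven scaling actions described below perform on the cluster specification.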
+ +### External Access via LoadBalancer -There are already several approaches to operating Kafka on Kubernetes, however, we did not find them appropriate for use in a highly dynamic environment, nor capable of meeting our customers' needs. At the same time, there is substantial interest within the Kafka community for a solution which enables Kafka on Kubernetes, both in the open source and closed source space. ->We took a different approach to what's out there - we believe for a good reason - please read on to understand more about our [design motivations](features/) and some of the [scenarios](scenarios/) which were driving us to create the Banzai Cloud Kafka operator. +The {{< kafka-operator >}} externalizes access to Apache Kafka using a dynamically (re)configured Envoy proxy. Using Envoy allows us to use **a single** LoadBalancer, so there's no need for a LoadBalancer for each Broker. -Finally, our motivation is to build an open source solution and a community which drives the innovation and features of this operator. We are long term contributors and active community members of both Apache Kafka and Kubernetes, and we hope to recreate a similar community around this operator. +![Kafka External Access](/sdm/koperator/img/kafka-external.png) + +### Communication via SSL + +The operator fully automates Kafka's SSL support. +The operator can provision the required secrets and certificates for you, or you can provide your own. + +![SSL support for Kafka](/sdm/koperator/img/kafka-ssl.png) + +### Monitoring via Prometheus + +The {{< kafka-operator >}} exposes Cruise-Control and Kafka JMX metrics to Prometheus. + +### Reacting to Alerts + +{{< kafka-operator >}} acts as a **Prometheus Alert Manager**. It receives alerts defined in Prometheus, and creates actions based on Prometheus alert annotations.
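To sketch what this looks like, a Prometheus rule can carry a `command` annotation that names the action the operator should take when the alert fires. The expression, threshold, and annotation values below are illustrative, modeled on the default alert examples shipped with the operator:

```yaml
# Hypothetical alert: when a broker reports under-replicated partitions for
# too long, ask the operator to add a broker. The operator receives the firing
# alert and performs the action named in the `command` annotation.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: kafka-autoscale-rules
spec:
  groups:
    - name: kafka-alerts
      rules:
        - alert: BrokerUnderReplicated
          expr: kafka_server_replicamanager_underreplicatedpartitions > 0
          for: 5m
          labels:
            severity: alert
          annotations:
            command: "upScale"   # maps to the "add a new Broker" default action
```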
+ +Currently, there are three default actions (which can be extended): + +- upscale cluster (add a new Broker) +- downscale cluster (remove a Broker) +- add additional disk to a Broker + +### Graceful Rolling Upgrade + +The operator supports graceful rolling upgrades: it checks whether the cluster is healthy before proceeding. +It checks whether the cluster has offline partitions, and whether all the replicas are in sync. +It proceeds only when the failure threshold is smaller than the configured one. + +The operator also lets you create special alerts in Prometheus which affect the rolling upgrade state by +increasing the error rate. + +### Dynamic Configuration Support + +Kafka operates with three types of configurations: + +- Read-only +- ClusterWide +- PerBroker + +Read-only configs require a broker restart to update; all the others may be updated dynamically. +The operator CRD distinguishes these fields, and proceeds with the right action: either a rolling upgrade or +a dynamic reconfiguration. + +### Seamless Istio mesh support + +- The operator allows the use of ClusterIP services instead of headless services, which work better with service meshes. +- To avoid initializing Kafka too early, which might lead to an unready sidecar container, the operator uses a small script to mitigate this behaviour. Any Kafka image can be used, with the only requirement being an available **curl** command. +- To access a Kafka cluster that runs inside the mesh, the operator supports creating Istio ingress gateways. + + +--- +Apache Kafka, Kafka, and the Kafka logo are either registered trademarks or trademarks of The Apache Software Foundation in the United States and other countries. diff --git a/docs/benchmarks/_index.md b/docs/benchmarks/_index.md index 34a2759..1646966 100644 --- a/docs/benchmarks/_index.md +++ b/docs/benchmarks/_index.md @@ -3,69 +3,13 @@ title: Benchmarking Kafka weight: 900 --- -How to setup the environment for the Kafka Performance Test using Amazon PKE, GKE, EKS.
- -We are going to use [Banzai Cloud CLI](https://github.com/banzaicloud/banzai-cli) to create the cluster: - -```bash -brew install banzaicloud/tap/banzai-cli -banzai login -``` - -## PKE - -1. Create your own VPC and subnets on Amazon Management Console. - - 1. Use the provided wizard and select VPC with Single Public Subnet. (Please remember the Availability Zone you chose.) - 1. Save the used route table id on the generated subnet - 1. Create two additional subnet in the VPC (choose different Availability Zones) - - - Modify your newly created subnet Auto Assign IP setting - - Enable auto-assign public IPV4 address - - 1. Assign the saved route table id to the two additional subnets - - - On Route Table page click Actions and Edit subnet associations - -1. Create the cluster itself. - - ```bash - banzai cluster create - ``` - - The required cluster template file can be found [here](https://raw.githubusercontent.com/banzaicloud/kafka-operator/master/docs/benchmarks/infrastructure/cluster_pke.json) - - > Please don't forget to fill out the template with the created ids. - - This will create a cluster with 3 nodes for ZK 3 for Kafka 1 Master node and 2 node for clients. -1. Create a StorageClass which enables high performance disk requests. - - ```bash - kubectl create -f - < Please don't forget to fill out the template with the created ids. + Once your cluster is up and running you can set up the Kubernetes infrastructure. 1. Create a StorageClass which enables high performance disk requests. @@ -84,17 +28,11 @@ banzai login ## EKS -1. Create the cluster itself: - - ```bash - banzai cluster create - ``` +{{< include-headless "warning-ebs-csi-driver.md" "sdm/koperator" >}} - The required cluster template file can be found [here](https://raw.githubusercontent.com/banzaicloud/kafka-operator/master/docs/benchmarks/infrastructure/cluster_eks.json) +1. Create a test cluster with 3 nodes for ZooKeeper, 3 for Kafka, 1 master node, and 2 nodes for clients.
- > Please don't forget to fill out the template with the created ids. - - Once your cluster is up and running we can move on to set up the Kubernetes infrastructure. + Once your cluster is up and running you can set up the Kubernetes infrastructure. 1. Create a StorageClass which enables high performance disk requests. @@ -115,31 +53,58 @@ banzai login ## Install other required components -1. Create a Zookeeper cluster with 3 replicas using Pravega's Zookeeper Operator. +1. Create a ZooKeeper cluster with 3 replicas using Pravega's Zookeeper Operator. ```bash helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com/ - helm install --name zookeeper-operator --namespace zookeeper banzaicloud-stable/zookeeper-operator + helm install zookeeper-operator --namespace=zookeeper --create-namespace pravega/zookeeper-operator kubectl create -f - <}} CustomResourceDefinition resources (adjust the version number to the {{< kafka-operator >}} release you want to install) and the corresponding version of {{< kafka-operator >}}, the Operator for managing Apache Kafka on Kubernetes. + + ```bash + kubectl create --validate=false -f https://github.com/banzaicloud/koperator/releases/download/v{{< param "versionnumbers-sdm.koperatorCurrentversion" >}}/kafka-operator.crds.yaml + ``` ```bash - helm install --name=kafka-operator banzaicloud-stable/kafka-operator + helm install kafka-operator --namespace=kafka --create-namespace banzaicloud-stable/kafka-operator ``` -1. Create a 3 broker Kafka Cluster using the [provided](https://raw.githubusercontent.com/banzaicloud/kafka-operator/master/docs/benchmarks/infrastructure/kafka.yaml) yaml. +1. Create a 3-broker Kafka Cluster using the [this YAML file](https://raw.githubusercontent.com/banzaicloud/koperator/master/docs/benchmarks/infrastructure/kafka.yaml). + + This will install 3 brokers with fast SSD. 
If you would like the brokers in different zones, modify the following configurations to match your environment and use them in the broker configurations: + + ```yaml + apiVersion: kafka.banzaicloud.io/v1beta1 + kind: KafkaCluster + ... + spec: + ... + brokerConfigGroups: + default: + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: + operator: In + values: + - + - + - + ... + ``` - This will install 3 brokers partitioned to three different zone with fast ssd. 1. Create a client container inside the cluster ```bash @@ -147,8 +112,6 @@ banzai login apiVersion: v1 kind: Pod metadata: - annotations: - linkerd.io/inject: enabled name: kafka-test spec: containers: @@ -163,36 +126,34 @@ banzai login 1. Exec into this client and create the `perftest`, `perftest2`, and `perftest3` topics. ```bash - kubectl exec -it kafka-test bash + kubectl exec -it kafka-test -n kafka bash ./opt/kafka/bin/kafka-topics.sh --zookeeper zookeeper-client.zookeeper:2181 --topic perftest --create --replication-factor 3 --partitions 3 ./opt/kafka/bin/kafka-topics.sh --zookeeper zookeeper-client.zookeeper:2181 --topic perftest2 --create --replication-factor 3 --partitions 3 ./opt/kafka/bin/kafka-topics.sh --zookeeper zookeeper-client.zookeeper:2181 --topic perftest3 --create --replication-factor 3 --partitions 3 ``` -Monitoring environment automatically installed, find your cluster and Grafanas UI/credentials on our [UI](https://banzaicloud.com/pipeline/). To monitor the infrastructure we used the official Node Exporter dashboard available with id `1860`. +The monitoring environment is installed automatically. To monitor the infrastructure we used the official Node Exporter dashboard available with id `1860`. ## Run the tests -1. Run perf test against the cluster, by building the provided Docker [image](https://raw.githubusercontent.com/banzaicloud/kafka-operator/master/docs/benchmarks/loadgens/Dockerfile) +1. 
Run a performance test against the cluster by building [this Docker image](https://raw.githubusercontent.com/banzaicloud/koperator/master/docs/benchmarks/loadgens/Dockerfile). -```bash -docker build -t yourname/perfload:0.1.0 /loadgens -docker push yourname/perfload:0.1.0 -``` + ```bash + docker build -t /perfload:0.1.0 /loadgens + docker push /perfload:0.1.0 + ``` -1. Submit the perf test application: +1. Submit the performance testing application: ```yaml kubectl create -f - <}} versions, and the versions of other components they are compatible with. ## Compatibility matrix |Operator Version|Apache Kafka Version|JMX Exporter Version|Cruise Control Version|Istio Operator Version|Example cluster CR|Maintained| |-------|------|----------------|-------|----|---|-| -|v0.14.0|2.5.0+|0.14.0|2.5.23|1.5|[link](https://github.com/banzaicloud/kafka-operator/blob/v0.14.0/config/samples/simplekafkacluster.yaml)|+| -|v0.15.0|2.5.0+|0.14.0|2.5.28|1.8|[link](https://github.com/banzaicloud/kafka-operator/blob/v0.15.1/config/samples/simplekafkacluster.yaml)|+| -|v0.16.0|2.5.0+|0.15.0|2.5.37|1.9|[link](https://github.com/banzaicloud/kafka-operator/blob/v0.16.1/config/samples/simplekafkacluster.yaml)|+| +|v0.18.3|2.6.2+|0.15.0|2.5.37|1.10|[link](https://github.com/banzaicloud/koperator/blob/v0.18.3/config/samples/simplekafkacluster.yaml)|-| +|v0.19.0|2.6.2+|0.15.0|2.5.68|1.10|[link](https://github.com/banzaicloud/koperator/blob/v0.19.0/config/samples/simplekafkacluster.yaml)|-| +|v0.20.0|2.6.2+|0.15.0|2.5.68|1.10|[link](https://github.com/banzaicloud/koperator/blob/v0.20.0/config/samples/simplekafkacluster.yaml)|-| +|v0.20.2|2.6.2+|0.16.1|2.5.80|1.10|[link](https://github.com/banzaicloud/koperator/blob/v0.20.2/config/samples/simplekafkacluster.yaml)|+| +|v0.21.0|2.6.2+|0.16.1|2.5.86|2.11|[link](https://github.com/banzaicloud/koperator/blob/v0.21.0/config/samples/simplekafkacluster.yaml)|+|
+|v0.21.1|2.6.2+|0.16.1|2.5.86|2.11|[link](https://github.com/banzaicloud/koperator/blob/v0.21.1/config/samples/simplekafkacluster.yaml)|+| +|v0.21.2|2.6.2+|0.16.1|2.5.86|2.11|[link](https://github.com/banzaicloud/koperator/blob/v0.21.2/config/samples/simplekafkacluster.yaml)|+| +|v0.22.0|2.6.2+|0.16.1|2.5.101|2.15.3|[link](https://github.com/banzaicloud/koperator/blob/v0.22.0/config/samples/simplekafkacluster.yaml)|+| -## Available Kafka operator images +## Available {{< kafka-operator >}} images |Image|Go version| |-|-| -|ghcr.io/banzaicloud/kafka-operator:v0.14.0|1.14| -|ghcr.io/banzaicloud/kafka-operator:v0.15.0|1.15| -|ghcr.io/banzaicloud/kafka-operator:v0.15.1|1.15| -|ghcr.io/banzaicloud/kafka-operator:v0.16.0|1.15| -|ghcr.io/banzaicloud/kafka-operator:v0.16.1|1.15| +|ghcr.io/banzaicloud/kafka-operator:v0.17.0|1.16| +|ghcr.io/banzaicloud/kafka-operator:v0.18.3|1.16| +|ghcr.io/banzaicloud/kafka-operator:v0.19.0 |1.16| +|ghcr.io/banzaicloud/kafka-operator:v0.20.2 |1.17| +|ghcr.io/banzaicloud/kafka-operator:v0.21.0 |1.17| +|ghcr.io/banzaicloud/kafka-operator:v0.21.1 |1.17| +|ghcr.io/banzaicloud/kafka-operator:v0.21.2 |1.17| +|ghcr.io/banzaicloud/kafka-operator:v0.22.0 |1.19| ## Available Apache Kafka images |Image|Java version| |-|-| -|banzaicloud/kafka:2.13-2.5.0-bzc.1|11| -|banzaicloud/kafka:2.13-2.5.1-bzc.1|11| -|ghcr.io/banzaicloud/kafka:2.13-2.6.0-bzc.1|11| -|ghcr.io/banzaicloud/kafka:2.13-2.6.1-bzc.1|11| +|ghcr.io/banzaicloud/kafka:2.13-2.6.2-bzc.1|11| |ghcr.io/banzaicloud/kafka:2.13-2.7.0-bzc.1|11| |ghcr.io/banzaicloud/kafka:2.13-2.7.0-bzc.2|11| +|ghcr.io/banzaicloud/kafka:2.13-2.8.0|11| +|ghcr.io/banzaicloud/kafka:2.13-2.8.1|11| +|ghcr.io/banzaicloud/kafka:2.13-3.1.0|17| ## Available JMX Exporter images @@ -41,6 +49,7 @@ This page shows you the list of supported Kafka operator versions, and the versi |-|-| |ghcr.io/banzaicloud/jmx-javaagent:0.14.0|11| |ghcr.io/banzaicloud/jmx-javaagent:0.15.0|11| +|ghcr.io/banzaicloud/jmx-javaagent:0.16.1|11| ## 
Available Cruise Control images @@ -51,3 +60,8 @@ This page shows you the list of supported Kafka operator versi |-|-| |ghcr.io/banzaicloud/cruise-control:2.5.34|11| |ghcr.io/banzaicloud/cruise-control:2.5.37|11| |ghcr.io/banzaicloud/cruise-control:2.5.43|11| +|ghcr.io/banzaicloud/cruise-control:2.5.53|11| +|ghcr.io/banzaicloud/cruise-control:2.5.68|11| +|ghcr.io/banzaicloud/cruise-control:2.5.80|11| +|ghcr.io/banzaicloud/cruise-control:2.5.86|11| +|ghcr.io/banzaicloud/cruise-control:2.5.101|11| diff --git a/docs/configurations/_index.md b/docs/configurations/_index.md new file mode 100644 index 0000000..9ac7690 --- /dev/null +++ b/docs/configurations/_index.md @@ -0,0 +1,13 @@ +--- +title: Configure Kafka cluster +shorttitle: Configure +weight: 250 +--- + +Koperator provides convenient ways of configuring Kafka resources through [Kubernetes custom resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/). + +List of our custom resources: + +- [KafkaCluster]({{< relref "kafkacluster/_index.md" >}}) +- [KafkaTopic]({{< relref "../topics.md" >}}) +- KafkaUser diff --git a/docs/configurations/kafkacluster/_index.md b/docs/configurations/kafkacluster/_index.md new file mode 100644 index 0000000..e173c41 --- /dev/null +++ b/docs/configurations/kafkacluster/_index.md @@ -0,0 +1,17 @@ +--- +title: Kafka cluster configuration +shorttitle: KafkaCluster +weight: 7000 +--- + +## Overview + +The **KafkaCluster** custom resource is the main configuration resource for Kafka clusters. +It defines the Apache Kafka cluster properties, such as the Kafka broker and listener configurations. +By deploying the KafkaCluster custom resource, Koperator sets up your Kafka cluster. +You can change your Kafka cluster properties by updating the KafkaCluster custom resource.
+The **KafkaCluster** custom resource always reflects the state of your Kafka cluster: when something has changed in your KafkaCluster custom resource, Koperator reconciles the changes to your Kafka cluster. + +## Schema reference {#schema-ref} + +The schema reference for the **KafkaCluster** custom resource is available [here](https://docs.calisti.app/sdm/koperator/reference/crd/kafkaclusters.kafka.banzaicloud.io/). diff --git a/docs/configurations/kafkacluster/examples/_index.md b/docs/configurations/kafkacluster/examples/_index.md new file mode 100644 index 0000000..94a9c8d --- /dev/null +++ b/docs/configurations/kafkacluster/examples/_index.md @@ -0,0 +1,80 @@ +--- +title: Kafka cluster +shorttitle: KafkaCluster examples +weight: 7000 +--- + +The following KafkaCluster custom resource examples show you some basic use cases. +You can use these examples as a base for your own Kafka cluster. + +## KafkaCluster CR with detailed explanation + +This is our most descriptive KafkaCluster CR. It contains detailed explanations of the available settings. + +- [Detailed CR with descriptions](https://github.com/banzaicloud/koperator/blob/master/config/samples/banzaicloud_v1beta1_kafkacluster.yaml) + +## Kafka cluster with monitoring + +This is a very simple KafkaCluster CR with Prometheus monitoring enabled. + +- [Simple KafkaCluster with monitoring](https://github.com/banzaicloud/koperator/blob/master/config/samples/simplekafkacluster.yaml) + +## Kafka cluster with ACL, SSL, and rack awareness + +You can read more details about rack awareness [here]({{< relref "../../../rackawareness/index.md" >}}).
+ +- [Use SSL and rack awareness](https://github.com/banzaicloud/koperator/blob/master/config/samples/kafkacluster_with_ssl_groups.yaml) + +## Kafka cluster with broker configuration + +- [Use broker configuration groups](https://github.com/banzaicloud/koperator/blob/master/config/samples/kafkacluster_without_ssl_groups.yaml) +- [Use independent broker configurations](https://github.com/banzaicloud/koperator/blob/master/config/samples/kafkacluster_without_ssl.yaml) + +## Kafka cluster with custom SSL certificates for external listeners + +You can specify custom SSL certificates for listeners. +For details about SSL configuration, see {{% xref "../../../ssl.md" %}}. + +- [Use custom SSL certificate for an external listener](https://github.com/banzaicloud/koperator/blob/master/config/samples/kafkacluster_with_external_ssl_customcert.yaml) +- [Use custom SSL certificate for controller and inter-broker communication](https://github.com/banzaicloud/koperator/blob/master/config/samples/kafkacluster_with_ssl_groups_customcert.yaml). In this case you also need to provide the client SSL certificate for Koperator. +- [Hybrid solution](https://github.com/banzaicloud/koperator/blob/master/config/samples/kafkacluster_with_ssl_hybrid_customcert.yaml): some listeners have custom SSL certificates and some use certificates Koperator has generated automatically using cert-manager. + +## Kafka cluster with SASL + +You can use SASL authentication on the listeners. +For details, see {{% xref "../../../external-listener/index.md" %}}. + +- [Use SASL authentication on the listeners](https://github.com/banzaicloud/koperator/blob/master/config/samples/simplekafkacluster_with_sasl.yaml) + +## Kafka cluster with load balancers and brokers in the same availability zone + +You can create a broker-ingress mapping to eliminate traffic across availability zones between load balancers and brokers by configuring load balancers for brokers in the same availability zone.
+ +- [Load balancers and brokers in the same availability zone](https://github.com/banzaicloud/koperator/blob/master/config/samples/simplekafkacluster-with-brokerbindings.yaml) + +## Kafka cluster with Istio + +You can use Istio as the ingress controller for your external listeners. It requires using our [Istio operator](https://github.com/banzaicloud/istio-operator) in the Kubernetes cluster. + +- [Kafka cluster with Istio as ingress controller](https://github.com/banzaicloud/koperator/blob/master/config/samples/kafkacluster-with-istio.yaml) + +## Kafka cluster with custom advertised address for external listeners and brokers + +You can set a custom advertised IP address for brokers. +This is useful when you're advertising the brokers on an IP address different from the Kubernetes node IP address. +You can also set a custom advertised address for external listeners. +For details, see {{% xref "../../../external-listener/index.md" %}}. + +- [Custom advertised address for external listeners](https://github.com/banzaicloud/koperator/blob/master/config/samples/simplekafkacluster-with-nodeport-external.yaml) + +## Kafka cluster with Kubernetes scheduler affinity settings + +You can set node [affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) for your brokers. + +- [Custom affinity settings](https://github.com/banzaicloud/koperator/blob/master/config/samples/simplekafkacluster_affinity.yaml) + +## Kafka cluster with custom storage class + +You can configure your brokers to use custom [storage classes](https://kubernetes.io/docs/concepts/storage/storage-classes/).
+ +- [Custom storage class](https://github.com/banzaicloud/koperator/blob/master/config/samples/simplekafkacluster_ebs_csi.yaml) diff --git a/docs/create-topic.sample b/docs/create-topic.sample index 1e9440e..00b8f84 100644 --- a/docs/create-topic.sample +++ b/docs/create-topic.sample @@ -9,4 +9,7 @@ spec: name: my-topic partitions: 1 replicationFactor: 1 + config: + "retention.ms": "604800000" + "cleanup.policy": "delete" EOF \ No newline at end of file diff --git a/docs/create-zookeeper.sample b/docs/create-zookeeper.sample index f74177a..7ed414b 100644 --- a/docs/create-zookeeper.sample +++ b/docs/create-zookeeper.sample @@ -1,9 +1,11 @@ -kubectl create --namespace zookeeper -f - <}} from your cluster, note that because of dependencies between the various components, they must be deleted in a specific order. -{{< warning >}}It’s important to delete the kafka-operator deployment as the last step. +{{< warning >}}It’s important to delete the {{< kafka-operator >}} deployment as the last step. {{< /warning >}} 1. Delete the *KafkaCluster* custom resources that represent the Kafka cluster and Cruise Control. -1. Wait until kafka-operator deletes all resources. Note that KafkaCluster, KafkaTopic and KafkaUser custom resources are protected with kubernetes finalizers, so those won’t be actually deleted from Kubernetes until the kafka-operator removes those finalizers. After the kafka-operator has finished cleaning up everything, it removes the finalizers. In case you delete the kafka-operator deployment before it cleans up everything you need to remove the finalizers manually. -1. Delete the kafka-operator deployment. +1. Wait until {{< kafka-operator >}} deletes all resources. Note that KafkaCluster, KafkaTopic and KafkaUser custom resources are protected with Kubernetes finalizers, so those won’t actually be deleted from Kubernetes until the {{< kafka-operator >}} removes those finalizers.
After the {{< kafka-operator >}} has finished cleaning up everything, it removes the finalizers. In case you delete the {{< kafka-operator >}} deployment before it cleans up everything, you need to remove the finalizers manually. +1. Delete the {{< kafka-operator >}} deployment. diff --git a/docs/developer.md b/docs/developer.md index 387f624..1ca35a5 100644 --- a/docs/developer.md +++ b/docs/developer.md @@ -11,13 +11,13 @@ If you find this project useful here's how you can help: - Send a pull request with your new features and bug fixes - Help new users with issues they may encounter -- Support the development of this project and [star this repo](https://github.com/banzaicloud/kafka-operator/)! +- Support the development of this project and [star this repo](https://github.com/banzaicloud/koperator/)! -When you are opening a PR to Kafka operator the first time we will require you to sign a standard CLA. +When you open your first PR to {{< kafka-operator >}}, we will require you to sign a standard CLA. -## How to run Kafka-operator in your cluster with your changes +## How to run {{< kafka-operator >}} in your cluster with your changes -The Kafka operator is built on the [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder) project. +{{< kafka-operator >}} is built on the [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder) project.
To build the operator and run tests: @@ -41,7 +41,7 @@ Alternatively, run the operator on your machine: Create a CR and let the operator set up Kafka in your cluster (you can change the `spec` of `Kafka` for your needs in the yaml file): -> Remember you need Zookeeper server to run Kafka +> Remember you need an Apache ZooKeeper server to run Kafka `kubectl create -n kafka -f config/samples/simplekafkacluster.yaml` diff --git a/docs/enable-ssl.sample b/docs/enable-ssl.sample index 7180f75..cb2eebd 100644 --- a/docs/enable-ssl.sample +++ b/docs/enable-ssl.sample @@ -3,7 +3,7 @@ listenersConfig: - type: "ssl" name: "external" externalStartingPort: 19090 - containerPort: 29092 + containerPort: 9094 internalListeners: - type: "ssl" name: "internal" diff --git a/docs/external-listener/index.md b/docs/external-listener/index.md index 8469237..a190e5b 100644 --- a/docs/external-listener/index.md +++ b/docs/external-listener/index.md @@ -1,10 +1,10 @@ --- title: Expose the Kafka cluster to external applications -shorttitle: External listeners +linktitle: External listeners weight: 700 --- -There are two methods to expose your Kafka cluster so that external client applications that run outside the Kubernetes cluster can access it: +There are two methods to expose your Apache Kafka cluster so that external client applications that run outside the Kubernetes cluster can access it: - using [LoadBalancer](#loadbalancer) type services - using [NodePort](#nodeport) type services @@ -25,7 +25,7 @@ This `NodePort` method is a good fit when: ## External listeners -You can expose the Kafka cluster outside the Kubernetes cluster by declaring one or more _externalListeners_ in the `KafkaCluster` custom resource. The following *externalListeners* configuration snippet creates two external access points through which the Kafka cluster's brokers can be reached.
These *external listeners* are registered in the `advertized.listeners` Kafka broker configuration as `EXTERNAL1://...,EXTERNAL2://...`. +You can expose the Kafka cluster outside the Kubernetes cluster by declaring one or more _externalListeners_ in the `KafkaCluster` custom resource. The following *externalListeners* configuration snippet creates two external access points through which the Kafka cluster's brokers can be reached. These *external listeners* are registered in the `advertised.listeners` Kafka broker configuration as `EXTERNAL1://...,EXTERNAL2://...`. By default, external listeners use the [LoadBalancer](#loadbalancer) access method. @@ -47,7 +47,7 @@ listenersConfig: To configure an external listener that uses the LoadBalancer access method, complete the following steps. 1. Edit the `KafkaCluster` custom resource. -1. Add an `externalListeners` section under `listenersConfig`. The following example creates a Load Balancer for the external listener, `external1`. Each broker in the cluster receives a dedicated port number on the Load Balancer which is computed as *broker port number = externalStartingPort + broker id*. This will be registered in each broker's config as `advertized.listeners=EXTERNAL1://:`. +1. Add an `externalListeners` section under `listenersConfig`. The following example creates a Load Balancer for the external listener, `external1`. Each broker in the cluster receives a dedicated port number on the Load Balancer which is computed as *broker port number = externalStartingPort + broker id*. This will be registered in each broker's config as `advertised.listeners=EXTERNAL1://:`. ```yaml listenersConfig: @@ -68,11 +68,44 @@ To configure an external listener that uses the LoadBalancer access method, comp The ingress controllers that are currently supported are: - - envoy: uses Envoy Proxy as an ingress controller. - - istioingress: uses Istio Gateway as an ingress controller. 
This is the default controller for Kafka clusters provisioned with [Supertubes](/docs/supertubes/overview/), since those clusters run inside an Istio mesh. + - `envoy`: uses Envoy proxy as an ingress. + - `istioingress`: uses Istio proxy gateway as an ingress. Istio ingress is the default controller for Kafka clusters provisioned with [SDM](/sdm/overview/), since those clusters run inside an Istio mesh. + + - To use Envoy, set the `ingressController` field in the `KafkaCluster` custom resource to `envoy`. For an example, see [this sample](https://github.com/banzaicloud/koperator/blob/672b19d49e5c0a22f9658181003beddb56f17d33/config/samples/banzaicloud_v1beta1_kafkacluster.yaml#L12). + + For OpenShift: + + ```yaml + spec: + # ... + envoyConfig: + podSecurityContext: + runAsGroup: 19090 + runAsUser: 19090 + # ... + ingressController: "envoy" + # ... + ``` + + For Kubernetes: + + ```yaml + spec: + ingressController: "envoy" + ``` + + - To use the Istio ingress controller, set the `ingressController` field to `istioingress`. [Istio operator](https://github.com/banzaicloud/istio-operator) v2 is supported from Koperator version 0.21.0+. Istio operator v2 supports multiple Istio control planes on the same cluster, so the control plane corresponding to the gateway must be specified. The `istioControlPlane` field in the `KafkaCluster` custom resource is a reference to that IstioControlPlane resource. For an example, see [this sample](https://github.com/banzaicloud/koperator/blob/672b19d49e5c0a22f9658181003beddb56f17d33/config/samples/kafkacluster-with-istio.yaml#L10). + + ```yaml + spec: + ingressController: "istioingress" + istioControlPlane: + name: + namespace: + ``` 1. Configure additional parameters for the ingress controller as needed for your environment, for example, number of replicas, resource requirements and resource limits. You can configure such parameters using the *envoyConfig* and *istioIngressConfig* fields, respectively. -1. 
(Optional) For external access through a static URL instead of the load balancer's public IP, specify the URL in the `hostnameOverride` field of the external listener that resolves to the public IP of the load balancer. The broker address will be advertized as, `advertized.listeners=EXTERNAL1://kafka-1.dev.my.domain:`. +1. (Optional) For external access through a static URL instead of the load balancer's public IP, specify the URL in the `hostnameOverride` field of the external listener that resolves to the public IP of the load balancer. The broker address will be advertised as, `advertised.listeners=EXTERNAL1://kafka-1.dev.my.domain:`. ```yaml listenersConfig: @@ -119,19 +152,45 @@ To configure an external listener that uses the NodePort access method, complete hostnameOverride: .dev.example.com ``` - The `hostnameOverride` behaves differently here than with LoadBalancer access method. In this case, each broker will be advertized as `advertized.listeners=EXTERNAL1://-..:`. If a three-broker Kafka cluster named *kafka* is running in the *kafka* namespace, the `advertized.listeners` for the brokers will look like this: + The `hostnameOverride` behaves differently here than with LoadBalancer access method. In this case, each broker will be advertised as `advertised.listeners=EXTERNAL1://-..:`. If a three-broker Kafka cluster named *kafka* is running in the *kafka* namespace, the `advertised.listeners` for the brokers will look like this: - broker 0: - - advertized.listeners=EXTERNAL1://kafka-0.external1.kafka.dev.my.domain:32000 + - advertised.listeners=EXTERNAL1://kafka-0.external1.kafka.dev.my.domain:32000 - broker 1: - - advertized.listeners=EXTERNAL1://kafka-1.external1.kafka.dev.my.domain:32001 + - advertised.listeners=EXTERNAL1://kafka-1.external1.kafka.dev.my.domain:32001 - broker 2: - - advertized.listeners=EXTERNAL1://kafka-2.external1.kafka.dev.my.domain:32002 + - advertised.listeners=EXTERNAL1://kafka-2.external1.kafka.dev.my.domain:32002 1. 
Apply the `KafkaCluster` custom resource to the cluster. ### NodePort external IP +When `nodePortNodeAddressType` is specified, the IP address of the node where the broker pod is scheduled is used in the `advertised.listeners` broker configuration. +Its value determines which IP address or domain name of the Kubernetes node is used; the possible values are Hostname, ExternalIP, InternalIP, InternalDNS, and ExternalDNS. +The `hostnameOverride` and `nodePortExternalIP` fields must not be specified in this case. + +```yaml +brokers: +- id: 0 + brokerConfig: + nodePortNodeAddressType: ExternalIP +- id: 1 + brokerConfig: + nodePortNodeAddressType: ExternalIP +- id: 2 + brokerConfig: + nodePortNodeAddressType: ExternalIP +``` + +If the *hostnameOverride* and *nodePortExternalIP* fields are not set, then the broker addresses are advertised as follows: + +- broker 0: + - advertised.listeners=EXTERNAL1://16.171.47.211:9094 +- broker 1: + - advertised.listeners=EXTERNAL1://16.16.66.201:9094 +- broker 2: + - advertised.listeners=EXTERNAL1://16.170.214.51:9094 + Kafka brokers can be made accessible on external IPs that are not node IP, but can route into the Kubernetes cluster.
These external IPs can be set for each broker in the KafkaCluster custom resource as in the following example: ```yaml @@ -150,25 +209,25 @@ brokers: external1: 13.49.70.146 # if "hostnameOverride" is not set for "external1" external listener, then broker is advertised on this IP ``` -If *hostnameOverride* field is not set, then broker address is advertized as follows: +If *hostnameOverride* field is not set, then broker address is advertised as follows: - broker 0: - - advertized.listeners=EXTERNAL1://13.53.214.23:9094 + - advertised.listeners=EXTERNAL1://13.53.214.23:9094 - broker 1: - - advertized.listeners=EXTERNAL1://13.48.71.170:9094 + - advertised.listeners=EXTERNAL1://13.48.71.170:9094 - broker 2: - - advertized.listeners=EXTERNAL1://13.49.70.146:9094 + - advertised.listeners=EXTERNAL1://13.49.70.146:9094 If both *hostnameOverride* and *nodePortExternalIP* fields are set: - broker 0: - - advertized.listeners=EXTERNAL1://kafka-0.external1.kafka.dev.my.domain:9094 + - advertised.listeners=EXTERNAL1://kafka-0.external1.kafka.dev.my.domain:9094 - broker 1: - - advertized.listeners=EXTERNAL1://kafka-1.external1.kafka.dev.my.domain:9094 + - advertised.listeners=EXTERNAL1://kafka-1.external1.kafka.dev.my.domain:9094 - broker 2: - - advertized.listeners=EXTERNAL1://kafka-2.external1.kafka.dev.my.domain:9094 + - advertised.listeners=EXTERNAL1://kafka-2.external1.kafka.dev.my.domain:9094 -> Note: If *nodePortExternalIP* is set, then the *containerPort* from the external listener config is used as a broker port, and is the same for each broker. +> Note: If *nodePortExternalIP* or *nodePortNodeAddressType* is set, then the *containerPort* from the external listener config is used as a broker port, and is the same for each broker. 
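The per-broker addresses shown above follow a simple, deterministic pattern: with `hostnameOverride` set, each broker is advertised as `<cluster>-<id>.<listener>.<namespace>.<domain>` with *port = externalStartingPort + broker id*. The following shell sketch reproduces the addresses from the examples above; the cluster name, namespace, listener name, starting port, and domain are all taken from those examples and should be adjusted for your environment:

```shell
# Illustrative sketch only: rebuild the advertised.listeners values for a
# three-broker cluster named "kafka" in the "kafka" namespace, exposed via
# the "external1" listener with externalStartingPort 32000 and the
# hostnameOverride domain "dev.my.domain".
CLUSTER=kafka
NAMESPACE=kafka
LISTENER=external1
STARTING_PORT=32000
DOMAIN=dev.my.domain

for ID in 0 1 2; do
  # Each broker gets a dedicated port: externalStartingPort + broker id.
  PORT=$((STARTING_PORT + ID))
  echo "advertised.listeners=EXTERNAL1://${CLUSTER}-${ID}.${LISTENER}.${NAMESPACE}.${DOMAIN}:${PORT}"
done
```

Running this prints the same three `advertised.listeners` lines as the listing above (for example, broker 1 resolves to `kafka-1.external1.kafka.dev.my.domain:32001`).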
## SASL authentication on external listeners {#sasl} @@ -191,14 +250,19 @@ To enable sasl_plaintext authentication on the external listener, modify the **e type: sasl_plaintext ``` -To connect to this listener using the Kafka console producer, complete the following steps: +To connect to this listener using the Kafka 3.1.0 (and above) console producer, complete the following steps: -1. Set the producer properties like this: +1. Set the producer properties like this. Replace the parameters between brackets as needed for your environment: ```ini - sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required unsecuredLoginStringClaim_sub="producer"; sasl.mechanism=OAUTHBEARER security.protocol=SASL_SSL + sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler + sasl.oauthbearer.token.endpoint.url= + sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ + clientId="" \ + clientSecret="" \ + scope="kafka:write"; ssl.truststore.location=/ssl/trustore.jks ssl.truststore.password=truststorepass ssl.endpoint.identification.algorithm= @@ -210,17 +274,22 @@ To connect to this listener using the Kafka console producer, complete the follo kafka-console-producer.sh --bootstrap-server :19090 --topic --producer.config producer.properties ``` -To consume messages from this listener using the Kafka console consumer, complete the following steps: +To consume messages from this listener using the Kafka 3.1.0 (and above) console consumer, complete the following steps: -1. Set the producer properties like this: +1. Set the consumer properties like this. 
Replace the parameters between brackets as needed for your environment: ```ini group.id=consumer-1 group.instance.id=consumer-1-instance-1 client.id=consumer-1-instance-1 - sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required unsecuredLoginStringClaim_sub="consumer"; sasl.mechanism=OAUTHBEARER - security.protocol=SASL_SSL + security.protocol=SASL_SSL + sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler + sasl.oauthbearer.token.endpoint.url= + sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ + clientId="" \ + clientSecret="" \ + scope="kafka:read"; ssl.endpoint.identification.algorithm= ssl.truststore.location=/ssl/trustore.jks ssl.truststore.password=trustorepass @@ -230,4 +299,4 @@ To consume messages from this listener using the Kafka console consumer, complet ```bash kafka-console-consumer.sh --bootstrap-server :19090 --topic --consumer.config /opt/kafka/config/consumer.properties --from-beginning - ``` \ No newline at end of file + ``` diff --git a/docs/external-listener/lb-access.png b/docs/external-listener/lb-access.png index fdb2e61..ab759a2 100644 Binary files a/docs/external-listener/lb-access.png and b/docs/external-listener/lb-access.png differ diff --git a/docs/external-listener/nodeport-access.png b/docs/external-listener/nodeport-access.png index dfded3b..48a76b3 100644 Binary files a/docs/external-listener/nodeport-access.png and b/docs/external-listener/nodeport-access.png differ diff --git a/docs/features.md b/docs/features.md deleted file mode 100644 index 6c27fc2..0000000 --- a/docs/features.md +++ /dev/null @@ -1,99 +0,0 @@ ---- -title: Features -weight: 200 ---- - - - -Kafka is a stateful application. The first piece of the puzzle is the Broker, which is a simple server capable of creating/forming a cluster with other Brokers.
Every Broker has his own **unique** configuration which differs slightly from all others - the most relevant of which is the ***unique broker ID***. - -All Kafka on Kubernetes operators use [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) to create a Kafka Cluster. Just to quickly recap from the K8s docs: - ->StatefulSet manages the deployment and scaling of a set of Pods, and provide guarantees about their ordering and uniqueness. Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains sticky identities for each of its Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that is maintained across any rescheduling. - -How does this looks from the perspective of Apache Kafka? - -With StatefulSet we get: - -- unique Broker IDs generated during Pod startup -- networking between brokers with headless services -- unique Persistent Volumes for Brokers - -Using StatefulSet we **lose:** - -- the ability to modify the configuration of unique Brokers -- to remove a specific Broker from a cluster (StatefulSet always removes the most recently created Broker) -- to use multiple, different Persistent Volumes for each Broker - -The Banzai Cloud Kafka Operator uses `simple` Pods, ConfigMaps, and PersistentVolumeClaims, instead of StatefulSet. Using these resources allows us to build an Operator which is better suited to Kafka. - -With the Banzai Cloud Kafka operator we can: - -- modify the configuration of unique Brokers -- remove specific Brokers from clusters -- use multiple Persistent Volumes for each Broker - -## Features - -### Fine Grained Broker Config Support - -We needed to be able to react to events in a fine-grained way for each Broker - and not in the limited way StatefulSet does (which, for example, removes the most recently created Brokers). 
Some of the available solutions try to overcome these deficits by placing scripts inside the container to generate configs at runtime, whereas the Banzai Cloud Kafka operator's configurations are deterministically placed in specific Configmaps. - -### Graceful Kafka Cluster Scaling - -Here at Banzai Cloud, we know how to operate Kafka at scale (we are contributors and have been operating Kafka on Kubernetes for years now). We believe, however, that LinkedIn has even more experience than we do. To scale Kafka clusters both up and down gracefully, we integrated LinkedIn's [Cruise-Control](https://github.com/linkedin/cruise-control) to do the hard work for us. We already have good defaults (i.e. plugins) that react to events, but we also allow our users to write their own. - -### External Access via LoadBalancer - -The Banzai Cloud Kafka operator externalizes access to Kafka using a dynamically (re)configured Envoy proxy. Using Envoy allows us to use **a single** LoadBalancer, so there's no need for a LoadBalancer for each Broker. - -![Kafka External Access](../img/kafka-external.png) - -### Communication via SSL - -The operator fully automates Kafka's SSL support. -The operator can provision the required secrets and certificates for you, or you can provide your own. -The Pipeline platform is capable of automating this process, as well. - -![SSL support for Kafka](../img/kafka-ssl.png) - -### Monitoring via Prometheus - -The Kafka operator exposes Cruise-Control and Kafka JMX metrics to Prometheus. - -### Reacting on Alerts - -The Kafka Operator acts as a **Prometheus Alert Manager**. It receives alerts defined in Prometheus, and creates actions based on Prometheus alert annotations. 
- -Currently, there are three default actions (which can be extended): - -- upscale cluster (add a new Broker) -- downscale cluster (remove a Broker) -- add additional disk to a Broker - -### Graceful Rolling Upgrade - -Operator supports graceful rolling upgrade, It means the operator will check if the cluster is healthy. -It basically checks if the cluster has offline partitions, and all the replicas are in sync. -It proceeds only when the failure threshold is smaller than the configured one. - -The operator also allows to create special alerts on Prometheus, which affects the rolling upgrade state, by -increasing the error rate. - -### Dynamic Configuration Support - -Kafka operates with three type of configs: - -- Read-only -- ClusterWide -- PerBroker - -Read-only config requires broker restart to update all the others may be updated dynamically. -Operator CRD distinguishes these fields, and proceed with the right action. It can be a rolling upgrade, or -a dynamic reconfiguration. - -### Seamless Istio mesh support - -- Operator allows to use ClusterIP services instead of Headless, which still works better in case of Service meshes. -- To avoid too early kafka initialization, which might lead to unready sidecar container. The operator uses a small script to mitigate this behavior. All Kafka image can be used the only one requirement is an available **curl** command. -- To access a Kafka cluster which runs inside the mesh. Operator supports creating Istio ingress gateways. diff --git a/docs/headless/warning-ebs-csi-driver.md b/docs/headless/warning-ebs-csi-driver.md new file mode 100644 index 0000000..08be0dd --- /dev/null +++ b/docs/headless/warning-ebs-csi-driver.md @@ -0,0 +1,3 @@ +--- +--- +{{< warning >}}The ZooKeeper and the Kafka clusters need [persistent volume (PV)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) to store data. 
Therefore, when installing the operator on Amazon EKS with Kubernetes version 1.23 or later, you [must install the EBS CSI driver add-on](https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html) on your cluster. {{< /warning >}} diff --git a/docs/img/kafka-external.png b/docs/img/kafka-external.png index acdc525..189a807 100644 Binary files a/docs/img/kafka-external.png and b/docs/img/kafka-external.png differ diff --git a/docs/img/kafka-operator-arch.png b/docs/img/kafka-operator-arch.png index c982a41..aa6f163 100644 Binary files a/docs/img/kafka-operator-arch.png and b/docs/img/kafka-operator-arch.png differ diff --git a/docs/img/kafka-ssl.png b/docs/img/kafka-ssl.png index 4c5e0e3..19d8741 100644 Binary files a/docs/img/kafka-ssl.png and b/docs/img/kafka-ssl.png differ diff --git a/docs/install-kafka-operator.md b/docs/install-kafka-operator.md index 20de3a1..3862b2f 100644 --- a/docs/install-kafka-operator.md +++ b/docs/install-kafka-operator.md @@ -1,106 +1,283 @@ --- -title: Install the Kafka operator -shorttitle: Install +title: Install the operator +linktitle: Install weight: 10 --- +The operator installs version 3.1.0 of Apache Kafka, and can run on: +- Minikube v0.33.1+, +- Kubernetes 1.21-1.24, and +- Red Hat OpenShift 4.10-4.11. -The operator installs the 2.5.0 version of Apache Kafka, and can run on Minikube v0.33.1+ and Kubernetes 1.15.0+. +> The operator supports Kafka 2.6.2-3.1.x. -> The operator supports Kafka 2.0+ +{{< include-headless "warning-ebs-csi-driver.md" "sdm/koperator" >}} ## Prerequisites -- A Kubernetes cluster (minimum 6 vCPU and 10 GB RAM). You can create one using the [Banzai Cloud Pipeline platform](/products/pipeline/), or any other tool of your choice. +- A Kubernetes cluster (minimum 6 vCPU and 10 GB RAM). Red Hat OpenShift is also supported in {{< kafka-operator >}} version 0.24 and newer, but note that it needs some permissions for certain components to function. 
-> We believe in the `separation of concerns` principle, thus the Kafka operator does not install nor manage Zookeeper or cert-manager. If you would like to have a fully automated and managed experience of Apache Kafka on Kubernetes, try [Banzai Cloud Supertubes](/products/supertubes/). +> We believe in the `separation of concerns` principle, thus the {{< kafka-operator >}} does not install nor manage Apache ZooKeeper or cert-manager. If you would like to have a fully automated and managed experience of Apache Kafka on Kubernetes, try [Cisco Streaming Data Manager](https://calisti.app). -## Install Kafka operator and all requirements using Supertubes +## Install {{< kafka-operator >}} and its requirements independently {#install-kafka-operator-and-its-requirements-independently} -This method uses a command-line tool of the commercial [Banzai Cloud Supertubes](/products/supertubes/) product to install the Kafka operator and its prerequisites. If you'd prefer to install these components manually, see [Install Kafka operator and the requirements independently](#manual-install). +### Install cert-manager with Helm {#install-cert-manager-with-helm} -1. [Register for an evaluation version of Supertubes](/products/try-supertubes/). +{{< kafka-operator >}} uses [cert-manager](https://cert-manager.io) for issuing certificates to clients and brokers and cert-manager is required for TLS-encrypted client connections. It is recommended to deploy and configure a cert-manager instance if there is none in your environment yet. -1. 
Install the [Supertubes](/docs/supertubes/overview/) CLI tool for your environment by running the following command: +> Note: +> - {{< kafka-operator >}} 0.24.0 and newer versions support cert-manager 1.10.0+ (which is a requirement for Red Hat OpenShift) +> - {{< kafka-operator >}} 0.18.1 and newer supports cert-manager 1.5.3-1.9.x +> - {{< kafka-operator >}} 0.8.x-0.17.0 supports cert-manager 1.3.x - {{< include-headless "download-supertubes.md" >}} +1. Install cert-manager's CustomResourceDefinitions. -1. Run the following command: + ```bash + kubectl apply \ + --validate=false \ + -f https://github.com/jetstack/cert-manager/releases/download/v1.11.0/cert-manager.crds.yaml + ``` + + Expected output: ```bash - supertubes install -a + customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created + customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created + customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created + customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created + customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created + customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created ``` -## Install Kafka operator and the requirements independently {#manual-install} +1. If you are installing cert-manager on a Red Hat OpenShift version **4.10** cluster, the default security computing profile must be enabled for cert-manager to work. -### Install cert-manager {#install-cert-manager} + 1. 
Create a new `SecurityContextConstraint` object named `restricted-seccomp` which will be a copy of the OpenShift built-in `restricted` `SecurityContextConstraint`, but will also allow the `runtime/default` / `RuntimeDefault` security computing profile [according to the OpenShift documentation](https://docs.openshift.com/container-platform/4.10/security/seccomp-profiles.html#configuring-default-seccomp-profile_configuring-seccomp-profiles). -The Kafka operator uses [cert-manager](https://cert-manager.io) for issuing certificates to clients and brokers. Deploy and configure cert-manager if you haven't already done so. + ```bash + oc create -f - < Note: -> -> - Kafka operator 0.8.x and newer supports cert-manager 0.15.x -> - Kafka operator 0.7.x supports cert-manager 0.10.x + Expected output: + + ```bash + securitycontextconstraints.security.openshift.io/restricted-seccomp created + ``` + + 1. Elevate the permissions of the namespace containing the cert-manager service account. + + - Using the default `cert-manager` namespace: + + ```bash + oc adm policy add-scc-to-group restricted-seccomp system:serviceaccounts:cert-manager + ``` + + - Using a custom namespace for cert-manager: + + ```bash + oc adm policy add-scc-to-group anyuid system:serviceaccounts:{NAMESPACE_FOR_CERT_MANAGER_SERVICE_ACCOUNT} + ``` + + Expected output: + + ```bash + clusterrole.rbac.authorization.k8s.io/system:openshift:scc:restricted-seccomp added: "system:serviceaccounts:{NAMESPACE_FOR_CERT_MANAGER_SERVICE_ACCOUNT}" + ``` -Install cert-manager and the CustomResourceDefinitions using one of the following methods: +1. Install cert-manager. 
-- Directly: + ```bash + helm install \ + cert-manager \ + --repo https://charts.jetstack.io cert-manager \ + --version v1.11.0 \ + --namespace cert-manager \ + --create-namespace \ + --atomic \ + --debug + ``` + + Expected output: + + ```bash + install.go:194: [debug] Original chart version: "v1.11.0" + install.go:211: [debug] CHART PATH: /Users//.cache/helm/repository/cert-manager-v1.11.0.tgz + + # ... + NAME: cert-manager + LAST DEPLOYED: Thu Mar 23 08:40:07 2023 + NAMESPACE: cert-manager + STATUS: deployed + REVISION: 1 + TEST SUITE: None + USER-SUPPLIED VALUES: + {} + + COMPUTED VALUES: + # ... + NOTES: + cert-manager v1.11.0 has been deployed successfully! + + In order to begin issuing certificates, you will need to set up a ClusterIssuer + or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer). + + More information on the different types of issuers and how to configure them + can be found in our documentation: + + https://cert-manager.io/docs/configuration/ + + For information on how to configure cert-manager to automatically provision + Certificates for Ingress resources, take a look at the `ingress-shim` + documentation: + + https://cert-manager.io/docs/usage/ingress/ + ``` + +1. Verify that cert-manager has been deployed and is in running state. ```bash - # Install the CustomResourceDefinitions and cert-manager itself - kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.15.2/cert-manager.yaml + kubectl get pods -n cert-manager ``` -- Using Helm: + Expected output: ```bash + NAME READY STATUS RESTARTS AGE + cert-manager-6b4d84674-4pkh4 1/1 Running 0 117s + cert-manager-cainjector-59f8d9f696-wpqph 1/1 Running 0 117s + cert-manager-webhook-56889bfc96-x8szj 1/1 Running 0 117s + ``` + +### Install zookeeper-operator with Helm {#install-zookeeper-operator-with-helm} + +{{< kafka-operator >}} requires [Zookeeper](https://zookeeper.apache.org) for Kafka operations. 
You must: - # Add the jetstack helm repo - helm repo add jetstack https://charts.jetstack.io - helm repo update +- Deploy zookeeper-operator if your environment doesn't have an instance of it yet. +- Create a Zookeeper cluster if there is none in your environment yet for your Kafka cluster. - # Install cert-manager into the cluster - # Using helm3 - helm install cert-manager --namespace cert-manager --create-namespace --version v0.15.2 jetstack/cert-manager +> Note: You are recommended to create a separate ZooKeeper deployment for each Kafka cluster. If you want to share the same ZooKeeper cluster across multiple Kafka cluster instances, use a unique zk path in the KafkaCluster CR to avoid conflicts (even with previous defunct KafkaCluster instances). - # Install the CustomResourceDefinitions - kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.2/cert-manager.crds.yaml +1. If you are installing zookeeper-operator on a Red Hat OpenShift cluster, elevate the permissions of the namespace containing the Zookeeper service account. -Verify that the cert-manager pods have been created: + - Using the default `zookeeper` namespace: -```bash -kubectl get pods -n cert-manager -``` + ```bash + oc adm policy add-scc-to-group anyuid system:serviceaccounts:zookeeper + ``` + + - Using a custom namespace for Zookeeper: + + ```bash + oc adm policy add-scc-to-group anyuid system:serviceaccounts:{NAMESPACE_FOR_ZOOKEEPER_SERVICE_ACCOUNT} + ``` + + Expected output: + + ```bash + clusterrole.rbac.authorization.k8s.io/system:openshift:scc:anyuid added: "system:serviceaccounts:{NAMESPACE_FOR_ZOOKEEPER_SERVICE_ACCOUNT}" + ``` + +1. Install ZooKeeper using the [Pravega's Zookeeper Operator](https://github.com/pravega/zookeeper-operator). 
+ + ```bash + helm install \ + zookeeper-operator \ + --repo https://charts.pravega.io zookeeper-operator \ + --version 0.2.14 \ + --namespace=zookeeper \ + --create-namespace \ + --atomic \ + --debug + ``` -Expected output: + Expected output: -```bash -NAME READY STATUS RESTARTS AGE -cert-manager-7747db9d88-vgggn 1/1 Running 0 29m -cert-manager-cainjector-87c85c6ff-q945h 1/1 Running 1 29m -cert-manager-webhook-64dc9fff44-2p6tx 1/1 Running 0 29m -``` + ```bash + install.go:194: [debug] Original chart version: "0.2.14" + install.go:211: [debug] CHART PATH: /Users//.cache/helm/repository/zookeeper-operator-0.2.14.tgz + + # ... + NAME: zookeeper-operator + LAST DEPLOYED: Thu Mar 23 08:42:42 2023 + NAMESPACE: zookeeper + STATUS: deployed + REVISION: 1 + TEST SUITE: None + USER-SUPPLIED VALUES: + {} + + COMPUTED VALUES: + # ... -### Install Zookeeper {#install-zookeeper} + ``` -Kafka requires [Zookeeper](https://zookeeper.apache.org). Deploy a Zookeeper cluster if you don't already have one. +1. Verify that zookeeper-operator has been deployed and is in running state. -> Note: You are recommended to create a separate Zookeeper deployment for each Kafka cluster. If you want to share the same Zookeeper cluster across multiple Kafka cluster instances, use a unique zk path in the KafkaCluster CR to avoid conflicts (even with previous defunct KafkaCluster instances). + ```bash + kubectl get pods --namespace zookeeper + ``` -1. Install Zookeeper using the [Pravega's Zookeeper Operator](https://github.com/pravega/zookeeper-operator). + Expected output: ```bash - helm repo add pravega https://charts.pravega.io - helm repo update - helm install zookeeper-operator --namespace=zookeeper --create-namespace pravega/zookeeper-operator + NAME READY STATUS RESTARTS AGE + zookeeper-operator-5857967dcc-gm5l5 1/1 Running 0 3m22s ``` +### Deploy a Zookeeper cluster for Kafka {#deploy-a-zookeeper-cluster-for-kafka} + 1. Create a Zookeeper cluster. 
{{< include-code "create-zookeeper.sample" "bash" >}} -1. Verify that Zookeeper has beeb deployed. +1. Verify that Zookeeper has been deployed and is in running state with the configured number of replicas. ```bash kubectl get pods -n zookeeper @@ -114,92 +291,391 @@ Kafka requires [Zookeeper](https://zookeeper.apache.org). Deploy a Zookeeper clu zookeeper-operator-54444dbd9d-2tccj 1/1 Running 0 28m ``` -### Install Prometheus-operator +### Install prometheus-operator with Helm {#install-prometheus-operator-with-helm} + +{{< kafka-operator >}} uses [Prometheus](https://prometheus.io/) for exporting metrics of the Kafka cluster. It is recommended to deploy a Prometheus instance if you don't already have one. + +1. If you are installing prometheus-operator on a Red Hat OpenShift version **4.10** cluster, create a `SecurityContextConstraints` object `nonroot-v2` with the following configuration for Prometheus admission and operator service accounts to work. -Install the [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) and its CustomResourceDefinitions to the `default` namespace. + ```bash + oc create -f - < Note: OpenShift doesn't let you install Prometheus in the `default` namespace due to security considerations. 
+ + - Using the default `prometheus` namespace: - Add the prometheus repository to Helm: + ```bash + oc adm policy add-scc-to-user nonroot-v2 system:serviceaccount:prometheus:prometheus-kube-prometheus-admission + oc adm policy add-scc-to-user nonroot-v2 system:serviceaccount:prometheus:prometheus-kube-prometheus-operator + oc adm policy add-scc-to-user hostnetwork system:serviceaccount:prometheus:prometheus-operator-prometheus-node-exporter + oc adm policy add-scc-to-user node-exporter system:serviceaccount:prometheus:prometheus-operator-prometheus-node-exporter + ``` + + - Using a custom namespace or service account name for Prometheus: + + ```bash + oc adm policy add-scc-to-user nonroot-v2 system:serviceaccount:{NAMESPACE_FOR_PROMETHEUS}:{PROMETHEUS_ADMISSION_SERVICE_ACCOUNT_NAME} + oc adm policy add-scc-to-user nonroot-v2 system:serviceaccount:{NAMESPACE_FOR_PROMETHEUS}:{PROMETHEUS_OPERATOR_SERVICE_ACCOUNT_NAME} + oc adm policy add-scc-to-user hostnetwork system:serviceaccount:{NAMESPACE_FOR_PROMETHEUS}:{PROMETHEUS_NODE_EXPORTER_SERVICE_ACCOUNT_NAME} + oc adm policy add-scc-to-user node-exporter system:serviceaccount:{NAMESPACE_FOR_PROMETHEUS}:{PROMETHEUS_NODE_EXPORTER_SERVICE_ACCOUNT_NAME} + ``` + + Expected output: ```bash - helm repo add prometheus-community https://prometheus-community.github.io/helm-charts - helm repo update + clusterrole.rbac.authorization.k8s.io/system:openshift:scc:nonroot-v2 added: "{PROMETHEUS_ADMISSION_SERVICE_ACCOUNT_NAME}" + clusterrole.rbac.authorization.k8s.io/system:openshift:scc:nonroot-v2 added: "{PROMETHEUS_OPERATOR_SERVICE_ACCOUNT_NAME}" + clusterrole.rbac.authorization.k8s.io/system:openshift:scc:hostnetwork added: "{PROMETHEUS_NODE_EXPORTER_SERVICE_ACCOUNT_NAME}" + clusterrole.rbac.authorization.k8s.io/system:openshift:scc:node-exporter added: "{PROMETHEUS_NODE_EXPORTER_SERVICE_ACCOUNT_NAME}" + ``` + +1. 
Install the [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) and its CustomResourceDefinitions into the `prometheus` namespace. + - On an OpenShift cluster: + + ```bash + helm install \ + prometheus \ + --repo https://prometheus-community.github.io/helm-charts kube-prometheus-stack \ + --version 42.0.1 \ + --namespace prometheus \ + --create-namespace \ + --atomic \ + --debug \ + --set prometheusOperator.createCustomResource=true \ + --set defaultRules.enabled=false \ + --set alertmanager.enabled=false \ + --set grafana.enabled=false \ + --set kubeApiServer.enabled=false \ + --set kubelet.enabled=false \ + --set kubeControllerManager.enabled=false \ + --set coreDNS.enabled=false \ + --set kubeEtcd.enabled=false \ + --set kubeScheduler.enabled=false \ + --set kubeProxy.enabled=false \ + --set kubeStateMetrics.enabled=false \ + --set nodeExporter.enabled=false \ + --set prometheus.enabled=false \ + --set prometheusOperator.containerSecurityContext.capabilities.drop\[0\]="ALL" \ + --set prometheusOperator.containerSecurityContext.seccompProfile.type=RuntimeDefault \ + --set prometheusOperator.admissionWebhooks.createSecretJob.securityContext.allowPrivilegeEscalation=false \ + --set prometheusOperator.admissionWebhooks.createSecretJob.securityContext.capabilities.drop\[0\]="ALL" \ + --set prometheusOperator.admissionWebhooks.createSecretJob.securityContext.seccompProfile.type=RuntimeDefault \ + --set prometheusOperator.admissionWebhooks.patchWebhookJob.securityContext.allowPrivilegeEscalation=false \ + --set prometheusOperator.admissionWebhooks.patchWebhookJob.securityContext.capabilities.drop\[0\]="ALL" \ + --set prometheusOperator.admissionWebhooks.patchWebhookJob.securityContext.seccompProfile.type=RuntimeDefault + ``` + + - On a regular Kubernetes cluster: + + ```bash + helm install prometheus \ + --repo https://prometheus-community.github.io/helm-charts kube-prometheus-stack \ + --version 42.0.1 \ + --namespace prometheus \ + 
--create-namespace \
+    --atomic \
+    --debug \
+    --set prometheusOperator.createCustomResource=true \
+    --set defaultRules.enabled=false \
+    --set alertmanager.enabled=false \
+    --set grafana.enabled=false \
+    --set kubeApiServer.enabled=false \
+    --set kubelet.enabled=false \
+    --set kubeControllerManager.enabled=false \
+    --set coreDNS.enabled=false \
+    --set kubeEtcd.enabled=false \
+    --set kubeScheduler.enabled=false \
+    --set kubeProxy.enabled=false \
+    --set kubeStateMetrics.enabled=false \
+    --set nodeExporter.enabled=false \
+    --set prometheus.enabled=false
+    ```
+
+    Expected output:
+
+    ```bash
+    install.go:194: [debug] Original chart version: "42.0.1"
+    install.go:211: [debug] CHART PATH: /Users//.cache/helm/repository/kube-prometheus-stack-42.0.1.tgz
+
+    # ...
+    NAME: prometheus
+    LAST DEPLOYED: Thu Mar 23 09:28:29 2023
+    NAMESPACE: prometheus
+    STATUS: deployed
+    REVISION: 1
+    TEST SUITE: None
+    USER-SUPPLIED VALUES:
+    # ...
+
+    COMPUTED VALUES:
+    # ...
+    NOTES:
+    kube-prometheus-stack has been installed. Check its status by running:
+    kubectl --namespace prometheus get pods -l "release=prometheus"
+
+    Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
+    ```
+
+1. Verify that prometheus-operator has been deployed and is in running state.
+ + ```bash + kubectl get pods -n prometheus ``` - Install only the Prometheus-operator: + Expected output: ```bash - helm install prometheus --namespace default prometheus-community/kube-prometheus-stack \ - --set prometheusOperator.createCustomResource=true \ - --set defaultRules.enabled=false \ - --set alertmanager.enabled=false \ - --set grafana.enabled=false \ - --set kubeApiServer.enabled=false \ - --set kubelet.enabled=false \ - --set kubeControllerManager.enabled=false \ - --set coreDNS.enabled=false \ - --set kubeEtcd.enabled=false \ - --set kubeScheduler.enabled=false \ - --set kubeProxy.enabled=false \ - --set kubeStateMetrics.enabled=false \ - --set nodeExporter.enabled=false \ - --set prometheus.enabled=false + NAME READY STATUS RESTARTS AGE + prometheus-kube-prometheus-operator-646d5fd7d5-s72jn 1/1 Running 0 15m ``` -### Install the Kafka operator with Helm {#kafka-operator-helm} +### Install {{< kafka-operator >}} with Helm {#install-kafka-operator-with-helm} -You can deploy the Kafka operator using a [Helm chart](https://github.com/banzaicloud/kafka-operator/tree/master/charts). Complete the following steps. +{{< kafka-operator >}} can be deployed using its [Helm chart](https://github.com/banzaicloud/koperator/tree/{{< param "versionnumbers-sdm.koperatorCurrentversion" >}}/charts). -1. Install the kafka-operator CustomResourceDefinition resources (adjust the version number to the Kafka operator release you want to install). This is performed in a separate step to allow you to easily uninstall and reinstall kafka-operator without deleting your installed custom resources. +1. Install the {{< kafka-operator >}} CustomResourceDefinition resources (adjust the version number to the {{< kafka-operator >}} release you want to install). This is performed in a separate step to allow you to uninstall and reinstall {{< kafka-operator >}} without deleting your installed custom resources. 
```bash - kubectl create --validate=false -f https://github.com/banzaicloud/kafka-operator/releases/download/v0.15.1/kafka-operator.crds.yaml + kubectl create \ + --validate=false \ + -f https://github.com/banzaicloud/koperator/releases/download/v{{< param "versionnumbers-sdm.koperatorCurrentversion" >}}/kafka-operator.crds.yaml ``` -1. Add the Banzai Cloud repository to Helm. + Expected output: ```bash - helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com/ - helm repo update + customresourcedefinition.apiextensions.k8s.io/cruisecontroloperations.kafka.banzaicloud.io created + customresourcedefinition.apiextensions.k8s.io/kafkaclusters.kafka.banzaicloud.io created + customresourcedefinition.apiextensions.k8s.io/kafkatopics.kafka.banzaicloud.io created + customresourcedefinition.apiextensions.k8s.io/kafkausers.kafka.banzaicloud.io created ``` -1. Install the Kafka operator into the *kafka* namespace: +1. If you are installing {{< kafka-operator >}} on a Red Hat OpenShift cluster: + + 1. Elevate the permissions of the Koperator namespace. + + - Using the default `kafka` namespace: + + ```bash + oc adm policy add-scc-to-group anyuid system:serviceaccounts:kafka + ``` + + - Using a custom namespace for Koperator: + + ```bash + oc adm policy add-scc-to-group anyuid system:serviceaccounts:{NAMESPACE_FOR_KOPERATOR} + ``` + + Expected output: + + ```bash + clusterrole.rbac.authorization.k8s.io/system:openshift:scc:anyuid added: "system:serviceaccounts:{NAMESPACE_FOR_KOPERATOR}" + ``` + + 1. If the Kafka cluster is going to run in a different namespace than {{< kafka-operator >}}, elevate the permissions of the Kafka cluster broker service account (`ServiceAccountName` provided in the KafkaCluster custom resource). 
+
+      ```bash
+      oc adm policy add-scc-to-user anyuid system:serviceaccount:{NAMESPACE_FOR_KAFKA_CLUSTER_BROKER_SERVICE_ACCOUNT}:{KAFKA_CLUSTER_BROKER_SERVICE_ACCOUNT_NAME}
+      ```
+
+      Expected output:
+
+      ```bash
+      clusterrole.rbac.authorization.k8s.io/system:openshift:scc:anyuid added: "system:serviceaccount:{NAMESPACE_FOR_KAFKA_CLUSTER_BROKER_SERVICE_ACCOUNT}:{KAFKA_CLUSTER_BROKER_SERVICE_ACCOUNT_NAME}"
+      ```
+
+1. Install {{< kafka-operator >}} into the *kafka* namespace:

    ```bash
-    helm install kafka-operator --namespace=kafka --create-namespace banzaicloud-stable/kafka-operator
+    helm install \
+    kafka-operator \
+    --repo https://kubernetes-charts.banzaicloud.com kafka-operator \
+    --version {{< param "versionnumbers-sdm.koperatorCurrentversion" >}} \
+    --namespace=kafka \
+    --create-namespace \
+    --atomic \
+    --debug
    ```

-1. Create the Kafka cluster using the KafkaCluster custom resource. You can find various examples for the custom resource in the [Kafka operator repository](https://github.com/banzaicloud/kafka-operator/tree/master/config/samples).
+    Expected output:

-    {{< include-headless "warning-listener-protocol.md" "supertubes/kafka-operator" >}}
+    ```bash
+    install.go:194: [debug] Original chart version: ""
+    install.go:211: [debug] CHART PATH: /Users//development/src/github.com/banzaicloud/koperator/kafka-operator-{{< param "versionnumbers-sdm.koperatorCurrentversion" >}}.tgz
+
+    # ...
+    NAME: kafka-operator
+    LAST DEPLOYED: Thu Mar 23 10:05:11 2023
+    NAMESPACE: kafka
+    STATUS: deployed
+    REVISION: 1
+    TEST SUITE: None
+    USER-SUPPLIED VALUES:
+    # ...
+    ```
+
+1. Verify that Koperator has been deployed and is in running state.
+
+    ```bash
+    kubectl get pods -n kafka
+    ```
+
+    Expected output:
+
+    ```bash
+    NAME                                       READY   STATUS    RESTARTS   AGE
+    kafka-operator-operator-8458b45587-286f9   2/2     Running   0          62s
+    ```
+
+### Deploy a Kafka cluster {#deploy-a-kafka-cluster}
+
+1. Create the Kafka cluster using the KafkaCluster custom resource.
You can find various examples for the custom resource in {{% xref "/sdm/koperator/configurations/kafkacluster/_index.md" %}} and in the [{{< kafka-operator >}} repository](https://github.com/banzaicloud/koperator/tree/{{< param "versionnumbers-sdm.koperatorCurrentversion" >}}/config/samples). + + {{< include-headless "warning-listener-protocol.md" "sdm/koperator" >}} - To create a sample Kafka cluster that allows unencrypted client connections, run the following command: ```bash - kubectl create -n kafka -f https://raw.githubusercontent.com/banzaicloud/kafka-operator/master/config/samples/simplekafkacluster.yaml + kubectl create \ + -n kafka \ + -f https://raw.githubusercontent.com/banzaicloud/koperator/{{< param "versionnumbers-sdm.koperatorCurrentversion" >}}/config/samples/simplekafkacluster.yaml ``` - - To create a sample Kafka cluster that allows TLS-encrypted client connections, run the following command. For details on the configuration parameters related to SSL, see {{% xref "/docs/supertubes/kafka-operator/ssl.md#enable-ssl" %}}. + - To create a sample Kafka cluster that allows TLS-encrypted client connections, run the following command. For details on the configuration parameters related to SSL, see {{% xref "/sdm/koperator/ssl.md" %}}. ```bash - kubectl create -n kafka -f https://raw.githubusercontent.com/banzaicloud/kafka-operator/master/config/samples/simplekafkacluster_ssl.yaml + kubectl create \ + -n kafka \ + -f https://raw.githubusercontent.com/banzaicloud/koperator/{{< param "versionnumbers-sdm.koperatorCurrentversion" >}}/config/samples/simplekafkacluster_ssl.yaml ``` -1. If you have installed the Prometheus operator, create the ServiceMonitors. Prometheus will be installed and configured properly for the Kafka operator. + Expected output: + + ```bash + kafkacluster.kafka.banzaicloud.io/kafka created + ``` + +1. Wait and verify that the Kafka cluster resources have been deployed and are in running state. 
+ + ```bash + kubectl -n kafka get kafkaclusters.kafka.banzaicloud.io kafka --watch + ``` + + Expected output: + + ```bash + NAME CLUSTER STATE CLUSTER ALERT COUNT LAST SUCCESSFUL UPGRADE UPGRADE ERROR COUNT AGE + kafka ClusterReconciling 0 0 5s + kafka ClusterReconciling 0 0 7s + kafka ClusterReconciling 0 0 8s + kafka ClusterReconciling 0 0 9s + kafka ClusterReconciling 0 0 2m17s + kafka ClusterReconciling 0 0 3m11s + kafka ClusterReconciling 0 0 3m27s + kafka ClusterReconciling 0 0 3m29s + kafka ClusterReconciling 0 0 3m31s + kafka ClusterReconciling 0 0 3m32s + kafka ClusterReconciling 0 0 3m32s + kafka ClusterRunning 0 0 3m32s + kafka ClusterReconciling 0 0 3m32s + kafka ClusterRunning 0 0 3m34s + kafka ClusterReconciling 0 0 4m23s + kafka ClusterRunning 0 0 4m25s + kafka ClusterReconciling 0 0 4m25s + kafka ClusterRunning 0 0 4m27s + kafka ClusterRunning 0 0 4m37s + kafka ClusterReconciling 0 0 4m37s + kafka ClusterRunning 0 0 4m39s + ``` + + ```bash + kubectl get pods -n kafka + ``` + + Expected output: + + ```bash + kafka-0-9brj4 1/1 Running 0 94s + kafka-1-c2spf 1/1 Running 0 93s + kafka-2-p6sg2 1/1 Running 0 92s + kafka-cruisecontrol-776f49fdbb-rjhp8 1/1 Running 0 51s + kafka-operator-operator-7d47f65d86-2mx6b 2/2 Running 0 13m + ``` + +1. If prometheus-operator is deployed, create a Prometheus instance and corresponding ServiceMonitors for {{< kafka-operator >}}. 
+ + ```bash + kubectl create \ + -n kafka \ + -f https://raw.githubusercontent.com/banzaicloud/koperator/{{< param "versionnumbers-sdm.koperatorCurrentversion" >}}/config/samples/kafkacluster-prometheus.yaml + ``` + + Expected output: ```bash - kubectl create -n kafka -f https://raw.githubusercontent.com/banzaicloud/kafka-operator/master/config/samples/kafkacluster-prometheus.yaml + clusterrole.rbac.authorization.k8s.io/prometheus created + clusterrolebinding.rbac.authorization.k8s.io/prometheus created + prometheus.monitoring.coreos.com/kafka-prometheus created + prometheusrule.monitoring.coreos.com/kafka-alerts created + serviceaccount/prometheus created + servicemonitor.monitoring.coreos.com/cruisecontrol-servicemonitor created + servicemonitor.monitoring.coreos.com/kafka-servicemonitor created ``` -1. Verify that the Kafka cluster has been created. +1. Wait and verify that the Kafka cluster Prometheus instance has been deployed and is in running state. ```bash kubectl get pods -n kafka @@ -214,10 +690,10 @@ You can deploy the Kafka operator using a [Helm chart](https://github.com/banzai kafka-2-lppzr 1/1 Running 0 15m kafka-cruisecontrol-fb659b84b-7cwpn 1/1 Running 0 15m kafka-operator-operator-8bb75c7fb-7w4lh 2/2 Running 0 17m - prometheus-kafka-prometheus-0 2/2 Running 1 16m + prometheus-kafka-prometheus-0 2/2 Running 0 16m ``` -## Test your deployment +## Test your deployment {#test-your-deployment} - For a simple test, see [Test provisioned Kafka Cluster](../test/). - For a more in-depth view at using SSL and the `KafkaUser` CRD, see [Securing Kafka With SSL](../ssl/). 
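As an optional smoke test of the topic-management CRDs installed earlier, you can also create a `KafkaTopic` custom resource. The following is an illustrative sketch, not one of the project's sample manifests: the `kafka.banzaicloud.io/v1alpha1` API version, topic name, and sizing values are assumptions to adapt to your cluster.

```yaml
apiVersion: kafka.banzaicloud.io/v1alpha1
kind: KafkaTopic
metadata:
  name: test-topic
  namespace: kafka
spec:
  clusterRef:
    name: kafka         # name of the KafkaCluster created above
  name: test-topic      # topic name inside Kafka
  partitions: 3
  replicationFactor: 2  # should not exceed the number of brokers
```

Apply it with `kubectl create -n kafka -f <file>`, then inspect it with `kubectl describe kafkatopic test-topic -n kafka`.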
diff --git a/docs/kafkacat-ssl.sample b/docs/kafkacat-ssl.sample index 62d45cc..f746b15 100644 --- a/docs/kafkacat-ssl.sample +++ b/docs/kafkacat-ssl.sample @@ -6,7 +6,7 @@ metadata: spec: containers: - name: kafka-test - image: solsson/kafkacat + image: edenhill/kcat:1.7.0 # Just spin & wait forever command: [ "/bin/sh", "-c", "--" ] args: [ "while true; do sleep 3000; done;" ] diff --git a/docs/license.md b/docs/license.md index 3b42f98..0d69642 100644 --- a/docs/license.md +++ b/docs/license.md @@ -1,10 +1,8 @@ --- -title: License of Kafka operator +title: License of Koperator weight: 10000 --- -Copyright (c) 2019 [Banzai Cloud, Inc.](https://banzaicloud.com) - Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at diff --git a/docs/monitoring.md b/docs/monitoring.md index 1f9d1fb..9324851 100644 --- a/docs/monitoring.md +++ b/docs/monitoring.md @@ -1,20 +1,30 @@ --- -title: Monitoring Kafka on Kubernetes -shorttitle: Monitoring +title: Monitoring Apache Kafka on Kubernetes +linktitle: Monitoring weight: 600 --- -This documentation shows you how to enable custom monitoring on a Kafka cluster installed using the [Kafka operator](/products/kafka-operator/). +This documentation shows you how to enable custom monitoring on an Apache Kafka cluster installed using [{{< kafka-operator >}}](https://github.com/banzaicloud/koperator). ## Using Helm for Prometheus -By default operator installs Kafka Pods with the following annotations, also it opens port 9020 in all brokers to enable scraping. +By default, the {{< kafka-operator >}} does not set annotations on the broker pods. To set annotations on the broker pods, specify them in the KafkaCluster CR. Also, you must open port 9020 on brokers and in CruiseControl to enable scraping. 
For example: ```yaml - "prometheus.io/scrape": "true" - "prometheus.io/port": "9020" +brokerConfigGroups: + default: + brokerAnnotations: + prometheus.io/scrape: "true" + prometheus.io/port: "9020" + +# ... + +cruiseControlConfig: + cruiseControlAnnotations: + prometheus.io/port: "9020" + prometheus.io/scrape: "true" ``` Prometheus must be configured to recognize these annotations. The following example contains the required config. @@ -48,16 +58,16 @@ Prometheus must be configured to recognize these annotations. The following exam target_label: __address__ ``` -Using the provided [CR](https://github.com/banzaicloud/kafka-operator/blob/master/config/samples/banzaicloud_v1beta1_kafkacluster.yaml), the operator installs the official [jmx exporter](https://github.com/prometheus/jmx_exporter) for Prometheus. +If you are using the provided [CR](https://github.com/banzaicloud/koperator/blob/master/config/samples/banzaicloud_v1beta1_kafkacluster.yaml), the operator installs the official [jmx exporter](https://github.com/prometheus/jmx_exporter) for Prometheus. -To change this behavior, modify the following lines in the end of the CR. +To change this behavior, modify the following lines at the end of the CR. ```yaml monitoringConfig: jmxImage describes the used prometheus jmx exporter agent container - jmxImage: "banzaicloud/jmx-javaagent:0.12.0" + jmxImage: "banzaicloud/jmx-javaagent:0.15.0" pathToJar describes the path to the jar file in the given image - pathToJar: "/opt/jmx_exporter/jmx_prometheus_javaagent-0.12.0.jar" + pathToJar: "/opt/jmx_exporter/jmx_prometheus_javaagent-0.15.0.jar" kafkaJMXExporterConfig describes jmx exporter config for Kafka kafkaJMXExporterConfig: | lowercaseOutputName: true @@ -77,7 +87,7 @@ Configure the CR the following way: Disabling Headless service means the operator will set up Kafka with unique services per broker. -Once you have a cluster up and running create as many ServiceMonitors as brokers. 
+Once you have a cluster up and running, create as many ServiceMonitors as brokers.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
# ... (rest of the per-broker ServiceMonitor definition)
```

diff --git a/docs/rackawareness/index.md b/docs/rackawareness/index.md
index 9951c6a..556a743 100644
--- a/docs/rackawareness/index.md
+++ b/docs/rackawareness/index.md
@@ -1,16 +1,16 @@
 ---
 title: Configure rack awareness
-shorttitle: Rack awareness
+linktitle: Rack awareness
 weight: 750
 ---

-Kafka automatically replicates partitions across brokers, so if a broker fails, the data is safely preserved on another. Kafka's rack awareness feature spreads replicas of the same partition across different **failure groups** (racks or availability zones). This extends the guarantees Kafka provides for broker-failure to cover rack and availability zone (AZ) failures, limiting the risk of data loss should all the brokers in the same ack or AZ fail at once.
+Kafka automatically replicates partitions across brokers, so if a broker fails, the data is safely preserved on another. Kafka's rack awareness feature spreads replicas of the same partition across different **failure groups** (racks or availability zones). This extends the guarantees Kafka provides for broker-failure to cover rack and availability zone (AZ) failures, limiting the risk of data loss should all the brokers in the same rack or AZ fail at once.

-> Note: All brokers deployed by the Kafka operator must belong to the same Kubernetes cluster. If you want to spread your brokers across multiple Kubernetes clusters, as in a hybrid-cloud or multi-clouds environment (or just to add geo-redundancy to your setup), consider using our commercial [Supertubes](/products/supertubes/) solution.
+> Note: All brokers deployed by {{< kafka-operator >}} must belong to the same Kubernetes cluster.
-Since rack awareness is so vitally important, especially in multi-region and hybrid-cloud environments, the [Kafka operator](https://github.com/banzaicloud/kafka-operator) provides an automated solution for it, and allows fine-grained broker rack configuration based on pod affinities and anti-affinities. (To learn more about affinities and anti-affinities, see [Taints and tolerations, pod and node affinities demystified]({{< blogref "k8s-taints-tolerations-affinities.md" >}}).) +Since rack awareness is so vitally important, especially in multi-region and hybrid-cloud environments, [{{< kafka-operator >}}](https://github.com/banzaicloud/koperator) provides an automated solution for it, and allows fine-grained broker rack configuration based on pod affinities and anti-affinities. (To learn more about affinities and anti-affinities, see [Taints and tolerations, pod and node affinities demystified]({{< blogref "k8s-taints-tolerations-affinities.md" >}}).) -When [well-known Kubernetes labels](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/) are available (for example, AZ, node labels, and so on), the Kafka operator attempts to improve broker resilience by default. +When [well-known Kubernetes labels](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/) are available (for example, AZ, node labels, and so on), {{< kafka-operator >}} attempts to improve broker resilience by default. ![Rack Awareness](kafkarack.png) @@ -43,10 +43,10 @@ Note that depending on your use case, you might need additional configuration on ## Under the hood -As mentioned earlier, `broker.rack` is a read-only broker config, so is set whenever the broker starts or restarts. The Banzai Cloud [Kafka operator](https://github.com/banzaicloud/kafka-operator) holds all its configs within a ConfigMap in each broker. +As mentioned earlier, `broker.rack` is a read-only broker config, so is set whenever the broker starts or restarts. 
[{{< kafka-operator >}}](https://github.com/banzaicloud/koperator) holds all its configs within a ConfigMap in each broker. Getting label values from nodes and using them to generate a ConfigMap is relatively easy, but to determine where the exact broker/pod is scheduled, the operator has to wait until the pod is *actually* scheduled to a node. Luckily, Kubernetes schedules pods even when a given ConfigMap is unavailable. However, the corresponding pod will remain in a pending state as long as the ConfigMap is not available to mount. The operator makes use of this pending state to gather all the necessary node labels and initialize a ConfigMap with the fetched data. To take advantage of this, we introduced a status field called `RackAwarenessState` in our CRD. The operator populates this status field with two values, `WaitingForRackAwareness` and `Configured`. -![Rack Awareness](/img/blog/kafka-rack-awareness/kafkarack.gif) + ## When a broker fails diff --git a/docs/rackawareness/kafkarack.png b/docs/rackawareness/kafkarack.png index 6b407bf..778d712 100644 Binary files a/docs/rackawareness/kafkarack.png and b/docs/rackawareness/kafkarack.png differ diff --git a/docs/reference/_index.md b/docs/reference/_index.md new file mode 100644 index 0000000..55d2252 --- /dev/null +++ b/docs/reference/_index.md @@ -0,0 +1,8 @@ +--- +title: CRD +weight: 990 +--- + +The following sections contain the reference documentation of the various custom resource definitions (CRDs) that are specific to Koperator. + +For sample YAML files, see the [samples directory in the GitHub project](https://github.com/banzaicloud/koperator/tree/master/config/samples). 
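Before diving into the full schema reference, it can help to see the overall shape of a custom resource. The following abridged `KafkaCluster` sketch is illustrative only: the Zookeeper address, storage sizes, and broker layout are assumptions, and required sections are elided. Use the sample manifests linked above as the authoritative starting point.

```yaml
apiVersion: kafka.banzaicloud.io/v1beta1
kind: KafkaCluster
metadata:
  name: kafka
  namespace: kafka
spec:
  zkAddresses:
    - "zookeeper-client.zookeeper:2181"  # assumed Zookeeper endpoint
  brokerConfigGroups:
    default:                             # reusable per-broker configuration
      storageConfigs:
        - mountPath: "/kafka-logs"
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi
  brokers:                               # required (see the KafkaCluster reference)
    - id: 0
      brokerConfigGroup: "default"
    - id: 1
      brokerConfigGroup: "default"
  # listenersConfig, cruiseControlConfig, and other sections omitted for brevity
```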
diff --git a/docs/reference/crd/kafkaclusters.kafka.banzaicloud.io.md b/docs/reference/crd/kafkaclusters.kafka.banzaicloud.io.md new file mode 100644 index 0000000..d596b3c --- /dev/null +++ b/docs/reference/crd/kafkaclusters.kafka.banzaicloud.io.md @@ -0,0 +1,31382 @@ +--- +title: KafkaCluster CRD schema reference (group kafka.banzaicloud.io) +linkTitle: KafkaCluster +description: | + KafkaCluster is the Schema for the kafkaclusters API +weight: 100 +crd: + name_camelcase: KafkaCluster + name_plural: kafkaclusters + name_singular: kafkacluster + group: kafka.banzaicloud.io + technical_name: kafkaclusters.kafka.banzaicloud.io + scope: Namespaced + source_repository: ../../ + source_repository_ref: master + versions: + - v1beta1 + topics: +layout: crd +owner: + - https://github.com/banzaicloud/ +aliases: + - /reference/cp-k8s-api/kafkaclusters.kafka.banzaicloud.io/ +technical_name: kafkaclusters.kafka.banzaicloud.io +source_repository: ../../ +source_repository_ref: master +--- + +## KafkaCluster + + +KafkaCluster is the Schema for the kafkaclusters API +
+
+- **Full name:** `kafkaclusters.kafka.banzaicloud.io`
+- **Group:** `kafka.banzaicloud.io`
+- **Singular name:** `kafkacluster`
+- **Plural name:** `kafkaclusters`
+- **Scope:** Namespaced
+- **Versions:** v1beta1
+
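As a concrete illustration of the `spec.alertManagerConfig` fields documented in this reference, a hypothetical fragment could look like the following (the limit values are illustrative, not recommendations):

```yaml
spec:
  alertManagerConfig:
    # alert-triggered downscaling is disabled once the cluster has 3 or fewer brokers
    downScaleLimit: 3
    # alert-triggered upscaling is disabled once the cluster has 10 or more brokers
    upScaleLimit: 10
```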
+
+## Version v1beta1 {#v1beta1}
+
+## Properties {#property-details-v1beta1}
+
+- `.apiVersion` (string): APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+- `.kind` (string): Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+- `.metadata` (object)
+- `.spec` (object): KafkaClusterSpec defines the desired state of KafkaCluster.
+- `.spec.alertManagerConfig` (object): AlertManagerConfig defines configuration for alert manager.
+- `.spec.alertManagerConfig.downScaleLimit` (integer): DownScaleLimit is the limit for auto-downscaling the Kafka cluster. Once the size of the cluster (number of brokers) reaches or falls below this limit, the auto-downscaling triggered by alerts is disabled until the cluster size exceeds this limit. This limit is not enforced if this field is omitted or is <= 0.
+- `.spec.alertManagerConfig.upScaleLimit` (integer): UpScaleLimit is the limit for auto-upscaling the Kafka cluster. Once the size of the cluster (number of brokers) reaches or exceeds this limit, the auto-upscaling triggered by alerts is disabled until the cluster size falls below this limit. This limit is not enforced if this field is omitted or is <= 0.
+- `.spec.brokerConfigGroups` (object)
+- `.spec.brokers` (array, required)
+- `.spec.brokers[*]` (object): Broker defines the broker basic configuration.
+- `.spec.brokers[*].brokerConfig` (object): BrokerConfig defines the broker configuration.
+- `.spec.brokers[*].brokerConfig.affinity` (object): Any definition received through this field will override the default behaviour of the OneBrokerPerNode flag, and the operator supposes that the user is aware of how scheduling is done by Kubernetes. Affinity could be set through brokerConfigGroups definitions and can be set for individual brokers as well, where the latter setting will override the group setting.
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity` (object): Describes node affinity scheduling rules for the pod.
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution` (array): The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred.
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*]` (object): An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op).
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference` (object, required): A node selector term, associated with the corresponding weight.
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchExpressions` (array): A list of node selector requirements by node's labels.
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchExpressions[*]` (object): A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchExpressions[*].key` (string, required): The label key that the selector applies to.
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchExpressions[*].operator` (string, required): Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchExpressions[*].values` (array): An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchExpressions[*].values[*]` (string)
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchFields` (array): A list of node selector requirements by node's fields.
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchFields[*]` (object): A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchFields[*].key` (string, required): The label key that the selector applies to.
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchFields[*].operator` (string, required): Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchFields[*].values` (array): An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchFields[*].values[*]` (string)
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].weight` (integer, required): Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution` (object): If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node.
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms` (array, required): Required. A list of node selector terms. The terms are ORed.
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*]` (object): A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchExpressions` (array): A list of node selector requirements by node's labels.
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchExpressions[*]` (object): A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+- `.spec.brokers[*].brokerConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchExpressions[*].key` (string, required): The label key that the selector applies to.

+
+
+
+string +Required +
+ +
+

The label key that the selector applies to.

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchExpressions[*].operator

+
+
+
+string +Required +
+ +
+

Represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchExpressions[*].values

+
+
+
+array + +
+ +
+

An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchExpressions[*].values[*]

+
+
+
+string + +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchFields

+
+
+
+array + +
+ +
+

A list of node selector requirements by node’s fields.

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchFields[*]

+
+
+
+object + +
+ +
+

A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchFields[*].key

+
+
+
+string +Required +
+ +
+

The label key that the selector applies to.

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchFields[*].operator

+
+
+
+string +Required +
+ +
+

Represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchFields[*].values

+
+
+
+array + +
+ +
+

An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchFields[*].values[*]

+
+
+
+string + +
+ +
+
+ +
+
+

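As a sketch of how the nodeAffinity fields above fit together in a KafkaCluster resource, a broker can be restricted to a labeled node pool and nudged toward a zone. The label keys and values here (`node-pool`, `kafka-brokers`, the zone name) are illustrative assumptions, not names required by Koperator:

```yaml
apiVersion: kafka.banzaicloud.io/v1beta1
kind: KafkaCluster
metadata:
  name: kafka
spec:
  brokers:
    - id: 0
      brokerConfig:
        affinity:
          nodeAffinity:
            # Hard requirement: schedule only onto nodes carrying this label.
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: node-pool            # illustrative label key
                      operator: In
                      values:
                        - kafka-brokers
            # Soft preference: favor one zone, but fall back if it is full.
            preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 50                      # must be in the range 1-100
                preference:
                  matchExpressions:
                    - key: topology.kubernetes.io/zone
                      operator: In
                      values:
                        - us-east-1a            # illustrative zone name
```

Note that for the `Exists` and `DoesNotExist` operators the `values` list must be omitted or empty, while `In` and `NotIn` require a non-empty list, as described in the field reference above.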
.spec.brokers[*].brokerConfig.affinity.podAffinity

Type: object

Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution

Type: array

The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding “weight” to the sum if the node has pods which match the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*]

Type: object

The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s).

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm

Type: object (required)

Required. A pod affinity term, associated with the corresponding weight.

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector

Type: object

A label query over a set of resources, in this case pods.

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions

Type: array

matchExpressions is a list of label selector requirements. The requirements are ANDed.

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions[*]

Type: object

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions[*].key

Type: string (required)

key is the label key that the selector applies to.

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions[*].operator

Type: string (required)

operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions[*].values

Type: array

values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions[*].values[*]

Type: string

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchLabels

Type: object

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector

Type: object

A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means “this pod’s namespace”. An empty selector ({}) matches all namespaces.

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions

Type: array

matchExpressions is a list of label selector requirements. The requirements are ANDed.

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions[*]

Type: object

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions[*].key

Type: string (required)

key is the label key that the selector applies to.

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions[*].operator

Type: string (required)

operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions[*].values

Type: array

values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions[*].values[*]

Type: string

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchLabels

Type: object

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaces

Type: array

namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means “this pod’s namespace”.

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaces[*]

Type: string

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.topologyKey

Type: string (required)

This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.

.spec.brokers[*].brokerConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].weight

Type: integer (required)

weight associated with matching the corresponding podAffinityTerm, in the range 1-100.

.spec.brokers[*].brokerConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution

Type: array

If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.

.spec.brokers[*].brokerConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*]

Type: object

Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which a pod of the set of pods is running.

.spec.brokers[*].brokerConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector

Type: object

A label query over a set of resources, in this case pods.

.spec.brokers[*].brokerConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions

Type: array

matchExpressions is a list of label selector requirements. The requirements are ANDed.

.spec.brokers[*].brokerConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions[*]

Type: object

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

.spec.brokers[*].brokerConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions[*].key

Type: string (required)

key is the label key that the selector applies to.

.spec.brokers[*].brokerConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions[*].operator

Type: string (required)

operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

.spec.brokers[*].brokerConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions[*].values

Type: array

values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

.spec.brokers[*].brokerConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions[*].values[*]

Type: string

.spec.brokers[*].brokerConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchLabels

Type: object

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.

.spec.brokers[*].brokerConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector

Type: object

A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means “this pod’s namespace”. An empty selector ({}) matches all namespaces.

.spec.brokers[*].brokerConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions

Type: array

matchExpressions is a list of label selector requirements. The requirements are ANDed.

.spec.brokers[*].brokerConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions[*]

Type: object

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

.spec.brokers[*].brokerConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions[*].key

Type: string (required)

key is the label key that the selector applies to.

.spec.brokers[*].brokerConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions[*].operator

Type: string (required)

operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

.spec.brokers[*].brokerConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions[*].values

Type: array

values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

.spec.brokers[*].brokerConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions[*].values[*]

Type: string

.spec.brokers[*].brokerConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchLabels

Type: object

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.

.spec.brokers[*].brokerConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaces

Type: array

namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means “this pod’s namespace”.

.spec.brokers[*].brokerConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaces[*]

Type: string

.spec.brokers[*].brokerConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].topologyKey

Type: string (required)

This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
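To illustrate the podAffinity fields above, the following fragment (shown only from the brokerConfig level; the pod label `app: zookeeper` is an illustrative assumption) expresses a soft preference for scheduling a broker on the same node as a ZooKeeper pod:

```yaml
brokerConfig:
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100                     # must be in the range 1-100
          podAffinityTerm:
            # Co-located means: same value of this node label, here the node itself.
            topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                app: zookeeper            # illustrative pod label
            namespaceSelector: {}         # empty selector matches all namespaces
```

Because this is a preference rather than a requirement, the scheduler can still place the broker elsewhere if no matching node is available.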
.spec.brokers[*].brokerConfig.affinity.podAntiAffinity

Type: object

Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)).

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution

Type: array

The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding “weight” to the sum if the node has pods which match the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*]

Type: object

The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s).

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm

Type: object (required)

Required. A pod affinity term, associated with the corresponding weight.

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector

Type: object

A label query over a set of resources, in this case pods.

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions

Type: array

matchExpressions is a list of label selector requirements. The requirements are ANDed.

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions[*]

Type: object

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions[*].key

Type: string (required)

key is the label key that the selector applies to.

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions[*].operator

Type: string (required)

operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions[*].values

Type: array

values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions[*].values[*]

Type: string

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchLabels

Type: object

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector

Type: object

A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means “this pod’s namespace”. An empty selector ({}) matches all namespaces.

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions

Type: array

matchExpressions is a list of label selector requirements. The requirements are ANDed.

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions[*]

Type: object

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions[*].key

Type: string (required)

key is the label key that the selector applies to.

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions[*].operator

Type: string (required)

operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions[*].values

Type: array

values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions[*].values[*]

Type: string

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchLabels

Type: object

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaces

Type: array

namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means “this pod’s namespace”.

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaces[*]

Type: string

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.topologyKey

Type: string (required)

This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].weight

Type: integer (required)

weight associated with matching the corresponding podAffinityTerm, in the range 1-100.
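A common use of the preferred anti-affinity fields above is spreading Kafka brokers across nodes so that a single node failure takes down at most one broker. A minimal sketch from the brokerConfig level (the pod label `app: kafka` is an illustrative assumption; check the labels Koperator actually sets on broker pods in your deployment):

```yaml
brokerConfig:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100                     # range 1-100; highest preference
          podAffinityTerm:
            # Avoid nodes that already run a pod matching the selector.
            topologyKey: kubernetes.io/hostname
            labelSelector:
              matchExpressions:
                - key: app
                  operator: In            # valid: In, NotIn, Exists, DoesNotExist
                  values:
                    - kafka               # illustrative pod label value
```

Using `topology.kubernetes.io/zone` as the topologyKey instead would spread brokers across availability zones rather than individual nodes.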
.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution

+
+
+
+array + +
+ +
+

If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*]

+
+
+
+object + +
+ +
+

Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector

+
+
+
+object + +
+ +
+

A label query over a set of resources, in this case pods.

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions

+
+
+
+array + +
+ +
+

matchExpressions is a list of label selector requirements. The requirements are ANDed.

`.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions[*]` (object)

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

`.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions[*].key` (string, required)

key is the label key that the selector applies to.

`.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions[*].operator` (string, required)

operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

`.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions[*].values` (array)

values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

`.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions[*].values[*]` (string)

`.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchLabels` (object)

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.

`.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector` (object)

A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.

`.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions` (array)

matchExpressions is a list of label selector requirements. The requirements are ANDed.

`.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions[*]` (object)

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

`.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions[*].key` (string, required)

key is the label key that the selector applies to.

`.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions[*].operator` (string, required)

operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

`.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions[*].values` (array)

values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

`.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions[*].values[*]` (string)

`.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchLabels` (object)

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.

`.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaces` (array)

namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".

`.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaces[*]` (string)

`.spec.brokers[*].brokerConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].topologyKey` (string, required)

This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
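As an illustration, the anti-affinity fields above can be combined to spread broker pods across nodes. The label key and value below are assumptions, not values the operator mandates:

```yaml
brokerConfig:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app          # hypothetical pod label
                operator: In
                values:
                  - kafka
          topologyKey: kubernetes.io/hostname
```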


`.spec.brokers[*].brokerConfig.brokerAnnotations` (object)

Custom annotations for the broker pods, e.g. Prometheus scraping annotations: prometheus.io/scrape: "true", prometheus.io/port: "9020".

`.spec.brokers[*].brokerConfig.brokerIngressMapping` (array)

BrokerIngressMapping allows assigning a specific ingress configuration to a specific broker. If left empty, all brokers inherit the default one specified under the external listeners config. Only used when ExternalListeners.Config is populated.

`.spec.brokers[*].brokerConfig.brokerIngressMapping[*]` (string)

`.spec.brokers[*].brokerConfig.brokerLabels` (object)

Custom labels for the broker pods; an example use case is Prometheus monitoring, where the group of each broker is captured as a label, e.g. kafka_broker_group: "default_group". These labels do not override the reserved labels that the operator relies on, such as "app", "brokerId", and "kafka_cr".

`.spec.brokers[*].brokerConfig.config` (string)
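A hypothetical brokerConfig snippet combining these fields; the annotation, label, and config values are illustrative only:

```yaml
brokerConfig:
  brokerAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9020"
  brokerLabels:
    kafka_broker_group: "default_group"
  config: |
    auto.create.topics.enable=false
```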

`.spec.brokers[*].brokerConfig.containers` (array)

Containers adds extra containers to the Kafka broker pod.

`.spec.brokers[*].brokerConfig.containers[*]` (object)

A single application container that you want to run within a pod.

`.spec.brokers[*].brokerConfig.containers[*].args` (array)

Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell

`.spec.brokers[*].brokerConfig.containers[*].args[*]` (string)

`.spec.brokers[*].brokerConfig.containers[*].command` (array)

Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell

`.spec.brokers[*].brokerConfig.containers[*].command[*]` (string)

`.spec.brokers[*].brokerConfig.containers[*].env` (array)

List of environment variables to set in the container. Cannot be updated.

`.spec.brokers[*].brokerConfig.containers[*].envFrom` (array)

List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated.

`.spec.brokers[*].brokerConfig.containers[*].envFrom[*]` (object)

EnvFromSource represents the source of a set of ConfigMaps.

`.spec.brokers[*].brokerConfig.containers[*].envFrom[*].configMapRef` (object)

The ConfigMap to select from.

`.spec.brokers[*].brokerConfig.containers[*].envFrom[*].configMapRef.name` (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

`.spec.brokers[*].brokerConfig.containers[*].envFrom[*].configMapRef.optional` (boolean)

Specify whether the ConfigMap must be defined.

`.spec.brokers[*].brokerConfig.containers[*].envFrom[*].prefix` (string)

An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.

`.spec.brokers[*].brokerConfig.containers[*].envFrom[*].secretRef` (object)

The Secret to select from.

`.spec.brokers[*].brokerConfig.containers[*].envFrom[*].secretRef.name` (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

`.spec.brokers[*].brokerConfig.containers[*].envFrom[*].secretRef.optional` (boolean)

Specify whether the Secret must be defined.

`.spec.brokers[*].brokerConfig.containers[*].env[*]` (object)

EnvVar represents an environment variable present in a Container.

`.spec.brokers[*].brokerConfig.containers[*].env[*].name` (string, required)

Name of the environment variable. Must be a C_IDENTIFIER.

`.spec.brokers[*].brokerConfig.containers[*].env[*].value` (string)

Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "".

`.spec.brokers[*].brokerConfig.containers[*].env[*].valueFrom` (object)

Source for the environment variable's value. Cannot be used if value is not empty.

`.spec.brokers[*].brokerConfig.containers[*].env[*].valueFrom.configMapKeyRef` (object)

Selects a key of a ConfigMap.

`.spec.brokers[*].brokerConfig.containers[*].env[*].valueFrom.configMapKeyRef.key` (string, required)

The key to select.

`.spec.brokers[*].brokerConfig.containers[*].env[*].valueFrom.configMapKeyRef.name` (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

`.spec.brokers[*].brokerConfig.containers[*].env[*].valueFrom.configMapKeyRef.optional` (boolean)

Specify whether the ConfigMap or its key must be defined.

`.spec.brokers[*].brokerConfig.containers[*].env[*].valueFrom.fieldRef` (object)

Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.

`.spec.brokers[*].brokerConfig.containers[*].env[*].valueFrom.fieldRef.apiVersion` (string)

Version of the schema the FieldPath is written in terms of, defaults to "v1".

`.spec.brokers[*].brokerConfig.containers[*].env[*].valueFrom.fieldRef.fieldPath` (string, required)

Path of the field to select in the specified API version.

`.spec.brokers[*].brokerConfig.containers[*].env[*].valueFrom.resourceFieldRef` (object)

Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.

`.spec.brokers[*].brokerConfig.containers[*].env[*].valueFrom.resourceFieldRef.containerName` (string)

Container name: required for volumes, optional for env vars.

`.spec.brokers[*].brokerConfig.containers[*].env[*].valueFrom.resourceFieldRef.divisor` (int or string)

Specifies the output format of the exposed resources, defaults to "1".

`.spec.brokers[*].brokerConfig.containers[*].env[*].valueFrom.resourceFieldRef.resource` (string, required)

Required: resource to select.

`.spec.brokers[*].brokerConfig.containers[*].env[*].valueFrom.secretKeyRef` (object)

Selects a key of a secret in the pod's namespace.

`.spec.brokers[*].brokerConfig.containers[*].env[*].valueFrom.secretKeyRef.key` (string, required)

The key of the secret to select from. Must be a valid secret key.

`.spec.brokers[*].brokerConfig.containers[*].env[*].valueFrom.secretKeyRef.name` (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

`.spec.brokers[*].brokerConfig.containers[*].env[*].valueFrom.secretKeyRef.optional` (boolean)

Specify whether the Secret or its key must be defined.
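For example, an extra sidecar container using the env and envFrom fields above might look like the following sketch. The container name, image, and variable names are assumptions for illustration:

```yaml
brokerConfig:
  containers:
    - name: metrics-sidecar          # hypothetical extra container
      image: example.com/metrics-sidecar:latest
      args: ["--port", "9020"]
      env:
        - name: LOG_LEVEL
          value: "info"
        - name: POD_NAME             # resolved from the pod's own metadata
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      envFrom:
        - configMapRef:
            name: sidecar-config     # hypothetical ConfigMap
          prefix: SIDECAR_
```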

`.spec.brokers[*].brokerConfig.containers[*].image` (string)

Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets.

`.spec.brokers[*].brokerConfig.containers[*].imagePullPolicy` (string)

Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images

`.spec.brokers[*].brokerConfig.containers[*].lifecycle` (object)

Actions that the management system should take in response to container lifecycle events. Cannot be updated.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.postStart` (object)

PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.postStart.exec` (object)

Exec specifies the action to take.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.postStart.exec.command` (array)

Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.postStart.exec.command[*]` (string)

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.postStart.httpGet` (object)

HTTPGet specifies the http request to perform.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.postStart.httpGet.host` (string)

Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.postStart.httpGet.httpHeaders` (array)

Custom headers to set in the request. HTTP allows repeated headers.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.postStart.httpGet.httpHeaders[*]` (object)

HTTPHeader describes a custom header to be used in HTTP probes.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.postStart.httpGet.httpHeaders[*].name` (string, required)

The header field name.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.postStart.httpGet.httpHeaders[*].value` (string, required)

The header field value.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.postStart.httpGet.path` (string)

Path to access on the HTTP server.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.postStart.httpGet.port` (int or string, required)

Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.postStart.httpGet.scheme` (string)

Scheme to use for connecting to the host. Defaults to HTTP.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.postStart.tcpSocket` (object)

Deprecated. TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a tcp handler is specified.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.postStart.tcpSocket.host` (string)

Optional: Host name to connect to, defaults to the pod IP.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.postStart.tcpSocket.port` (int or string, required)

Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.preStop` (object)

PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.preStop.exec` (object)

Exec specifies the action to take.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.preStop.exec.command` (array)

Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.preStop.exec.command[*]` (string)

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.preStop.httpGet` (object)

HTTPGet specifies the http request to perform.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.preStop.httpGet.host` (string)

Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.preStop.httpGet.httpHeaders` (array)

Custom headers to set in the request. HTTP allows repeated headers.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.preStop.httpGet.httpHeaders[*]` (object)

HTTPHeader describes a custom header to be used in HTTP probes.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.preStop.httpGet.httpHeaders[*].name` (string, required)

The header field name.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.preStop.httpGet.httpHeaders[*].value` (string, required)

The header field value.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.preStop.httpGet.path` (string)

Path to access on the HTTP server.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.preStop.httpGet.port` (int or string, required)

Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.preStop.httpGet.scheme` (string)

Scheme to use for connecting to the host. Defaults to HTTP.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.preStop.tcpSocket` (object)

Deprecated. TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a tcp handler is specified.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.preStop.tcpSocket.host` (string)

Optional: Host name to connect to, defaults to the pod IP.

`.spec.brokers[*].brokerConfig.containers[*].lifecycle.preStop.tcpSocket.port` (int or string, required)

Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
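A sketch of how the lifecycle fields can be combined on an extra container; the container name, image, command, endpoint, and port are illustrative assumptions:

```yaml
brokerConfig:
  containers:
    - name: sidecar                          # hypothetical extra container
      image: example.com/sidecar:latest
      lifecycle:
        postStart:
          httpGet:
            path: /ready                     # hypothetical endpoint
            port: 8080
            scheme: HTTP
        preStop:
          exec:
            command: ["/bin/sh", "-c", "sleep 10"]
```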

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe` (object)

Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.exec` (object)

Exec specifies the action to take.

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.exec.command` (array)

Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.exec.command[*]` (string)

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.failureThreshold` (integer)

Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.grpc` (object)

GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling the GRPCContainerProbe feature gate.

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.grpc.port` (integer, required)

Port number of the gRPC service. Number must be in the range 1 to 65535.

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.grpc.service` (string)

Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC.

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.httpGet` (object)

HTTPGet specifies the http request to perform.

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.httpGet.host` (string)

Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.httpGet.httpHeaders` (array)

Custom headers to set in the request. HTTP allows repeated headers.

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.httpGet.httpHeaders[*]` (object)

HTTPHeader describes a custom header to be used in HTTP probes.

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.httpGet.httpHeaders[*].name` (string, required)

The header field name.

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.httpGet.httpHeaders[*].value` (string, required)

The header field value.

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.httpGet.path` (string)

Path to access on the HTTP server.

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.httpGet.port` (int or string, required)

Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.httpGet.scheme` (string)

Scheme to use for connecting to the host. Defaults to HTTP.

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.initialDelaySeconds` (integer)

Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.periodSeconds` (integer)

How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.successThreshold` (integer)

Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.tcpSocket` (object)

TCPSocket specifies an action involving a TCP port.

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.tcpSocket.host` (string)

Optional: Host name to connect to, defaults to the pod IP.

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.tcpSocket.port` (int or string, required)

Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.terminationGracePeriodSeconds` (integer)

Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds between the time the processes running in the pod are sent a termination signal and the time they are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. The value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling the ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset.

`.spec.brokers[*].brokerConfig.containers[*].livenessProbe.timeoutSeconds` (integer)

Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
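Putting the probe fields together, a liveness probe for an extra container could look like the following sketch; the endpoint path, port, and timings are illustrative assumptions:

```yaml
livenessProbe:
  httpGet:
    path: /healthz       # hypothetical health endpoint
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 15
  periodSeconds: 10
  timeoutSeconds: 2
  failureThreshold: 3
```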

`.spec.brokers[*].brokerConfig.containers[*].name` (string, required)

Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.

`.spec.brokers[*].brokerConfig.containers[*].ports` (array)

List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information see https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated.

`.spec.brokers[*].brokerConfig.containers[*].ports[*]` (object)

ContainerPort represents a network port in a single container.

`.spec.brokers[*].brokerConfig.containers[*].ports[*].containerPort` (integer, required)

Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536.

`.spec.brokers[*].brokerConfig.containers[*].ports[*].hostIP` (string)

What host IP to bind the external port to.

`.spec.brokers[*].brokerConfig.containers[*].ports[*].hostPort` (integer)

Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this.

`.spec.brokers[*].brokerConfig.containers[*].ports[*].name` (string)

If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services.

`.spec.brokers[*].brokerConfig.containers[*].ports[*].protocol` (string)

Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".
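The port fields above could be filled in as follows; the port name and number are assumptions for illustration:

```yaml
ports:
  - name: metrics        # hypothetical named port
    containerPort: 9020
    protocol: TCP
```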

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe`** (object)

Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.exec`** (object)

Exec specifies the action to take.

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.exec.command`** (array)

Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.exec.command[*]`** (string)

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.failureThreshold`** (integer)

Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.grpc`** (object)

GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate.

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.grpc.port`** (integer, required)

Port number of the gRPC service. Number must be in the range 1 to 65535.

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.grpc.service`** (string)

Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC.

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.httpGet`** (object)

HTTPGet specifies the http request to perform.

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.httpGet.host`** (string)

Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.httpGet.httpHeaders`** (array)

Custom headers to set in the request. HTTP allows repeated headers.

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.httpGet.httpHeaders[*]`** (object)

HTTPHeader describes a custom header to be used in HTTP probes.

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.httpGet.httpHeaders[*].name`** (string, required)

The header field name.

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.httpGet.httpHeaders[*].value`** (string, required)

The header field value.

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.httpGet.path`** (string)

Path to access on the HTTP server.

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.httpGet.port`** (required)

Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.httpGet.scheme`** (string)

Scheme to use for connecting to the host. Defaults to HTTP.

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.initialDelaySeconds`** (integer)

Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.periodSeconds`** (integer)

How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.successThreshold`** (integer)

Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.tcpSocket`** (object)

TCPSocket specifies an action involving a TCP port.

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.tcpSocket.host`** (string)

Optional: Host name to connect to, defaults to the pod IP.

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.tcpSocket.port`** (required)

Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.terminationGracePeriodSeconds`** (integer)

Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset.

**`.spec.brokers[*].brokerConfig.containers[*].readinessProbe.timeoutSeconds`** (integer)

Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
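The probe fields above follow the standard Kubernetes probe schema. As a sketch, a broker pod container with an HTTP readiness probe could be configured like this (the container name, image, and probe endpoint are illustrative assumptions, not Koperator defaults):

```yaml
spec:
  brokers:
    - id: 0
      brokerConfig:
        containers:
          - name: metrics-sidecar        # hypothetical extra container
            image: example/sidecar:latest
            readinessProbe:
              httpGet:
                path: /healthz
                port: 8080
                scheme: HTTP
              initialDelaySeconds: 10   # wait before the first probe
              periodSeconds: 10         # probe every 10 seconds
              failureThreshold: 3       # unready after 3 consecutive failures
```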
**`.spec.brokers[*].brokerConfig.containers[*].resources`** (object)

Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

**`.spec.brokers[*].brokerConfig.containers[*].resources.limits`** (object)

Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

**`.spec.brokers[*].brokerConfig.containers[*].resources.requests`** (object)

Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
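Both `limits` and `requests` take standard Kubernetes resource quantities keyed by resource name. A minimal sketch (the values are illustrative, not recommendations):

```yaml
resources:
  requests:
    cpu: 500m       # half a CPU core guaranteed
    memory: 1Gi
  limits:
    cpu: "2"        # hard cap at two cores
    memory: 4Gi
```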
**`.spec.brokers[*].brokerConfig.containers[*].securityContext`** (object)

SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.allowPrivilegeEscalation`** (boolean)

AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN. Note that this field cannot be set when spec.os.name is windows.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.capabilities`** (object)

The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.capabilities.add`** (array)

Added capabilities.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.capabilities.add[*]`** (string)

Capability represents a POSIX capabilities type.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.capabilities.drop`** (array)

Removed capabilities.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.capabilities.drop[*]`** (string)

Capability represents a POSIX capabilities type.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.privileged`** (boolean)

Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.procMount`** (string)

procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.readOnlyRootFilesystem`** (boolean)

Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.runAsGroup`** (integer)

The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.runAsNonRoot`** (boolean)

Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.runAsUser`** (integer)

The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.seLinuxOptions`** (object)

The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.seLinuxOptions.level`** (string)

Level is SELinux level label that applies to the container.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.seLinuxOptions.role`** (string)

Role is a SELinux role label that applies to the container.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.seLinuxOptions.type`** (string)

Type is a SELinux type label that applies to the container.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.seLinuxOptions.user`** (string)

User is a SELinux user label that applies to the container.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.seccompProfile`** (object)

The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.seccompProfile.localhostProfile`** (string)

localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost".

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.seccompProfile.type`** (string, required)

type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.windowsOptions`** (object)

The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.windowsOptions.gmsaCredentialSpec`** (string)

GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.windowsOptions.gmsaCredentialSpecName`** (string)

GMSACredentialSpecName is the name of the GMSA credential spec to use.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.windowsOptions.hostProcess`** (boolean)

HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.

**`.spec.brokers[*].brokerConfig.containers[*].securityContext.windowsOptions.runAsUserName`** (string)

The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
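These fields map directly onto the standard Kubernetes container SecurityContext. A restrictive sketch (illustrative, not a Koperator default):

```yaml
securityContext:
  runAsNonRoot: true
  runAsUser: 1000              # hypothetical non-root UID
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop:
      - ALL                    # drop every POSIX capability
  seccompProfile:
    type: RuntimeDefault
```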
**`.spec.brokers[*].brokerConfig.containers[*].startupProbe`** (object)

StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.exec`** (object)

Exec specifies the action to take.

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.exec.command`** (array)

Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.exec.command[*]`** (string)

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.failureThreshold`** (integer)

Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.grpc`** (object)

GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate.

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.grpc.port`** (integer, required)

Port number of the gRPC service. Number must be in the range 1 to 65535.

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.grpc.service`** (string)

Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC.

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.httpGet`** (object)

HTTPGet specifies the http request to perform.

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.httpGet.host`** (string)

Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.httpGet.httpHeaders`** (array)

Custom headers to set in the request. HTTP allows repeated headers.

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.httpGet.httpHeaders[*]`** (object)

HTTPHeader describes a custom header to be used in HTTP probes.

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.httpGet.httpHeaders[*].name`** (string, required)

The header field name.

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.httpGet.httpHeaders[*].value`** (string, required)

The header field value.

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.httpGet.path`** (string)

Path to access on the HTTP server.

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.httpGet.port`** (required)

Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.httpGet.scheme`** (string)

Scheme to use for connecting to the host. Defaults to HTTP.

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.initialDelaySeconds`** (integer)

Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.periodSeconds`** (integer)

How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.successThreshold`** (integer)

Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.tcpSocket`** (object)

TCPSocket specifies an action involving a TCP port.

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.tcpSocket.host`** (string)

Optional: Host name to connect to, defaults to the pod IP.

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.tcpSocket.port`** (required)

Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.terminationGracePeriodSeconds`** (integer)

Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset.

**`.spec.brokers[*].brokerConfig.containers[*].startupProbe.timeoutSeconds`** (integer)

Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
**`.spec.brokers[*].brokerConfig.containers[*].stdin`** (boolean)

Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false.

**`.spec.brokers[*].brokerConfig.containers[*].stdinOnce`** (boolean)

Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false.

**`.spec.brokers[*].brokerConfig.containers[*].terminationMessagePath`** (string)

Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated.

**`.spec.brokers[*].brokerConfig.containers[*].terminationMessagePolicy`** (string)

Indicates how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated.

**`.spec.brokers[*].brokerConfig.containers[*].tty`** (boolean)

Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false.
**`.spec.brokers[*].brokerConfig.containers[*].volumeDevices`** (array)

volumeDevices is the list of block devices to be used by the container.

**`.spec.brokers[*].brokerConfig.containers[*].volumeDevices[*]`** (object)

volumeDevice describes a mapping of a raw block device within a container.

**`.spec.brokers[*].brokerConfig.containers[*].volumeDevices[*].devicePath`** (string, required)

devicePath is the path inside of the container that the device will be mapped to.

**`.spec.brokers[*].brokerConfig.containers[*].volumeDevices[*].name`** (string, required)

name must match the name of a persistentVolumeClaim in the pod.
**`.spec.brokers[*].brokerConfig.containers[*].volumeMounts`** (array)

Pod volumes to mount into the container's filesystem. Cannot be updated.

**`.spec.brokers[*].brokerConfig.containers[*].volumeMounts[*]`** (object)

VolumeMount describes a mounting of a Volume within a container.

**`.spec.brokers[*].brokerConfig.containers[*].volumeMounts[*].mountPath`** (string, required)

Path within the container at which the volume should be mounted. Must not contain ':'.

**`.spec.brokers[*].brokerConfig.containers[*].volumeMounts[*].mountPropagation`** (string)

mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.

**`.spec.brokers[*].brokerConfig.containers[*].volumeMounts[*].name`** (string, required)

This must match the Name of a Volume.

**`.spec.brokers[*].brokerConfig.containers[*].volumeMounts[*].readOnly`** (boolean)

Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.

**`.spec.brokers[*].brokerConfig.containers[*].volumeMounts[*].subPath`** (string)

Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root).

**`.spec.brokers[*].brokerConfig.containers[*].volumeMounts[*].subPathExpr`** (string)

Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive.

**`.spec.brokers[*].brokerConfig.containers[*].workingDir`** (string)

Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.
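A volumeMount entry ties a named pod volume to a path inside the container. A minimal sketch (the volume name and paths are hypothetical):

```yaml
volumeMounts:
  - name: extra-config        # must match the Name of a Volume in the pod
    mountPath: /opt/kafka/extra
    readOnly: true
    subPath: broker-overrides # mount only this subdirectory of the volume
```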
**`.spec.brokers[*].brokerConfig.envs`** (array)

Envs defines environment variables for Kafka broker Pods. Adding the "+" prefix to the name prepends the value to that environment variable instead of overwriting it. Add the "+" suffix to append.

**`.spec.brokers[*].brokerConfig.envs[*]`** (object)

EnvVar represents an environment variable present in a Container.

**`.spec.brokers[*].brokerConfig.envs[*].name`** (string, required)

Name of the environment variable. Must be a C_IDENTIFIER.

**`.spec.brokers[*].brokerConfig.envs[*].value`** (string)

Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "".

**`.spec.brokers[*].brokerConfig.envs[*].valueFrom`** (object)

Source for the environment variable's value. Cannot be used if value is not empty.

**`.spec.brokers[*].brokerConfig.envs[*].valueFrom.configMapKeyRef`** (object)

Selects a key of a ConfigMap.

**`.spec.brokers[*].brokerConfig.envs[*].valueFrom.configMapKeyRef.key`** (string, required)

The key to select.

**`.spec.brokers[*].brokerConfig.envs[*].valueFrom.configMapKeyRef.name`** (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

**`.spec.brokers[*].brokerConfig.envs[*].valueFrom.configMapKeyRef.optional`** (boolean)

Specify whether the ConfigMap or its key must be defined.

**`.spec.brokers[*].brokerConfig.envs[*].valueFrom.fieldRef`** (object)

Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.

**`.spec.brokers[*].brokerConfig.envs[*].valueFrom.fieldRef.apiVersion`** (string)

Version of the schema the FieldPath is written in terms of, defaults to "v1".

**`.spec.brokers[*].brokerConfig.envs[*].valueFrom.fieldRef.fieldPath`** (string, required)

Path of the field to select in the specified API version.

**`.spec.brokers[*].brokerConfig.envs[*].valueFrom.resourceFieldRef`** (object)

Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.

**`.spec.brokers[*].brokerConfig.envs[*].valueFrom.resourceFieldRef.containerName`** (string)

Container name: required for volumes, optional for env vars.

**`.spec.brokers[*].brokerConfig.envs[*].valueFrom.resourceFieldRef.divisor`**

Specifies the output format of the exposed resources, defaults to "1".

**`.spec.brokers[*].brokerConfig.envs[*].valueFrom.resourceFieldRef.resource`** (string, required)

Required: resource to select.

**`.spec.brokers[*].brokerConfig.envs[*].valueFrom.secretKeyRef`** (object)

Selects a key of a secret in the pod's namespace.

**`.spec.brokers[*].brokerConfig.envs[*].valueFrom.secretKeyRef.key`** (string, required)

The key of the secret to select from. Must be a valid secret key.

**`.spec.brokers[*].brokerConfig.envs[*].valueFrom.secretKeyRef.name`** (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

**`.spec.brokers[*].brokerConfig.envs[*].valueFrom.secretKeyRef.optional`** (boolean)

Specify whether the Secret or its key must be defined.
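The "+" prepend/append semantics described for `envs` can be sketched as follows (the variable names, values, and Secret are hypothetical examples):

```yaml
brokerConfig:
  envs:
    # "+" prefix: prepend this value to the existing KAFKA_OPTS
    - name: "+KAFKA_OPTS"
      value: "-javaagent:/opt/example/agent.jar"
    # "+" suffix: append this value to the existing CLASSPATH
    - name: "CLASSPATH+"
      value: ":/opt/example/libs/*"
    # plain name: overwrite; the value may also come from a Secret
    - name: EXAMPLE_TOKEN
      valueFrom:
        secretKeyRef:
          name: example-secret
          key: token
```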
**`.spec.brokers[*].brokerConfig.image`** (string)

**`.spec.brokers[*].brokerConfig.imagePullSecrets`** (array)

**`.spec.brokers[*].brokerConfig.imagePullSecrets[*]`** (object)

LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace.

**`.spec.brokers[*].brokerConfig.imagePullSecrets[*].name`** (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
**`.spec.brokers[*].brokerConfig.initContainers`** (array)

InitContainers add extra initContainers to the Kafka broker pod.

**`.spec.brokers[*].brokerConfig.initContainers[*]`** (object)

A single application container that you want to run within a pod.

**`.spec.brokers[*].brokerConfig.initContainers[*].args`** (array)

Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell

**`.spec.brokers[*].brokerConfig.initContainers[*].args[*]`** (string)

**`.spec.brokers[*].brokerConfig.initContainers[*].command`** (array)

Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell

**`.spec.brokers[*].brokerConfig.initContainers[*].command[*]`** (string)

**`.spec.brokers[*].brokerConfig.initContainers[*].env`** (array)

List of environment variables to set in the container. Cannot be updated.

**`.spec.brokers[*].brokerConfig.initContainers[*].envFrom`** (array)

List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated.

**`.spec.brokers[*].brokerConfig.initContainers[*].envFrom[*]`** (object)

EnvFromSource represents the source of a set of ConfigMaps.

**`.spec.brokers[*].brokerConfig.initContainers[*].envFrom[*].configMapRef`** (object)

The ConfigMap to select from.
.spec.brokers[*].brokerConfig.initContainers[*].envFrom[*].configMapRef.name

+
+
+
+string + +
+ +
+

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.initContainers[*].envFrom[*].configMapRef.optional

+
+
+
+boolean + +
+ +
+

Specify whether the ConfigMap must be defined

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.initContainers[*].envFrom[*].prefix

+
+
+
+string + +
+ +
+

An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.initContainers[*].envFrom[*].secretRef

+
+
+
+object + +
+ +
+

The Secret to select from

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.initContainers[*].envFrom[*].secretRef.name

+
+
+
+string + +
+ +
+

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.initContainers[*].envFrom[*].secretRef.optional

+
+
+
+boolean + +
+ +
+

Specify whether the Secret must be defined

+ +
+ +
+
+ +
+
+

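An init container using the `command`, `args`, and `envFrom` fields described above might look like the following sketch (the image, ConfigMap name, and wait-for-ZooKeeper step are hypothetical):

```yaml
spec:
  brokers:
    - id: 0
      brokerConfig:
        initContainers:
          - name: wait-for-zookeeper              # hypothetical init step
            image: busybox:1.36
            command: ["sh", "-c"]                 # command is exec'd, so a shell is invoked explicitly
            args: ["until nc -z zookeeper 2181; do sleep 2; done"]
            envFrom:
              - prefix: INIT_                     # prepended to every key of the ConfigMap
                configMapRef:
                  name: init-config               # hypothetical ConfigMap
                  optional: true                  # do not fail if the ConfigMap is absent
```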
### .spec.brokers[*].brokerConfig.initContainers[*].env[*] (object)

EnvVar represents an environment variable present in a Container.

### .spec.brokers[*].brokerConfig.initContainers[*].env[*].name (string, required)

Name of the environment variable. Must be a C_IDENTIFIER.

### .spec.brokers[*].brokerConfig.initContainers[*].env[*].value (string)

Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to “”.

### .spec.brokers[*].brokerConfig.initContainers[*].env[*].valueFrom (object)

Source for the environment variable’s value. Cannot be used if value is not empty.

### .spec.brokers[*].brokerConfig.initContainers[*].env[*].valueFrom.configMapKeyRef (object)

Selects a key of a ConfigMap.

### .spec.brokers[*].brokerConfig.initContainers[*].env[*].valueFrom.configMapKeyRef.key (string, required)

The key to select.

### .spec.brokers[*].brokerConfig.initContainers[*].env[*].valueFrom.configMapKeyRef.name (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

### .spec.brokers[*].brokerConfig.initContainers[*].env[*].valueFrom.configMapKeyRef.optional (boolean)

Specify whether the ConfigMap or its key must be defined.

### .spec.brokers[*].brokerConfig.initContainers[*].env[*].valueFrom.fieldRef (object)

Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.

### .spec.brokers[*].brokerConfig.initContainers[*].env[*].valueFrom.fieldRef.apiVersion (string)

Version of the schema the FieldPath is written in terms of, defaults to “v1”.

### .spec.brokers[*].brokerConfig.initContainers[*].env[*].valueFrom.fieldRef.fieldPath (string, required)

Path of the field to select in the specified API version.

### .spec.brokers[*].brokerConfig.initContainers[*].env[*].valueFrom.resourceFieldRef (object)

Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.

### .spec.brokers[*].brokerConfig.initContainers[*].env[*].valueFrom.resourceFieldRef.containerName (string)

Container name: required for volumes, optional for env vars.

### .spec.brokers[*].brokerConfig.initContainers[*].env[*].valueFrom.resourceFieldRef.divisor (int or string)

Specifies the output format of the exposed resources, defaults to “1”.

### .spec.brokers[*].brokerConfig.initContainers[*].env[*].valueFrom.resourceFieldRef.resource (string, required)

Required: resource to select.

### .spec.brokers[*].brokerConfig.initContainers[*].env[*].valueFrom.secretKeyRef (object)

Selects a key of a secret in the pod’s namespace.

### .spec.brokers[*].brokerConfig.initContainers[*].env[*].valueFrom.secretKeyRef.key (string, required)

The key of the secret to select from. Must be a valid secret key.

### .spec.brokers[*].brokerConfig.initContainers[*].env[*].valueFrom.secretKeyRef.name (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

### .spec.brokers[*].brokerConfig.initContainers[*].env[*].valueFrom.secretKeyRef.optional (boolean)

Specify whether the Secret or its key must be defined.
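Besides `secretKeyRef` and `configMapKeyRef`, the `valueFrom` fields above can expose pod metadata and container resource limits as environment variables. A container-level fragment as a sketch (container name and image are hypothetical):

```yaml
initContainers:
  - name: setup                         # hypothetical container
    image: busybox:1.36
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name    # one of the supported pod fields
      - name: MEMORY_LIMIT_MB
        valueFrom:
          resourceFieldRef:
            resource: limits.memory
            divisor: 1Mi                # report the limit in mebibytes
```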
### .spec.brokers[*].brokerConfig.initContainers[*].image (string)

Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets.

### .spec.brokers[*].brokerConfig.initContainers[*].imagePullPolicy (string)

Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle (object)

Actions that the management system should take in response to container lifecycle events. Cannot be updated.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.postStart (object)

PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.postStart.exec (object)

Exec specifies the action to take.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.postStart.exec.command (array)

Command is the command line to execute inside the container, the working directory for the command is root (‘/’) in the container’s filesystem. The command is simply exec’d, it is not run inside a shell, so traditional shell instructions (‘|’, etc) won’t work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.postStart.exec.command[*] (string)

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.postStart.httpGet (object)

HTTPGet specifies the http request to perform.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.postStart.httpGet.host (string)

Host name to connect to, defaults to the pod IP. You probably want to set “Host” in httpHeaders instead.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.postStart.httpGet.httpHeaders (array)

Custom headers to set in the request. HTTP allows repeated headers.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.postStart.httpGet.httpHeaders[*] (object)

HTTPHeader describes a custom header to be used in HTTP probes.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.postStart.httpGet.httpHeaders[*].name (string, required)

The header field name.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.postStart.httpGet.httpHeaders[*].value (string, required)

The header field value.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.postStart.httpGet.path (string)

Path to access on the HTTP server.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.postStart.httpGet.port (int or string, required)

Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.postStart.httpGet.scheme (string)

Scheme to use for connecting to the host. Defaults to HTTP.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.postStart.tcpSocket (object)

Deprecated. TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a tcp handler is specified.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.postStart.tcpSocket.host (string)

Optional: Host name to connect to, defaults to the pod IP.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.postStart.tcpSocket.port (int or string, required)

Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
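A postStart hook using the exec fields above could be sketched as follows (the script path is hypothetical; remember the command is exec'd, so a shell must be invoked explicitly):

```yaml
lifecycle:
  postStart:
    exec:
      # runs immediately after container creation; failure restarts the container
      command: ["sh", "-c", "/scripts/post-start.sh"]
```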
### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.preStop (object)

PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod’s termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod’s termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.preStop.exec (object)

Exec specifies the action to take.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.preStop.exec.command (array)

Command is the command line to execute inside the container, the working directory for the command is root (‘/’) in the container’s filesystem. The command is simply exec’d, it is not run inside a shell, so traditional shell instructions (‘|’, etc) won’t work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.preStop.exec.command[*] (string)

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.preStop.httpGet (object)

HTTPGet specifies the http request to perform.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.preStop.httpGet.host (string)

Host name to connect to, defaults to the pod IP. You probably want to set “Host” in httpHeaders instead.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.preStop.httpGet.httpHeaders (array)

Custom headers to set in the request. HTTP allows repeated headers.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.preStop.httpGet.httpHeaders[*] (object)

HTTPHeader describes a custom header to be used in HTTP probes.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.preStop.httpGet.httpHeaders[*].name (string, required)

The header field name.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.preStop.httpGet.httpHeaders[*].value (string, required)

The header field value.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.preStop.httpGet.path (string)

Path to access on the HTTP server.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.preStop.httpGet.port (int or string, required)

Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.preStop.httpGet.scheme (string)

Scheme to use for connecting to the host. Defaults to HTTP.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.preStop.tcpSocket (object)

Deprecated. TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a tcp handler is specified.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.preStop.tcpSocket.host (string)

Optional: Host name to connect to, defaults to the pod IP.

### .spec.brokers[*].brokerConfig.initContainers[*].lifecycle.preStop.tcpSocket.port (int or string, required)

Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
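A common use of the preStop fields above is to delay termination so in-flight work can drain; because the handler is exec'd without a shell, the `sleep` binary must exist in the image:

```yaml
lifecycle:
  preStop:
    exec:
      command: ["sleep", "10"]   # hook runs (and here waits 10s) before the container receives SIGTERM
```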
### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe (object)

Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.exec (object)

Exec specifies the action to take.

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.exec.command (array)

Command is the command line to execute inside the container, the working directory for the command is root (‘/’) in the container’s filesystem. The command is simply exec’d, it is not run inside a shell, so traditional shell instructions (‘|’, etc) won’t work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.exec.command[*] (string)

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.failureThreshold (integer)

Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.grpc (object)

GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate.

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.grpc.port (integer, required)

Port number of the gRPC service. Number must be in the range 1 to 65535.

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.grpc.service (string)

Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC.

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.httpGet (object)

HTTPGet specifies the http request to perform.

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.httpGet.host (string)

Host name to connect to, defaults to the pod IP. You probably want to set “Host” in httpHeaders instead.

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.httpGet.httpHeaders (array)

Custom headers to set in the request. HTTP allows repeated headers.

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.httpGet.httpHeaders[*] (object)

HTTPHeader describes a custom header to be used in HTTP probes.

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.httpGet.httpHeaders[*].name (string, required)

The header field name.

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.httpGet.httpHeaders[*].value (string, required)

The header field value.

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.httpGet.path (string)

Path to access on the HTTP server.

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.httpGet.port (int or string, required)

Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.httpGet.scheme (string)

Scheme to use for connecting to the host. Defaults to HTTP.

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.initialDelaySeconds (integer)

Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.periodSeconds (integer)

How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.successThreshold (integer)

Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.tcpSocket (object)

TCPSocket specifies an action involving a TCP port.

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.tcpSocket.host (string)

Optional: Host name to connect to, defaults to the pod IP.

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.tcpSocket.port (int or string, required)

Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.terminationGracePeriodSeconds (integer)

Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod’s terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset.

### .spec.brokers[*].brokerConfig.initContainers[*].livenessProbe.timeoutSeconds (integer)

Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
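The livenessProbe timing fields above interact: with the values in this sketch, the kubelet waits 10 seconds after container start, probes every 10 seconds, times each attempt out after 1 second, and restarts the container after 3 consecutive failures (the health endpoint path and port are hypothetical):

```yaml
livenessProbe:
  httpGet:
    path: /healthz        # hypothetical health endpoint
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 1
  failureThreshold: 3     # 3 failed probes in a row trigger a container restart
```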
### .spec.brokers[*].brokerConfig.initContainers[*].name (string, required)

Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.

### .spec.brokers[*].brokerConfig.initContainers[*].ports (array)

List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default “0.0.0.0” address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information see https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated.

### .spec.brokers[*].brokerConfig.initContainers[*].ports[*] (object)

ContainerPort represents a network port in a single container.

### .spec.brokers[*].brokerConfig.initContainers[*].ports[*].containerPort (integer, required)

Number of port to expose on the pod’s IP address. This must be a valid port number, 0 < x < 65536.

### .spec.brokers[*].brokerConfig.initContainers[*].ports[*].hostIP (string)

What host IP to bind the external port to.

### .spec.brokers[*].brokerConfig.initContainers[*].ports[*].hostPort (integer)

Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this.

### .spec.brokers[*].brokerConfig.initContainers[*].ports[*].name (string)

If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services.

### .spec.brokers[*].brokerConfig.initContainers[*].ports[*].protocol (string)

Protocol for port. Must be UDP, TCP, or SCTP. Defaults to “TCP”.
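A `ports` entry using the fields above might look like this sketch (the port name and number are hypothetical; named ports must be valid IANA_SVC_NAMEs and unique within the pod):

```yaml
ports:
  - name: metrics         # hypothetical port name, referencable from a Service
    containerPort: 9020
    protocol: TCP         # the default when omitted
```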
### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe (object)

Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.exec (object)

Exec specifies the action to take.

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.exec.command (array)

Command is the command line to execute inside the container, the working directory for the command is root (‘/’) in the container’s filesystem. The command is simply exec’d, it is not run inside a shell, so traditional shell instructions (‘|’, etc) won’t work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.exec.command[*] (string)

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.failureThreshold (integer)

Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.grpc (object)

GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate.

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.grpc.port (integer, required)

Port number of the gRPC service. Number must be in the range 1 to 65535.

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.grpc.service (string)

Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC.

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.httpGet (object)

HTTPGet specifies the http request to perform.

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.httpGet.host (string)

Host name to connect to, defaults to the pod IP. You probably want to set “Host” in httpHeaders instead.

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.httpGet.httpHeaders (array)

Custom headers to set in the request. HTTP allows repeated headers.

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.httpGet.httpHeaders[*] (object)

HTTPHeader describes a custom header to be used in HTTP probes.

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.httpGet.httpHeaders[*].name (string, required)

The header field name.

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.httpGet.httpHeaders[*].value (string, required)

The header field value.

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.httpGet.path (string)

Path to access on the HTTP server.

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.httpGet.port (int or string, required)

Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.httpGet.scheme (string)

Scheme to use for connecting to the host. Defaults to HTTP.

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.initialDelaySeconds (integer)

Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.periodSeconds (integer)

How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.successThreshold (integer)

Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.tcpSocket (object)

TCPSocket specifies an action involving a TCP port.

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.tcpSocket.host (string)

Optional: Host name to connect to, defaults to the pod IP.

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.tcpSocket.port (int or string, required)

Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.terminationGracePeriodSeconds (integer)

Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod’s terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset.

### .spec.brokers[*].brokerConfig.initContainers[*].readinessProbe.timeoutSeconds (integer)

Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
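Unlike a failing liveness probe, a failing readiness probe does not restart the container; it only removes the pod from Service endpoints. A simple TCP check using the tcpSocket fields above (the port is hypothetical):

```yaml
readinessProbe:
  tcpSocket:
    port: 9092            # succeeds once the port accepts TCP connections
  initialDelaySeconds: 5
  periodSeconds: 10
  successThreshold: 1     # back in service after a single successful probe
```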
### .spec.brokers[*].brokerConfig.initContainers[*].resources (object)

Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

### .spec.brokers[*].brokerConfig.initContainers[*].resources.limits (object)

Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

### .spec.brokers[*].brokerConfig.initContainers[*].resources.requests (object)

Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
.spec.brokers[*].brokerConfig.initContainers[*].securityContext

Type: object

SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.allowPrivilegeEscalation

Type: boolean

AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN. Note that this field cannot be set when spec.os.name is windows.

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.capabilities

Type: object

The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows.

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.capabilities.add

Type: array

Added capabilities

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.capabilities.add[*]

Type: string

Capability represents a POSIX capabilities type

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.capabilities.drop

Type: array

Removed capabilities

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.capabilities.drop[*]

Type: string

Capability represents a POSIX capabilities type

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.privileged

Type: boolean

Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows.

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.procMount

Type: string

procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows.

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.readOnlyRootFilesystem

Type: boolean

Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows.

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.runAsGroup

Type: integer

The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.runAsNonRoot

Type: boolean

Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.runAsUser

Type: integer

The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.seLinuxOptions

Type: object

The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.seLinuxOptions.level

Type: string

Level is SELinux level label that applies to the container.

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.seLinuxOptions.role

Type: string

Role is a SELinux role label that applies to the container.

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.seLinuxOptions.type

Type: string

Type is a SELinux type label that applies to the container.

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.seLinuxOptions.user

Type: string

User is a SELinux user label that applies to the container.

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.seccompProfile

Type: object

The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows.

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.seccompProfile.localhostProfile

Type: string

localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet’s configured seccomp profile location. Must only be set if type is “Localhost”.

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.seccompProfile.type

Type: string (required)

type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used; RuntimeDefault - the container runtime default profile should be used; Unconfined - no profile should be applied.

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.windowsOptions

Type: object

The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.windowsOptions.gmsaCredentialSpec

Type: string

GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field.

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.windowsOptions.gmsaCredentialSpecName

Type: string

GMSACredentialSpecName is the name of the GMSA credential spec to use.

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.windowsOptions.hostProcess

Type: boolean

HostProcess determines if a container should be run as a ‘Host Process’ container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod’s containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.

.spec.brokers[*].brokerConfig.initContainers[*].securityContext.windowsOptions.runAsUserName

Type: string

The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
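Putting several of the securityContext fields above together, a restrictive container security context might look like this (all values are illustrative, not recommendations):

```yaml
# Illustrative sketch of a locked-down container securityContext.
securityContext:
  runAsNonRoot: true
  runAsUser: 1001            # illustrative non-root UID
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop:
      - ALL                  # drop all POSIX capabilities
  seccompProfile:
    type: RuntimeDefault     # Localhost | RuntimeDefault | Unconfined
```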

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe

Type: object

StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod’s lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.exec

Type: object

Exec specifies the action to take.

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.exec.command

Type: array

Command is the command line to execute inside the container, the working directory for the command is root (‘/’) in the container’s filesystem. The command is simply exec’d, it is not run inside a shell, so traditional shell instructions (‘|’, etc) won’t work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.exec.command[*]

Type: string

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.failureThreshold

Type: integer

Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.grpc

Type: object

GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate.

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.grpc.port

Type: integer (required)

Port number of the gRPC service. Number must be in the range 1 to 65535.

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.grpc.service

Type: string

Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC.

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.httpGet

Type: object

HTTPGet specifies the http request to perform.

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.httpGet.host

Type: string

Host name to connect to, defaults to the pod IP. You probably want to set “Host” in httpHeaders instead.

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.httpGet.httpHeaders

Type: array

Custom headers to set in the request. HTTP allows repeated headers.

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.httpGet.httpHeaders[*]

Type: object

HTTPHeader describes a custom header to be used in HTTP probes

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.httpGet.httpHeaders[*].name

Type: string (required)

The header field name

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.httpGet.httpHeaders[*].value

Type: string (required)

The header field value

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.httpGet.path

Type: string

Path to access on the HTTP server.

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.httpGet.port

Type: integer or string (required)

Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.httpGet.scheme

Type: string

Scheme to use for connecting to the host. Defaults to HTTP.

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.initialDelaySeconds

Type: integer

Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.periodSeconds

Type: integer

How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.successThreshold

Type: integer

Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.tcpSocket

Type: object

TCPSocket specifies an action involving a TCP port.

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.tcpSocket.host

Type: string

Optional: Host name to connect to, defaults to the pod IP.

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.tcpSocket.port

Type: integer or string (required)

Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.terminationGracePeriodSeconds

Type: integer

Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod’s terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling the ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset.

.spec.brokers[*].brokerConfig.initContainers[*].startupProbe.timeoutSeconds

Type: integer

Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
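An HTTP startup probe using the fields above could be sketched as follows (path, port, and thresholds are illustrative; as with readiness probes, Kubernetes does not honor probes on init containers, so this shape applies where the same container schema is used for regular containers):

```yaml
# Illustrative sketch - path, port, and thresholds are hypothetical.
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
    scheme: HTTP          # defaults to HTTP
  periodSeconds: 10       # defaults to 10 seconds
  failureThreshold: 30    # tolerates up to 30 probe periods of startup time
```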

.spec.brokers[*].brokerConfig.initContainers[*].stdin

Type: boolean

Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false.

.spec.brokers[*].brokerConfig.initContainers[*].stdinOnce

Type: boolean

Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false.

.spec.brokers[*].brokerConfig.initContainers[*].terminationMessagePath

Type: string

Optional: Path at which the file to which the container’s termination message will be written is mounted into the container’s filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated.

.spec.brokers[*].brokerConfig.initContainers[*].terminationMessagePolicy

Type: string

Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated.

.spec.brokers[*].brokerConfig.initContainers[*].tty

Type: boolean

Whether this container should allocate a TTY for itself, also requires ‘stdin’ to be true. Default is false.

.spec.brokers[*].brokerConfig.initContainers[*].volumeDevices

Type: array

volumeDevices is the list of block devices to be used by the container.

.spec.brokers[*].brokerConfig.initContainers[*].volumeDevices[*]

Type: object

volumeDevice describes a mapping of a raw block device within a container.

.spec.brokers[*].brokerConfig.initContainers[*].volumeDevices[*].devicePath

Type: string (required)

devicePath is the path inside of the container that the device will be mapped to.

.spec.brokers[*].brokerConfig.initContainers[*].volumeDevices[*].name

Type: string (required)

name must match the name of a persistentVolumeClaim in the pod

.spec.brokers[*].brokerConfig.initContainers[*].volumeMounts

Type: array

Pod volumes to mount into the container’s filesystem. Cannot be updated.

.spec.brokers[*].brokerConfig.initContainers[*].volumeMounts[*]

Type: object

VolumeMount describes a mounting of a Volume within a container.

.spec.brokers[*].brokerConfig.initContainers[*].volumeMounts[*].mountPath

Type: string (required)

Path within the container at which the volume should be mounted. Must not contain ‘:’.

.spec.brokers[*].brokerConfig.initContainers[*].volumeMounts[*].mountPropagation

Type: string

mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.

.spec.brokers[*].brokerConfig.initContainers[*].volumeMounts[*].name

Type: string (required)

This must match the Name of a Volume.

.spec.brokers[*].brokerConfig.initContainers[*].volumeMounts[*].readOnly

Type: boolean

Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.

.spec.brokers[*].brokerConfig.initContainers[*].volumeMounts[*].subPath

Type: string

Path within the volume from which the container’s volume should be mounted. Defaults to “” (volume’s root).

.spec.brokers[*].brokerConfig.initContainers[*].volumeMounts[*].subPathExpr

Type: string

Expanded path within the volume from which the container’s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container’s environment. Defaults to “” (volume’s root). SubPathExpr and SubPath are mutually exclusive.
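As a sketch of the volumeMounts fields above, an init container could mount a pod volume read-only like this (the volume name, mount path, and subPath are illustrative):

```yaml
# Illustrative sketch - volume name, mountPath, and subPath are hypothetical.
volumeMounts:
  - name: extra-config            # must match the Name of a Volume in the pod
    mountPath: /mnt/config        # must not contain ':'
    readOnly: true                # defaults to false
    subPath: broker.properties    # optional; "" (the default) mounts the volume root
```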

.spec.brokers[*].brokerConfig.initContainers[*].workingDir

Type: string

Container’s working directory. If not specified, the container runtime’s default will be used, which might be configured in the container image. Cannot be updated.

.spec.brokers[*].brokerConfig.kafkaHeapOpts

Type: string

.spec.brokers[*].brokerConfig.kafkaJvmPerfOpts

Type: string
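The two JVM option strings above presumably map to Kafka’s KAFKA_HEAP_OPTS and KAFKA_JVM_PERFORMANCE_OPTS settings; a broker-level sketch (the flag values are illustrative, not tuning recommendations) might look like:

```yaml
# Illustrative sketch - JVM flag values are hypothetical examples.
brokerConfig:
  kafkaHeapOpts: "-Xms2g -Xmx4g"
  kafkaJvmPerfOpts: "-XX:+UseG1GC -XX:MaxGCPauseMillis=20"
```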

.spec.brokers[*].brokerConfig.log4jConfig

Type: string

Override for the default log4j configuration

.spec.brokers[*].brokerConfig.metricsReporterImage

Type: string

.spec.brokers[*].brokerConfig.networkConfig

Type: object

Network throughput information in kB/s used by Cruise Control to determine broker network capacity. By default it is set to 125000 which means 1Gbit/s in network throughput.

.spec.brokers[*].brokerConfig.networkConfig.incomingNetworkThroughPut

Type: string

.spec.brokers[*].brokerConfig.networkConfig.outgoingNetworkThroughPut

Type: string
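Per the description above, the throughput fields are strings interpreted in kB/s, so the default of 125000 corresponds to roughly 1 Gbit/s. A sketch with explicit values:

```yaml
# Illustrative sketch - values are in kB/s; 125000 kB/s is about 1 Gbit/s.
brokerConfig:
  networkConfig:
    incomingNetworkThroughPut: "125000"
    outgoingNetworkThroughPut: "125000"
```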

.spec.brokers[*].brokerConfig.nodePortExternalIP

Type: object

External listeners that use NodePort type service to expose the broker outside the Kubernetes cluster and their external IP to advertise Kafka broker external listener. The external IP value is ignored in case of external listeners that use LoadBalancer type service to expose the broker outside the Kubernetes cluster. Also, when “hostnameOverride” field of the external listener is set it will override the broker’s external listener advertise address according to the description of the “hostnameOverride” field.
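A sketch of the mapping above, assuming an external listener named `external1` (the listener name and IP address are illustrative):

```yaml
# Illustrative sketch - the key is the external listener name,
# the value is the IP to advertise for this broker.
brokerConfig:
  nodePortExternalIP:
    external1: "192.0.2.10"
```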

.spec.brokers[*].brokerConfig.nodeSelector

Type: object

.spec.brokers[*].brokerConfig.podSecurityContext

Type: object

PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext.

.spec.brokers[*].brokerConfig.podSecurityContext.fsGroup

Type: integer

A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup. 2. The setgid bit is set (new files created in the volume will be owned by FSGroup). 3. The permission bits are OR’d with rw-rw----. If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows.

.spec.brokers[*].brokerConfig.podSecurityContext.fsGroupChangePolicy

Type: string

fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership (and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are “OnRootMismatch” and “Always”. If not specified, “Always” is used. Note that this field cannot be set when spec.os.name is windows.

.spec.brokers[*].brokerConfig.podSecurityContext.runAsGroup

Type: integer

The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.

.spec.brokers[*].brokerConfig.podSecurityContext.runAsNonRoot

Type: boolean

Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.

.spec.brokers[*].brokerConfig.podSecurityContext.runAsUser

Type: integer

The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.

.spec.brokers[*].brokerConfig.podSecurityContext.seLinuxOptions

Type: object

The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.

.spec.brokers[*].brokerConfig.podSecurityContext.seLinuxOptions.level

Type: string

Level is SELinux level label that applies to the container.

.spec.brokers[*].brokerConfig.podSecurityContext.seLinuxOptions.role

Type: string

Role is a SELinux role label that applies to the container.

.spec.brokers[*].brokerConfig.podSecurityContext.seLinuxOptions.type

Type: string

Type is a SELinux type label that applies to the container.

.spec.brokers[*].brokerConfig.podSecurityContext.seLinuxOptions.user

Type: string

User is a SELinux user label that applies to the container.

.spec.brokers[*].brokerConfig.podSecurityContext.seccompProfile

Type: object

The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows.

.spec.brokers[*].brokerConfig.podSecurityContext.seccompProfile.localhostProfile

Type: string

localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet’s configured seccomp profile location. Must only be set if type is “Localhost”.

.spec.brokers[*].brokerConfig.podSecurityContext.seccompProfile.type

Type: string (required)

type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used; RuntimeDefault - the container runtime default profile should be used; Unconfined - no profile should be applied.

.spec.brokers[*].brokerConfig.podSecurityContext.supplementalGroups

Type: array

A list of groups applied to the first process run in each container, in addition to the container’s primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows.

.spec.brokers[*].brokerConfig.podSecurityContext.supplementalGroups[*]

Type: integer

.spec.brokers[*].brokerConfig.podSecurityContext.sysctls

Type: array

Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows.

.spec.brokers[*].brokerConfig.podSecurityContext.sysctls[*]

Type: object

Sysctl defines a kernel parameter to be set

.spec.brokers[*].brokerConfig.podSecurityContext.sysctls[*].name

Type: string (required)

Name of a property to set

.spec.brokers[*].brokerConfig.podSecurityContext.sysctls[*].value

Type: string (required)

Value of a property to set

.spec.brokers[*].brokerConfig.podSecurityContext.windowsOptions

Type: object

The Windows specific settings applied to all containers. If unspecified, the options within a container’s SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.

.spec.brokers[*].brokerConfig.podSecurityContext.windowsOptions.gmsaCredentialSpec

Type: string

GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field.

.spec.brokers[*].brokerConfig.podSecurityContext.windowsOptions.gmsaCredentialSpecName

Type: string

GMSACredentialSpecName is the name of the GMSA credential spec to use.

.spec.brokers[*].brokerConfig.podSecurityContext.windowsOptions.hostProcess

Type: boolean

HostProcess determines if a container should be run as a ‘Host Process’ container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod’s containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.

.spec.brokers[*].brokerConfig.podSecurityContext.windowsOptions.runAsUserName

Type: string

The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
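A pod-level security context for a broker pod, combining several of the podSecurityContext fields above, could be sketched as follows (the UID/GID values are illustrative):

```yaml
# Illustrative sketch - UID/GID values are hypothetical.
brokerConfig:
  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 1001
    fsGroup: 1001                         # volumes become group-owned by this GID
    fsGroupChangePolicy: OnRootMismatch   # or "Always" (the default)
    seccompProfile:
      type: RuntimeDefault
```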

.spec.brokers[*].brokerConfig.priorityClassName

+
+
+
+string + +
+ +
+

PriorityClassName specifies the priority class name for a broker pod(s). If specified, the PriorityClass resource with this PriorityClassName must be created beforehand. If not specified, the broker pods’ priority is default to zero.

+ +
+ +
+
+ +
+
+

`.spec.brokers[*].brokerConfig.resourceRequirements`

Type: object

ResourceRequirements describes the compute resource requirements.

`.spec.brokers[*].brokerConfig.resourceRequirements.limits`

Type: object

Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

`.spec.brokers[*].brokerConfig.resourceRequirements.requests`

Type: object

Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
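As a sketch, requests and limits can be set per broker for the Kafka container; the quantities below are placeholders, not recommendations:

```yaml
spec:
  brokers:
    - id: 0
      brokerConfig:
        resourceRequirements:
          requests:
            cpu: "1"       # placeholder values; size for your workload
            memory: 2Gi
          limits:
            cpu: "2"
            memory: 4Gi
```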

`.spec.brokers[*].brokerConfig.securityContext`

Type: object

SecurityContext allows setting the security context for the kafka container.

`.spec.brokers[*].brokerConfig.securityContext.allowPrivilegeEscalation`

Type: boolean

AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN. Note that this field cannot be set when spec.os.name is windows.

`.spec.brokers[*].brokerConfig.securityContext.capabilities`

Type: object

The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows.

`.spec.brokers[*].brokerConfig.securityContext.capabilities.add`

Type: array

Added capabilities.

`.spec.brokers[*].brokerConfig.securityContext.capabilities.add[*]`

Type: string

Capability represents a POSIX capabilities type.

`.spec.brokers[*].brokerConfig.securityContext.capabilities.drop`

Type: array

Removed capabilities.

`.spec.brokers[*].brokerConfig.securityContext.capabilities.drop[*]`

Type: string

Capability represents a POSIX capabilities type.

`.spec.brokers[*].brokerConfig.securityContext.privileged`

Type: boolean

Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows.

`.spec.brokers[*].brokerConfig.securityContext.procMount`

Type: string

procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows.

`.spec.brokers[*].brokerConfig.securityContext.readOnlyRootFilesystem`

Type: boolean

Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows.

`.spec.brokers[*].brokerConfig.securityContext.runAsGroup`

Type: integer

The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.

`.spec.brokers[*].brokerConfig.securityContext.runAsNonRoot`

Type: boolean

Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.

`.spec.brokers[*].brokerConfig.securityContext.runAsUser`

Type: integer

The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.

`.spec.brokers[*].brokerConfig.securityContext.seLinuxOptions`

Type: object

The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.

`.spec.brokers[*].brokerConfig.securityContext.seLinuxOptions.level`

Type: string

Level is the SELinux level label that applies to the container.

`.spec.brokers[*].brokerConfig.securityContext.seLinuxOptions.role`

Type: string

Role is a SELinux role label that applies to the container.

`.spec.brokers[*].brokerConfig.securityContext.seLinuxOptions.type`

Type: string

Type is a SELinux type label that applies to the container.

`.spec.brokers[*].brokerConfig.securityContext.seLinuxOptions.user`

Type: string

User is a SELinux user label that applies to the container.

`.spec.brokers[*].brokerConfig.securityContext.seccompProfile`

Type: object

The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows.

`.spec.brokers[*].brokerConfig.securityContext.seccompProfile.localhostProfile`

Type: string

localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet’s configured seccomp profile location. Must only be set if type is “Localhost”.

`.spec.brokers[*].brokerConfig.securityContext.seccompProfile.type`

Type: string (required)

type indicates which kind of seccomp profile will be applied. Valid options are: Localhost (a profile defined in a file on the node should be used), RuntimeDefault (the container runtime default profile should be used), Unconfined (no profile should be applied).
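Put together, a hardened security context for the Kafka container might look like the sketch below; all values are illustrative, not defaults:

```yaml
spec:
  brokers:
    - id: 0
      brokerConfig:
        securityContext:
          runAsNonRoot: true
          allowPrivilegeEscalation: false
          capabilities:
            drop:
              - ALL
          seccompProfile:
            type: RuntimeDefault
```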

`.spec.brokers[*].brokerConfig.securityContext.windowsOptions`

Type: object

The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.

`.spec.brokers[*].brokerConfig.securityContext.windowsOptions.gmsaCredentialSpec`

Type: string

GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field.

`.spec.brokers[*].brokerConfig.securityContext.windowsOptions.gmsaCredentialSpecName`

Type: string

GMSACredentialSpecName is the name of the GMSA credential spec to use.

`.spec.brokers[*].brokerConfig.securityContext.windowsOptions.hostProcess`

Type: boolean

HostProcess determines if a container should be run as a ‘Host Process’ container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod’s containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.

`.spec.brokers[*].brokerConfig.securityContext.windowsOptions.runAsUserName`

Type: string

The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.

`.spec.brokers[*].brokerConfig.serviceAccountName`

Type: string

`.spec.brokers[*].brokerConfig.storageConfigs`

Type: array

`.spec.brokers[*].brokerConfig.storageConfigs[*]`

Type: object

StorageConfig defines the broker storage configuration.

`.spec.brokers[*].brokerConfig.storageConfigs[*].emptyDir`

Type: object

If set, https://kubernetes.io/docs/concepts/storage/volumes#emptydir is used as storage for Kafka broker log dirs. Using emptyDir as Kafka broker storage is useful in development environments where data loss is not a concern, as data stored on emptyDir-backed storage is lost at pod restarts. Either pvcSpec or emptyDir has to be set. When both pvcSpec and emptyDir fields are set, pvcSpec is used by default.

`.spec.brokers[*].brokerConfig.storageConfigs[*].emptyDir.medium`

Type: string

medium represents what type of storage medium should back this directory. The default is “” which means to use the node’s default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir

`.spec.brokers[*].brokerConfig.storageConfigs[*].emptyDir.sizeLimit`

sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: http://kubernetes.io/docs/user-guide/volumes#emptydir

`.spec.brokers[*].brokerConfig.storageConfigs[*].mountPath`

Type: string (required)

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec`

Type: object

If set, https://kubernetes.io/docs/concepts/storage/volumes/#persistentvolumeclaim is used as storage for Kafka broker log dirs. Either pvcSpec or emptyDir has to be set. When both pvcSpec and emptyDir fields are set, pvcSpec is used by default.
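For instance, a broker log directory backed by a PersistentVolumeClaim can be declared as follows; the mount path, storage class, and size are placeholders:

```yaml
spec:
  brokers:
    - id: 0
      brokerConfig:
        storageConfigs:
          - mountPath: /kafka-logs
            pvcSpec:
              accessModes:
                - ReadWriteOnce
              storageClassName: standard   # placeholder StorageClass
              resources:
                requests:
                  storage: 10Gi
```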

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.accessModes`

Type: array

accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.accessModes[*]`

Type: string

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.dataSource`

Type: object

dataSource field can be used to specify either an existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) or an existing PVC (PersistentVolumeClaim). If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.dataSource.apiGroup`

Type: string

APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.dataSource.kind`

Type: string (required)

Kind is the type of resource being referenced.

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.dataSource.name`

Type: string (required)

Name is the name of resource being referenced.

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.dataSourceRef`

Type: object

dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: while DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects; and while DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.dataSourceRef.apiGroup`

Type: string

APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.dataSourceRef.kind`

Type: string (required)

Kind is the type of resource being referenced.

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.dataSourceRef.name`

Type: string (required)

Name is the name of resource being referenced.

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.resources`

Type: object

resources represents the minimum resources the volume should have. If the RecoverVolumeExpansionFailure feature is enabled, users are allowed to specify resource requirements that are lower than the previous value but must still be higher than the capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.resources.limits`

Type: object

Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.resources.requests`

Type: object

Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.selector`

Type: object

selector is a label query over volumes to consider for binding.

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.selector.matchExpressions`

Type: array

matchExpressions is a list of label selector requirements. The requirements are ANDed.

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.selector.matchExpressions[*]`

Type: object

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.selector.matchExpressions[*].key`

Type: string (required)

key is the label key that the selector applies to.

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.selector.matchExpressions[*].operator`

Type: string (required)

operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.selector.matchExpressions[*].values`

Type: array

values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.selector.matchExpressions[*].values[*]`

Type: string

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.selector.matchLabels`

Type: object

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.storageClassName`

Type: string

storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.volumeMode`

Type: string

volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec.

`.spec.brokers[*].brokerConfig.storageConfigs[*].pvcSpec.volumeName`

Type: string

volumeName is the binding reference to the PersistentVolume backing this claim.

`.spec.brokers[*].brokerConfig.terminationGracePeriodSeconds`

Type: integer

TerminationGracePeriod defines the pod termination grace period.

`.spec.brokers[*].brokerConfig.tolerations`

Type: array

`.spec.brokers[*].brokerConfig.tolerations[*]`

Type: object

The pod this Toleration is attached to tolerates any taint that matches the triple `<key,value,effect>` using the matching operator `<operator>`.

`.spec.brokers[*].brokerConfig.tolerations[*].effect`

Type: string

Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.

`.spec.brokers[*].brokerConfig.tolerations[*].key`

Type: string

Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.

`.spec.brokers[*].brokerConfig.tolerations[*].operator`

Type: string

Operator represents a key’s relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.

`.spec.brokers[*].brokerConfig.tolerations[*].tolerationSeconds`

Type: integer

TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.

`.spec.brokers[*].brokerConfig.tolerations[*].value`

Type: string

Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.
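For example, to let broker pods schedule onto nodes tainted for Kafka workloads; the taint key and value below are illustrative:

```yaml
spec:
  brokers:
    - id: 0
      brokerConfig:
        tolerations:
          - key: dedicated        # example taint key
            operator: Equal
            value: kafka
            effect: NoSchedule
```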

`.spec.brokers[*].brokerConfig.volumeMounts`

Type: array

VolumeMounts define some extra Kubernetes VolumeMounts for the Kafka broker Pods.

`.spec.brokers[*].brokerConfig.volumeMounts[*]`

Type: object

VolumeMount describes a mounting of a Volume within a container.

`.spec.brokers[*].brokerConfig.volumeMounts[*].mountPath`

Type: string (required)

Path within the container at which the volume should be mounted. Must not contain ‘:’.

`.spec.brokers[*].brokerConfig.volumeMounts[*].mountPropagation`

Type: string

mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.

`.spec.brokers[*].brokerConfig.volumeMounts[*].name`

Type: string (required)

This must match the Name of a Volume.

`.spec.brokers[*].brokerConfig.volumeMounts[*].readOnly`

Type: boolean

Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.

`.spec.brokers[*].brokerConfig.volumeMounts[*].subPath`

Type: string

Path within the volume from which the container’s volume should be mounted. Defaults to “” (volume’s root).

`.spec.brokers[*].brokerConfig.volumeMounts[*].subPathExpr`

Type: string

Expanded path within the volume from which the container’s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container’s environment. Defaults to “” (volume’s root). SubPathExpr and SubPath are mutually exclusive.
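Extra volumes and their mounts go hand in hand: each volumeMount must name a declared volume. The ConfigMap referenced below is hypothetical:

```yaml
spec:
  brokers:
    - id: 0
      brokerConfig:
        volumes:
          - name: extra-config
            configMap:
              name: kafka-extra-config   # hypothetical ConfigMap
        volumeMounts:
          - name: extra-config           # must match the volume name above
            mountPath: /etc/kafka/extra
            readOnly: true
```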

`.spec.brokers[*].brokerConfig.volumes`

Type: array

Volumes define some extra Kubernetes Volumes for the Kafka broker Pods.

`.spec.brokers[*].brokerConfig.volumes[*]`

Type: object

Volume represents a named volume in a pod that may be accessed by any container in the pod.

`.spec.brokers[*].brokerConfig.volumes[*].awsElasticBlockStore`

Type: object

awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet’s host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore

`.spec.brokers[*].brokerConfig.volumes[*].awsElasticBlockStore.fsType`

Type: string

fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore

`.spec.brokers[*].brokerConfig.volumes[*].awsElasticBlockStore.partition`

Type: integer

partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as “1”. Similarly, the volume partition for /dev/sda is “0” (or you can leave the property empty).

`.spec.brokers[*].brokerConfig.volumes[*].awsElasticBlockStore.readOnly`

Type: boolean

readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore

`.spec.brokers[*].brokerConfig.volumes[*].awsElasticBlockStore.volumeID`

Type: string (required)

volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore

`.spec.brokers[*].brokerConfig.volumes[*].azureDisk`

Type: object

azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod.

`.spec.brokers[*].brokerConfig.volumes[*].azureDisk.cachingMode`

Type: string

cachingMode is the Host Caching mode: None, Read Only, Read Write.

`.spec.brokers[*].brokerConfig.volumes[*].azureDisk.diskName`

Type: string (required)

diskName is the Name of the data disk in the blob storage.

`.spec.brokers[*].brokerConfig.volumes[*].azureDisk.diskURI`

Type: string (required)

diskURI is the URI of the data disk in the blob storage.

`.spec.brokers[*].brokerConfig.volumes[*].azureDisk.fsType`

Type: string

fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified.

`.spec.brokers[*].brokerConfig.volumes[*].azureDisk.kind`

Type: string

kind expected values are Shared: multiple blob disks per storage account; Dedicated: single blob disk per storage account; Managed: azure managed data disk (only in managed availability set). Defaults to shared.

`.spec.brokers[*].brokerConfig.volumes[*].azureDisk.readOnly`

Type: boolean

readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

`.spec.brokers[*].brokerConfig.volumes[*].azureFile`

Type: object

azureFile represents an Azure File Service mount on the host and bind mount to the pod.

`.spec.brokers[*].brokerConfig.volumes[*].azureFile.readOnly`

Type: boolean

readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

`.spec.brokers[*].brokerConfig.volumes[*].azureFile.secretName`

Type: string (required)

secretName is the name of secret that contains Azure Storage Account Name and Key.

`.spec.brokers[*].brokerConfig.volumes[*].azureFile.shareName`

Type: string (required)

shareName is the azure share Name.

`.spec.brokers[*].brokerConfig.volumes[*].cephfs`

Type: object

cephFS represents a Ceph FS mount on the host that shares a pod’s lifetime.

`.spec.brokers[*].brokerConfig.volumes[*].cephfs.monitors`

Type: array (required)

monitors is Required: Monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it

`.spec.brokers[*].brokerConfig.volumes[*].cephfs.monitors[*]`

Type: string

`.spec.brokers[*].brokerConfig.volumes[*].cephfs.path`

Type: string

path is Optional: Used as the mounted root, rather than the full Ceph tree, default is /

`.spec.brokers[*].brokerConfig.volumes[*].cephfs.readOnly`

Type: boolean

readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it

`.spec.brokers[*].brokerConfig.volumes[*].cephfs.secretFile`

Type: string

secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it

`.spec.brokers[*].brokerConfig.volumes[*].cephfs.secretRef`

Type: object

secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it

`.spec.brokers[*].brokerConfig.volumes[*].cephfs.secretRef.name`

Type: string

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

`.spec.brokers[*].brokerConfig.volumes[*].cephfs.user`

Type: string

user is optional: User is the rados user name, default is admin. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it

`.spec.brokers[*].brokerConfig.volumes[*].cinder`

Type: object

cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md

`.spec.brokers[*].brokerConfig.volumes[*].cinder.fsType`

Type: string

fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md

`.spec.brokers[*].brokerConfig.volumes[*].cinder.readOnly`

Type: boolean

readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md

`.spec.brokers[*].brokerConfig.volumes[*].cinder.secretRef`

Type: object

secretRef is optional: points to a secret object containing parameters used to connect to OpenStack.

`.spec.brokers[*].brokerConfig.volumes[*].cinder.secretRef.name`

Type: string

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

`.spec.brokers[*].brokerConfig.volumes[*].cinder.volumeID`

Type: string (required)

volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md

.spec.brokers[*].brokerConfig.volumes[*].configMap

+
+
+
+object + +
+ +
+

configMap represents a configMap that should populate this volume

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.volumes[*].configMap.defaultMode

+
+
+
+integer + +
+ +
+

defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.volumes[*].configMap.items

+
+
+
+array + +
+ +
+

items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the ‘..’ path or start with ‘..’.

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.volumes[*].configMap.items[*]

+
+
+
+object + +
+ +
+

Maps a string key to a path within a volume.

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.volumes[*].configMap.items[*].key

+
+
+
+string +Required +
+ +
+

key is the key to project.

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.volumes[*].configMap.items[*].mode

+
+
+
+integer + +
+ +
+

mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.volumes[*].configMap.items[*].path

+
+
+
+string +Required +
+ +
+

path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element ‘..’. May not start with the string ‘..’.

+ +
+ +
+
+ +
+
+

.spec.brokers[*].brokerConfig.volumes[*].configMap.name

+
+
+
+string + +
+ +
+

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?

.spec.brokers[*].brokerConfig.volumes[*].configMap.optional

Type: boolean

optional specify whether the ConfigMap or its keys must be defined
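
Putting the configMap fields above together, an extra broker volume backed by a ConfigMap could be declared like the following sketch (the `extra-config` and `kafka-extra-config` names are illustrative, not taken from this reference):

```yaml
brokerConfig:
  volumes:
    - name: extra-config
      configMap:
        name: kafka-extra-config   # hypothetical ConfigMap in the same namespace
        defaultMode: 0644          # octal in YAML; JSON would require decimal 420
        optional: false
        items:
          - key: log4j.properties  # only this key is projected
            path: log4j/log4j.properties
            mode: 0444
```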

.spec.brokers[*].brokerConfig.volumes[*].csi

Type: object

csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature).

.spec.brokers[*].brokerConfig.volumes[*].csi.driver

Type: string (required)

driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster.

.spec.brokers[*].brokerConfig.volumes[*].csi.fsType

Type: string

fsType to mount. Ex. “ext4”, “xfs”, “ntfs”. If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply.

.spec.brokers[*].brokerConfig.volumes[*].csi.nodePublishSecretRef

Type: object

nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed.

.spec.brokers[*].brokerConfig.volumes[*].csi.nodePublishSecretRef.name

Type: string

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?

.spec.brokers[*].brokerConfig.volumes[*].csi.readOnly

Type: boolean

readOnly specifies a read-only configuration for the volume. Defaults to false (read/write).

.spec.brokers[*].brokerConfig.volumes[*].csi.volumeAttributes

Type: object

volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver’s documentation for supported values.
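
As a sketch of the csi fields above, an inline CSI volume might look like this (the driver and the `secretProviderClass` attribute are examples from the Secrets Store CSI driver; check your own driver's documentation for its supported attributes):

```yaml
brokerConfig:
  volumes:
    - name: inline-secrets
      csi:
        driver: secrets-store.csi.k8s.io  # must match the name registered in the cluster
        readOnly: true
        volumeAttributes:                 # driver-specific key/value pairs
          secretProviderClass: kafka-secrets
```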

.spec.brokers[*].brokerConfig.volumes[*].downwardAPI

Type: object

downwardAPI represents downward API about the pod that should populate this volume

.spec.brokers[*].brokerConfig.volumes[*].downwardAPI.defaultMode

Type: integer

Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

.spec.brokers[*].brokerConfig.volumes[*].downwardAPI.items

Type: array

Items is a list of downward API volume file

.spec.brokers[*].brokerConfig.volumes[*].downwardAPI.items[*]

Type: object

DownwardAPIVolumeFile represents information to create the file containing the pod field

.spec.brokers[*].brokerConfig.volumes[*].downwardAPI.items[*].fieldRef

Type: object

Required: Selects a field of the pod: only annotations, labels, name and namespace are supported.

.spec.brokers[*].brokerConfig.volumes[*].downwardAPI.items[*].fieldRef.apiVersion

Type: string

Version of the schema the FieldPath is written in terms of, defaults to “v1”.

.spec.brokers[*].brokerConfig.volumes[*].downwardAPI.items[*].fieldRef.fieldPath

Type: string (required)

Path of the field to select in the specified API version.

.spec.brokers[*].brokerConfig.volumes[*].downwardAPI.items[*].mode

Type: integer

Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

.spec.brokers[*].brokerConfig.volumes[*].downwardAPI.items[*].path

Type: string (required)

Required: Path is the relative path name of the file to be created. Must not be absolute or contain the ‘..’ path. Must be utf-8 encoded. The first item of the relative path must not start with ‘..’

.spec.brokers[*].brokerConfig.volumes[*].downwardAPI.items[*].resourceFieldRef

Type: object

Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.

.spec.brokers[*].brokerConfig.volumes[*].downwardAPI.items[*].resourceFieldRef.containerName

Type: string

Container name: required for volumes, optional for env vars

.spec.brokers[*].brokerConfig.volumes[*].downwardAPI.items[*].resourceFieldRef.divisor

Type: Quantity

Specifies the output format of the exposed resources, defaults to “1”

.spec.brokers[*].brokerConfig.volumes[*].downwardAPI.items[*].resourceFieldRef.resource

Type: string (required)

Required: resource to select
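
A downwardAPI volume combining both item kinds above could be sketched as follows (the container name `kafka` is an assumption for illustration, not prescribed by this reference):

```yaml
brokerConfig:
  volumes:
    - name: pod-info
      downwardAPI:
        defaultMode: 0644
        items:
          - path: labels               # file containing the pod's labels
            fieldRef:
              fieldPath: metadata.labels
          - path: cpu_limit            # file containing the container's CPU limit
            resourceFieldRef:
              containerName: kafka     # required for volumes
              resource: limits.cpu
              divisor: 1m              # expose the value in millicores
```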

.spec.brokers[*].brokerConfig.volumes[*].emptyDir

Type: object

emptyDir represents a temporary directory that shares a pod’s lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir

.spec.brokers[*].brokerConfig.volumes[*].emptyDir.medium

Type: string

medium represents what type of storage medium should back this directory. The default is “” which means to use the node’s default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir

.spec.brokers[*].brokerConfig.volumes[*].emptyDir.sizeLimit

Type: Quantity

sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: http://kubernetes.io/docs/user-guide/volumes#emptydir
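
For example, a memory-backed scratch volume using the two emptyDir fields above might be declared like this sketch:

```yaml
brokerConfig:
  volumes:
    - name: scratch
      emptyDir:
        medium: Memory   # tmpfs-backed; omit to use the node's default medium
        sizeLimit: 512Mi # counts against the pod's memory limits on Memory medium
```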

.spec.brokers[*].brokerConfig.volumes[*].ephemeral

Type: object

ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed.

Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim).

Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod.

Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information.

A pod can use both types of ephemeral volumes and persistent volumes at the same time.

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate

Type: object

Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long).

An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster.

This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created.

Required, must not be nil.

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.metadata

Type: object

May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation.

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec

Type: object (required)

The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here.

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.accessModes

Type: array

accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.accessModes[*]

Type: string

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.dataSource

Type: object

dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.dataSource.apiGroup

Type: string

APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.dataSource.kind

Type: string (required)

Kind is the type of resource being referenced

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.dataSource.name

Type: string (required)

Name is the name of resource being referenced

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.dataSourceRef

Type: object

dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.dataSourceRef.apiGroup

Type: string

APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.dataSourceRef.kind

Type: string (required)

Kind is the type of resource being referenced

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.dataSourceRef.name

Type: string (required)

Name is the name of resource being referenced

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.resources

Type: object

resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.resources.limits

Type: object

Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.resources.requests

Type: object

Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.selector

Type: object

selector is a label query over volumes to consider for binding.

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions

Type: array

matchExpressions is a list of label selector requirements. The requirements are ANDed.

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[*]

Type: object

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[*].key

Type: string (required)

key is the label key that the selector applies to.

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[*].operator

Type: string (required)

operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[*].values

Type: array

values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[*].values[*]

Type: string

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.selector.matchLabels

Type: object

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.storageClassName

Type: string

storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.volumeMode

Type: string

volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec.

.spec.brokers[*].brokerConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.volumeName

Type: string

volumeName is the binding reference to the PersistentVolume backing this claim.
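
Tying the volumeClaimTemplate fields above together, a generic ephemeral volume might be declared like this sketch (the `fast-ssd` storage class is a hypothetical example):

```yaml
brokerConfig:
  volumes:
    - name: scratch-data        # resulting PVC will be named <pod name>-scratch-data
      ephemeral:
        volumeClaimTemplate:
          metadata:
            labels:             # only labels and annotations are allowed here
              app: kafka
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: fast-ssd   # hypothetical StorageClass
            resources:
              requests:
                storage: 10Gi
```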

.spec.brokers[*].brokerConfig.volumes[*].fc

Type: object

fc represents a Fibre Channel resource that is attached to a kubelet’s host machine and then exposed to the pod.

.spec.brokers[*].brokerConfig.volumes[*].fc.fsType

Type: string

fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine

.spec.brokers[*].brokerConfig.volumes[*].fc.lun

Type: integer

lun is Optional: FC target lun number

.spec.brokers[*].brokerConfig.volumes[*].fc.readOnly

Type: boolean

readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

.spec.brokers[*].brokerConfig.volumes[*].fc.targetWWNs

Type: array

targetWWNs is Optional: FC target worldwide names (WWNs)

.spec.brokers[*].brokerConfig.volumes[*].fc.targetWWNs[*]

Type: string

.spec.brokers[*].brokerConfig.volumes[*].fc.wwids

Type: array

wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously.

.spec.brokers[*].brokerConfig.volumes[*].fc.wwids[*]

Type: string

.spec.brokers[*].brokerConfig.volumes[*].flexVolume

Type: object

flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin.

.spec.brokers[*].brokerConfig.volumes[*].flexVolume.driver

Type: string (required)

driver is the name of the driver to use for this volume.

.spec.brokers[*].brokerConfig.volumes[*].flexVolume.fsType

Type: string

fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. The default filesystem depends on FlexVolume script.

.spec.brokers[*].brokerConfig.volumes[*].flexVolume.options

Type: object

options is Optional: this field holds extra command options if any.

.spec.brokers[*].brokerConfig.volumes[*].flexVolume.readOnly

Type: boolean

readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

.spec.brokers[*].brokerConfig.volumes[*].flexVolume.secretRef

Type: object

secretRef is Optional: a reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts.

.spec.brokers[*].brokerConfig.volumes[*].flexVolume.secretRef.name

Type: string

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?

.spec.brokers[*].brokerConfig.volumes[*].flocker

Type: object

flocker represents a Flocker volume attached to a kubelet’s host machine. This depends on the Flocker control service being running

.spec.brokers[*].brokerConfig.volumes[*].flocker.datasetName

Type: string

datasetName is the name of the dataset, stored as metadata -> name on the Flocker dataset. Should be considered deprecated.

.spec.brokers[*].brokerConfig.volumes[*].flocker.datasetUUID

Type: string

datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset

.spec.brokers[*].brokerConfig.volumes[*].gcePersistentDisk

Type: object

gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet’s host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

.spec.brokers[*].brokerConfig.volumes[*].gcePersistentDisk.fsType

Type: string

fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine

.spec.brokers[*].brokerConfig.volumes[*].gcePersistentDisk.partition

Type: integer

partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as “1”. Similarly, the volume partition for /dev/sda is “0” (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

.spec.brokers[*].brokerConfig.volumes[*].gcePersistentDisk.pdName

Type: string (required)

pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

.spec.brokers[*].brokerConfig.volumes[*].gcePersistentDisk.readOnly

Type: boolean

readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

.spec.brokers[*].brokerConfig.volumes[*].gitRepo

Type: object

gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod’s container.

.spec.brokers[*].brokerConfig.volumes[*].gitRepo.directory

Type: string

directory is the target directory name. Must not contain or start with ‘..’. If ‘.’ is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name.

.spec.brokers[*].brokerConfig.volumes[*].gitRepo.repository

Type: string (required)

repository is the URL

.spec.brokers[*].brokerConfig.volumes[*].gitRepo.revision

Type: string

revision is the commit hash for the specified revision.

.spec.brokers[*].brokerConfig.volumes[*].glusterfs

Type: object

glusterfs represents a Glusterfs mount on the host that shares a pod’s lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md

.spec.brokers[*].brokerConfig.volumes[*].glusterfs.endpoints

Type: string (required)

endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod

.spec.brokers[*].brokerConfig.volumes[*].glusterfs.path

Type: string (required)

path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod

.spec.brokers[*].brokerConfig.volumes[*].glusterfs.readOnly

Type: boolean

readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod

.spec.brokers[*].brokerConfig.volumes[*].hostPath

Type: object

hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath — TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write.

.spec.brokers[*].brokerConfig.volumes[*].hostPath.path

Type: string (required)

path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath

.spec.brokers[*].brokerConfig.volumes[*].hostPath.type

Type: string

type for HostPath volume. Defaults to “”. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath
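
For example, a hostPath volume using the two fields above could be sketched as follows (the host directory is illustrative; most broker pods should not need hostPath):

```yaml
brokerConfig:
  volumes:
    - name: host-logs
      hostPath:
        path: /var/log/kafka-host    # hypothetical directory on the node
        type: DirectoryOrCreate      # create the directory if it does not exist
```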

.spec.brokers[*].brokerConfig.volumes[*].iscsi

Type: object

iscsi represents an ISCSI Disk resource that is attached to a kubelet’s host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md

.spec.brokers[*].brokerConfig.volumes[*].iscsi.chapAuthDiscovery

Type: boolean

chapAuthDiscovery defines whether iSCSI Discovery CHAP authentication is supported

.spec.brokers[*].brokerConfig.volumes[*].iscsi.chapAuthSession

Type: boolean

chapAuthSession defines whether iSCSI Session CHAP authentication is supported

.spec.brokers[*].brokerConfig.volumes[*].iscsi.fsType

Type: string

fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine

.spec.brokers[*].brokerConfig.volumes[*].iscsi.initiatorName

Type: string

initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, a new iSCSI interface <target portal>:<volume name> will be created for the connection.

.spec.brokers[*].brokerConfig.volumes[*].iscsi.iqn

Type: string (required)

iqn is the target iSCSI Qualified Name.

.spec.brokers[*].brokerConfig.volumes[*].iscsi.iscsiInterface

Type: string

iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to ‘default’ (tcp).

.spec.brokers[*].brokerConfig.volumes[*].iscsi.lun

Type: integer (required)

lun represents iSCSI Target Lun number.

.spec.brokers[*].brokerConfig.volumes[*].iscsi.portals

Type: array

portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).

.spec.brokers[*].brokerConfig.volumes[*].iscsi.portals[*]

Type: string

.spec.brokers[*].brokerConfig.volumes[*].iscsi.readOnly

Type: boolean

readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false.

.spec.brokers[*].brokerConfig.volumes[*].iscsi.secretRef

Type: object

secretRef is the CHAP Secret for iSCSI target and initiator authentication

.spec.brokers[*].brokerConfig.volumes[*].iscsi.secretRef.name

Type: string

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?

.spec.brokers[*].brokerConfig.volumes[*].iscsi.targetPortal

Type: string (required)

targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).

.spec.brokers[*].brokerConfig.volumes[*].name

Type: string (required)

name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

.spec.brokers[*].brokerConfig.volumes[*].nfs

Type: object

nfs represents an NFS mount on the host that shares a pod’s lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

.spec.brokers[*].brokerConfig.volumes[*].nfs.path

Type: string (required)

path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

.spec.brokers[*].brokerConfig.volumes[*].nfs.readOnly

Type: boolean

readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

.spec.brokers[*].brokerConfig.volumes[*].nfs.server

Type: string (required)

server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs
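
The three nfs fields above fit together as in the following sketch (the server name and export path are hypothetical):

```yaml
brokerConfig:
  volumes:
    - name: shared-archive
      nfs:
        server: nfs.example.com   # hostname or IP of the NFS server
        path: /exports/kafka      # path exported by the server
        readOnly: true
```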

.spec.brokers[*].brokerConfig.volumes[*].persistentVolumeClaim

Type: object

persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

.spec.brokers[*].brokerConfig.volumes[*].persistentVolumeClaim.claimName

Type: string (required)

claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

.spec.brokers[*].brokerConfig.volumes[*].persistentVolumeClaim.readOnly

Type: boolean

readOnly will force the ReadOnly setting in VolumeMounts. Default false.
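
As a sketch of the persistentVolumeClaim fields above, mounting a pre-existing PVC into a broker could look like this (the claim name is hypothetical; the PVC must exist in the same namespace as the broker pod):

```yaml
brokerConfig:
  volumes:
    - name: extra-data
      persistentVolumeClaim:
        claimName: kafka-extra-data  # hypothetical existing PVC
        readOnly: false
```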

.spec.brokers[*].brokerConfig.volumes[*].photonPersistentDisk

Type: object

photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine

.spec.brokers[*].brokerConfig.volumes[*].photonPersistentDisk.fsType

Type: string

fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified.

.spec.brokers[*].brokerConfig.volumes[*].photonPersistentDisk.pdID

Type: string (required)

pdID is the ID that identifies Photon Controller persistent disk

`.spec.brokers[*].brokerConfig.volumes[*].portworxVolume` (object)

portworxVolume represents a portworx volume attached and mounted on kubelets host machine

`.spec.brokers[*].brokerConfig.volumes[*].portworxVolume.fsType` (string)

fsType represents the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”. Implicitly inferred to be “ext4” if unspecified.

`.spec.brokers[*].brokerConfig.volumes[*].portworxVolume.readOnly` (boolean)

readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

`.spec.brokers[*].brokerConfig.volumes[*].portworxVolume.volumeID` (string, required)

volumeID uniquely identifies a Portworx volume

`.spec.brokers[*].brokerConfig.volumes[*].projected` (object)

projected items for all-in-one resources: secrets, configmaps, and downward API

`.spec.brokers[*].brokerConfig.volumes[*].projected.defaultMode` (integer)

defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources` (array)

sources is the list of volume projections

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*]` (object)

Projection that may be projected along with other supported volume types

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].configMap` (object)

configMap information about the configMap data to project

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].configMap.items` (array)

items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the ‘..’ path or start with ‘..’.

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].configMap.items[*]` (object)

Maps a string key to a path within a volume.

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].configMap.items[*].key` (string, required)

key is the key to project.

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].configMap.items[*].mode` (integer)

mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].configMap.items[*].path` (string, required)

path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element ‘..’. May not start with the string ‘..’.

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].configMap.name` (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].configMap.optional` (boolean)

optional specify whether the ConfigMap or its keys must be defined

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].downwardAPI` (object)

downwardAPI information about the downwardAPI data to project

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].downwardAPI.items` (array)

Items is a list of DownwardAPIVolume files

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].downwardAPI.items[*]` (object)

DownwardAPIVolumeFile represents information to create the file containing the pod field

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].downwardAPI.items[*].fieldRef` (object)

Required: Selects a field of the pod: only annotations, labels, name and namespace are supported.

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].downwardAPI.items[*].fieldRef.apiVersion` (string)

Version of the schema the FieldPath is written in terms of, defaults to “v1”.

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].downwardAPI.items[*].fieldRef.fieldPath` (string, required)

Path of the field to select in the specified API version.

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].downwardAPI.items[*].mode` (integer)

Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].downwardAPI.items[*].path` (string, required)

Required: Path is the relative path name of the file to be created. Must not be absolute or contain the ‘..’ path. Must be utf-8 encoded. The first item of the relative path must not start with ‘..’

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].downwardAPI.items[*].resourceFieldRef` (object)

Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].downwardAPI.items[*].resourceFieldRef.containerName` (string)

Container name: required for volumes, optional for env vars

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].downwardAPI.items[*].resourceFieldRef.divisor`

Specifies the output format of the exposed resources, defaults to “1”

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].downwardAPI.items[*].resourceFieldRef.resource` (string, required)

Required: resource to select

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].secret` (object)

secret information about the secret data to project

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].secret.items` (array)

items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the ‘..’ path or start with ‘..’.

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].secret.items[*]` (object)

Maps a string key to a path within a volume.

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].secret.items[*].key` (string, required)

key is the key to project.

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].secret.items[*].mode` (integer)

mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].secret.items[*].path` (string, required)

path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element ‘..’. May not start with the string ‘..’.

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].secret.name` (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].secret.optional` (boolean)

optional field specify whether the Secret or its key must be defined

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].serviceAccountToken` (object)

serviceAccountToken is information about the serviceAccountToken data to project

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].serviceAccountToken.audience` (string)

audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver.

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].serviceAccountToken.expirationSeconds` (integer)

expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes.

`.spec.brokers[*].brokerConfig.volumes[*].projected.sources[*].serviceAccountToken.path` (string, required)

path is the path relative to the mount point of the file to project the token into.
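
The projected volume fields above combine several sources into one mount. A minimal sketch of such a volume entry; the volume name, ConfigMap/Secret names, audience, and file paths are illustrative, while the field layout follows this reference:

```yaml
volumes:
  - name: all-in-one              # hypothetical projected volume
    projected:
      defaultMode: 420            # decimal 420 == octal 0644
      sources:
        - configMap:
            name: app-config      # hypothetical ConfigMap
            items:
              - key: broker.properties
                path: config/broker.properties   # relative, no '..'
        - secret:
            name: app-credentials # hypothetical Secret
            optional: true
        - downwardAPI:
            items:
              - path: labels
                fieldRef:
                  fieldPath: metadata.labels
        - serviceAccountToken:
            audience: vault       # illustrative audience
            expirationSeconds: 3600
            path: token
```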

`.spec.brokers[*].brokerConfig.volumes[*].quobyte` (object)

quobyte represents a Quobyte mount on the host that shares a pod’s lifetime

`.spec.brokers[*].brokerConfig.volumes[*].quobyte.group` (string)

group to map volume access to. Default is no group.

`.spec.brokers[*].brokerConfig.volumes[*].quobyte.readOnly` (boolean)

readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false.

`.spec.brokers[*].brokerConfig.volumes[*].quobyte.registry` (string, required)

registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes

`.spec.brokers[*].brokerConfig.volumes[*].quobyte.tenant` (string)

tenant owning the given Quobyte volume in the backend. Used with dynamically provisioned Quobyte volumes; the value is set by the plugin.

`.spec.brokers[*].brokerConfig.volumes[*].quobyte.user` (string)

user to map volume access to. Defaults to the serviceaccount user.

`.spec.brokers[*].brokerConfig.volumes[*].quobyte.volume` (string, required)

volume is a string that references an already created Quobyte volume by name.

`.spec.brokers[*].brokerConfig.volumes[*].rbd` (object)

rbd represents a Rados Block Device mount on the host that shares a pod’s lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md

`.spec.brokers[*].brokerConfig.volumes[*].rbd.fsType` (string)

fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd

`.spec.brokers[*].brokerConfig.volumes[*].rbd.image` (string, required)

image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

`.spec.brokers[*].brokerConfig.volumes[*].rbd.keyring` (string)

keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

`.spec.brokers[*].brokerConfig.volumes[*].rbd.monitors` (array, required)

monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

`.spec.brokers[*].brokerConfig.volumes[*].rbd.monitors[*]` (string)

`.spec.brokers[*].brokerConfig.volumes[*].rbd.pool` (string)

pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

`.spec.brokers[*].brokerConfig.volumes[*].rbd.readOnly` (boolean)

readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

`.spec.brokers[*].brokerConfig.volumes[*].rbd.secretRef` (object)

secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

`.spec.brokers[*].brokerConfig.volumes[*].rbd.secretRef.name` (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

`.spec.brokers[*].brokerConfig.volumes[*].rbd.user` (string)

user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

`.spec.brokers[*].brokerConfig.volumes[*].scaleIO` (object)

scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes.

`.spec.brokers[*].brokerConfig.volumes[*].scaleIO.fsType` (string)

fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. Default is “xfs”.

`.spec.brokers[*].brokerConfig.volumes[*].scaleIO.gateway` (string, required)

gateway is the host address of the ScaleIO API Gateway.

`.spec.brokers[*].brokerConfig.volumes[*].scaleIO.protectionDomain` (string)

protectionDomain is the name of the ScaleIO Protection Domain for the configured storage.

`.spec.brokers[*].brokerConfig.volumes[*].scaleIO.readOnly` (boolean)

readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

`.spec.brokers[*].brokerConfig.volumes[*].scaleIO.secretRef` (object, required)

secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail.

`.spec.brokers[*].brokerConfig.volumes[*].scaleIO.secretRef.name` (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

`.spec.brokers[*].brokerConfig.volumes[*].scaleIO.sslEnabled` (boolean)

sslEnabled is a flag to enable/disable SSL communication with the Gateway; default false.

`.spec.brokers[*].brokerConfig.volumes[*].scaleIO.storageMode` (string)

storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned.

`.spec.brokers[*].brokerConfig.volumes[*].scaleIO.storagePool` (string)

storagePool is the ScaleIO Storage Pool associated with the protection domain.

`.spec.brokers[*].brokerConfig.volumes[*].scaleIO.system` (string, required)

system is the name of the storage system as configured in ScaleIO.

`.spec.brokers[*].brokerConfig.volumes[*].scaleIO.volumeName` (string)

volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source.

`.spec.brokers[*].brokerConfig.volumes[*].secret` (object)

secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret

`.spec.brokers[*].brokerConfig.volumes[*].secret.defaultMode` (integer)

defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

`.spec.brokers[*].brokerConfig.volumes[*].secret.items` (array)

items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the ‘..’ path or start with ‘..’.

`.spec.brokers[*].brokerConfig.volumes[*].secret.items[*]` (object)

Maps a string key to a path within a volume.

`.spec.brokers[*].brokerConfig.volumes[*].secret.items[*].key` (string, required)

key is the key to project.

`.spec.brokers[*].brokerConfig.volumes[*].secret.items[*].mode` (integer)

mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

`.spec.brokers[*].brokerConfig.volumes[*].secret.items[*].path` (string, required)

path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element ‘..’. May not start with the string ‘..’.

`.spec.brokers[*].brokerConfig.volumes[*].secret.optional` (boolean)

optional field specify whether the Secret or its keys must be defined

`.spec.brokers[*].brokerConfig.volumes[*].secret.secretName` (string)

secretName is the name of the secret in the pod’s namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret
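
The secret volume fields above can be sketched as a single broker volume entry. The secret name, volume name, and file paths here are illustrative, not taken from this reference:

```yaml
volumes:
  - name: broker-certs
    secret:
      secretName: kafka-server-certs   # hypothetical Secret in the pod's namespace
      defaultMode: 420                 # decimal 420 == octal 0644
      optional: false
      items:
        - key: tls.crt
          path: certs/tls.crt          # relative path, must not contain '..'
```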

`.spec.brokers[*].brokerConfig.volumes[*].storageos` (object)

storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes.

`.spec.brokers[*].brokerConfig.volumes[*].storageos.fsType` (string)

fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified.

`.spec.brokers[*].brokerConfig.volumes[*].storageos.readOnly` (boolean)

readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

`.spec.brokers[*].brokerConfig.volumes[*].storageos.secretRef` (object)

secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted.

`.spec.brokers[*].brokerConfig.volumes[*].storageos.secretRef.name` (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

`.spec.brokers[*].brokerConfig.volumes[*].storageos.volumeName` (string)

volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace.

`.spec.brokers[*].brokerConfig.volumes[*].storageos.volumeNamespace` (string)

volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod’s namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to “default” if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created.

`.spec.brokers[*].brokerConfig.volumes[*].vsphereVolume` (object)

vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine

`.spec.brokers[*].brokerConfig.volumes[*].vsphereVolume.fsType` (string)

fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified.

`.spec.brokers[*].brokerConfig.volumes[*].vsphereVolume.storagePolicyID` (string)

storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName.

`.spec.brokers[*].brokerConfig.volumes[*].vsphereVolume.storagePolicyName` (string)

storagePolicyName is the storage Policy Based Management (SPBM) profile name.

`.spec.brokers[*].brokerConfig.volumes[*].vsphereVolume.volumePath` (string, required)

volumePath is the path that identifies vSphere volume vmdk

`.spec.brokers[*].brokerConfigGroup` (string)

`.spec.brokers[*].id` (integer, required)

`.spec.brokers[*].readOnlyConfig` (string)

`.spec.clientSSLCertSecret` (object)

ClientSSLCertSecret is a reference to the Kubernetes secret where a custom client SSL certificate can be provided. It is used by Koperator, Cruise Control, and the Cruise Control metrics reporter to communicate over SSL with the internal listener that is used for inter-broker communication. The client certificate must share the same chain of trust as the server certificate used by the corresponding internal listener. The secret must contain the keystore and truststore JKS files and their password in base64-encoded format under the keystore.jks, truststore.jks, and password data fields.

`.spec.clientSSLCertSecret.name` (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
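
Putting the two fields above together: the data keys (keystore.jks, truststore.jks, password) come from the field description in this reference, while the secret name is illustrative.

```yaml
spec:
  clientSSLCertSecret:
    name: client-ssl-cert   # hypothetical Secret holding keystore.jks,
                            # truststore.jks and password (all base64-encoded)
```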

`.spec.clusterImage` (string)

`.spec.clusterMetricsReporterImage` (string)

`.spec.clusterWideConfig` (string)

`.spec.cruiseControlConfig` (object, required)

CruiseControlConfig defines the config for Cruise Control

`.spec.cruiseControlConfig.capacityConfig` (string)

`.spec.cruiseControlConfig.clusterConfig` (string)

`.spec.cruiseControlConfig.config` (string)

`.spec.cruiseControlConfig.cruiseControlAnnotations` (object)

Annotations to be applied to CruiseControl pod

`.spec.cruiseControlConfig.cruiseControlEndpoint` (string)

`.spec.cruiseControlConfig.cruiseControlOperationSpec` (object)

CruiseControlOperationSpec specifies the configuration of the CruiseControlOperation handling

`.spec.cruiseControlConfig.cruiseControlOperationSpec.ttlSecondsAfterFinished` (integer)

When TTLSecondsAfterFinished is specified, the created and finished (completed successfully or completedWithError and errorPolicy: ignore) cruiseControlOperation custom resource will be deleted after the given time elapsed. When it is 0 then the resource is going to be deleted instantly after the operation is finished. When it is not specified the resource is not going to be removed. Value can be only zero and positive integers.
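
A minimal sketch of the field above in a KafkaCluster spec; the 300-second TTL is an illustrative value:

```yaml
spec:
  cruiseControlConfig:
    cruiseControlOperationSpec:
      # Finished CruiseControlOperation resources are deleted after 5 minutes.
      # 0 would delete them immediately; omitting the field keeps them forever.
      ttlSecondsAfterFinished: 300
```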

`.spec.cruiseControlConfig.cruiseControlTaskSpec` (object)

CruiseControlTaskSpec specifies the configuration of the CC Tasks

`.spec.cruiseControlConfig.cruiseControlTaskSpec.RetryDurationMinutes` (integer, required)

RetryDurationMinutes describes the amount of time the operator waits for the task.

`.spec.cruiseControlConfig.image` (string)

`.spec.cruiseControlConfig.imagePullSecrets` (array)

`.spec.cruiseControlConfig.imagePullSecrets[*]` (object)

LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace.

`.spec.cruiseControlConfig.imagePullSecrets[*].name` (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

`.spec.cruiseControlConfig.initContainers` (array)

InitContainers add extra initContainers to CruiseControl pod

`.spec.cruiseControlConfig.initContainers[*]` (object)

A single application container that you want to run within a pod.

`.spec.cruiseControlConfig.initContainers[*].args` (array)

Arguments to the entrypoint. The container image’s CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container’s environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell

`.spec.cruiseControlConfig.initContainers[*].args[*]` (string)

`.spec.cruiseControlConfig.initContainers[*].command` (array)

Entrypoint array. Not executed within a shell. The container image’s ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container’s environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell

`.spec.cruiseControlConfig.initContainers[*].command[*]` (string)

`.spec.cruiseControlConfig.initContainers[*].env` (array)

List of environment variables to set in the container. Cannot be updated.

`.spec.cruiseControlConfig.initContainers[*].envFrom` (array)

List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated.

`.spec.cruiseControlConfig.initContainers[*].envFrom[*]` (object)

EnvFromSource represents the source of a set of ConfigMaps

`.spec.cruiseControlConfig.initContainers[*].envFrom[*].configMapRef` (object)

The ConfigMap to select from

`.spec.cruiseControlConfig.initContainers[*].envFrom[*].configMapRef.name` (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

`.spec.cruiseControlConfig.initContainers[*].envFrom[*].configMapRef.optional` (boolean)

Specify whether the ConfigMap must be defined

`.spec.cruiseControlConfig.initContainers[*].envFrom[*].prefix` (string)

An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.

`.spec.cruiseControlConfig.initContainers[*].envFrom[*].secretRef` (object)

The Secret to select from

`.spec.cruiseControlConfig.initContainers[*].envFrom[*].secretRef.name` (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

`.spec.cruiseControlConfig.initContainers[*].envFrom[*].secretRef.optional` (boolean)

Specify whether the Secret must be defined

`.spec.cruiseControlConfig.initContainers[*].env[*]` (object)

EnvVar represents an environment variable present in a Container.

`.spec.cruiseControlConfig.initContainers[*].env[*].name` (string, required)

Name of the environment variable. Must be a C_IDENTIFIER.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].env[*].value

+
+
+
+string + +
+ +
+

Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to “”.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].env[*].valueFrom

+
+
+
+object + +
+ +
+

Source for the environment variable’s value. Cannot be used if value is not empty.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].env[*].valueFrom.configMapKeyRef

+
+
+
+object + +
+ +
+

Selects a key of a ConfigMap.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].env[*].valueFrom.configMapKeyRef.key

+
+
+
+string +Required +
+ +
+

The key to select.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].env[*].valueFrom.configMapKeyRef.name

+
+
+
+string + +
+ +
+

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].env[*].valueFrom.configMapKeyRef.optional

+
+
+
+boolean + +
+ +
+

Specify whether the ConfigMap or its key must be defined

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].env[*].valueFrom.fieldRef

+
+
+
+object + +
+ +
+

Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].env[*].valueFrom.fieldRef.apiVersion

+
+
+
+string + +
+ +
+

Version of the schema the FieldPath is written in terms of, defaults to “v1”.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].env[*].valueFrom.fieldRef.fieldPath

+
+
+
+string +Required +
+ +
+

Path of the field to select in the specified API version.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].env[*].valueFrom.resourceFieldRef

+
+
+
+object + +
+ +
+

Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].env[*].valueFrom.resourceFieldRef.containerName

+
+
+
+string + +
+ +
+

Container name: required for volumes, optional for env vars

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].env[*].valueFrom.resourceFieldRef.divisor

+
+
+
+ + +
+ +
+

Specifies the output format of the exposed resources, defaults to “1”

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].env[*].valueFrom.resourceFieldRef.resource

+
+
+
+string +Required +
+ +
+

Required: resource to select

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].env[*].valueFrom.secretKeyRef

+
+
+
+object + +
+ +
+

Selects a key of a secret in the pod’s namespace

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].env[*].valueFrom.secretKeyRef.key

+
+
+
+string +Required +
+ +
+

The key of the secret to select from. Must be a valid secret key.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].env[*].valueFrom.secretKeyRef.name

+
+
+
+string + +
+ +
+

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].env[*].valueFrom.secretKeyRef.optional

+
+
+
+boolean + +
+ +
+

Specify whether the Secret or its key must be defined

+ +
+ +
+
+ +
+
+
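The `env`/`valueFrom` variants above might be used together like this (container, Secret, and variable names are hypothetical):

```yaml
spec:
  cruiseControlConfig:
    initContainers:
      - name: env-demo                  # hypothetical init container
        image: busybox:1.36
        env:
          - name: POD_NAME              # downward API: the pod's own name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: GREETING
            value: "hello $(POD_NAME)"  # expands because POD_NAME is defined above it
          - name: DB_PASSWORD
            valueFrom:
              secretKeyRef:
                name: db-secret         # hypothetical Secret
                key: password
          - name: MEM_LIMIT_MI
            valueFrom:
              resourceFieldRef:
                containerName: env-demo
                resource: limits.memory
                divisor: 1Mi            # expose the limit as a count of mebibytes
```

Note the ordering: `$(VAR_NAME)` references only expand against variables defined earlier in the list, so `POD_NAME` must precede `GREETING`.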

.spec.cruiseControlConfig.initContainers[*].image

Type: string

Container image name. More info: https://kubernetes.io/docs/concepts/containers/images. This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets.

.spec.cruiseControlConfig.initContainers[*].imagePullPolicy

Type: string

Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images

.spec.cruiseControlConfig.initContainers[*].lifecycle

Type: object

Actions that the management system should take in response to container lifecycle events. Cannot be updated.

.spec.cruiseControlConfig.initContainers[*].lifecycle.postStart

Type: object

PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks

.spec.cruiseControlConfig.initContainers[*].lifecycle.postStart.exec

Type: object

Exec specifies the action to take.

.spec.cruiseControlConfig.initContainers[*].lifecycle.postStart.exec.command

Type: array

Command is the command line to execute inside the container, the working directory for the command is root (‘/’) in the container’s filesystem. The command is simply exec’d, it is not run inside a shell, so traditional shell instructions (‘|’, etc) won’t work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.

.spec.cruiseControlConfig.initContainers[*].lifecycle.postStart.exec.command[*]

Type: string

.spec.cruiseControlConfig.initContainers[*].lifecycle.postStart.httpGet

Type: object

HTTPGet specifies the http request to perform.

.spec.cruiseControlConfig.initContainers[*].lifecycle.postStart.httpGet.host

Type: string

Host name to connect to, defaults to the pod IP. You probably want to set “Host” in httpHeaders instead.

.spec.cruiseControlConfig.initContainers[*].lifecycle.postStart.httpGet.httpHeaders

Type: array

Custom headers to set in the request. HTTP allows repeated headers.

.spec.cruiseControlConfig.initContainers[*].lifecycle.postStart.httpGet.httpHeaders[*]

Type: object

HTTPHeader describes a custom header to be used in HTTP probes

.spec.cruiseControlConfig.initContainers[*].lifecycle.postStart.httpGet.httpHeaders[*].name

Type: string, required

The header field name

.spec.cruiseControlConfig.initContainers[*].lifecycle.postStart.httpGet.httpHeaders[*].value

Type: string, required

The header field value

.spec.cruiseControlConfig.initContainers[*].lifecycle.postStart.httpGet.path

Type: string

Path to access on the HTTP server.

.spec.cruiseControlConfig.initContainers[*].lifecycle.postStart.httpGet.port

Required

Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

.spec.cruiseControlConfig.initContainers[*].lifecycle.postStart.httpGet.scheme

Type: string

Scheme to use for connecting to the host. Defaults to HTTP.

.spec.cruiseControlConfig.initContainers[*].lifecycle.postStart.tcpSocket

Type: object

Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified.

.spec.cruiseControlConfig.initContainers[*].lifecycle.postStart.tcpSocket.host

Type: string

Optional: Host name to connect to, defaults to the pod IP.

.spec.cruiseControlConfig.initContainers[*].lifecycle.postStart.tcpSocket.port

Required

Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

.spec.cruiseControlConfig.initContainers[*].lifecycle.preStop

Type: object

PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod’s termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod’s termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks

.spec.cruiseControlConfig.initContainers[*].lifecycle.preStop.exec

Type: object

Exec specifies the action to take.

.spec.cruiseControlConfig.initContainers[*].lifecycle.preStop.exec.command

Type: array

Command is the command line to execute inside the container, the working directory for the command is root (‘/’) in the container’s filesystem. The command is simply exec’d, it is not run inside a shell, so traditional shell instructions (‘|’, etc) won’t work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.

.spec.cruiseControlConfig.initContainers[*].lifecycle.preStop.exec.command[*]

Type: string

.spec.cruiseControlConfig.initContainers[*].lifecycle.preStop.httpGet

Type: object

HTTPGet specifies the http request to perform.

.spec.cruiseControlConfig.initContainers[*].lifecycle.preStop.httpGet.host

Type: string

Host name to connect to, defaults to the pod IP. You probably want to set “Host” in httpHeaders instead.

.spec.cruiseControlConfig.initContainers[*].lifecycle.preStop.httpGet.httpHeaders

Type: array

Custom headers to set in the request. HTTP allows repeated headers.

.spec.cruiseControlConfig.initContainers[*].lifecycle.preStop.httpGet.httpHeaders[*]

Type: object

HTTPHeader describes a custom header to be used in HTTP probes

.spec.cruiseControlConfig.initContainers[*].lifecycle.preStop.httpGet.httpHeaders[*].name

Type: string, required

The header field name

.spec.cruiseControlConfig.initContainers[*].lifecycle.preStop.httpGet.httpHeaders[*].value

Type: string, required

The header field value

.spec.cruiseControlConfig.initContainers[*].lifecycle.preStop.httpGet.path

Type: string

Path to access on the HTTP server.

.spec.cruiseControlConfig.initContainers[*].lifecycle.preStop.httpGet.port

Required

Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

.spec.cruiseControlConfig.initContainers[*].lifecycle.preStop.httpGet.scheme

Type: string

Scheme to use for connecting to the host. Defaults to HTTP.

.spec.cruiseControlConfig.initContainers[*].lifecycle.preStop.tcpSocket

Type: object

Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified.

.spec.cruiseControlConfig.initContainers[*].lifecycle.preStop.tcpSocket.host

Type: string

Optional: Host name to connect to, defaults to the pod IP.

.spec.cruiseControlConfig.initContainers[*].lifecycle.preStop.tcpSocket.port

Required

Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
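A `lifecycle` block wiring these fields together could look like the sketch below (the endpoint, port, and header are hypothetical). Note that Kubernetes rejects lifecycle hooks on ordinary init containers; this shape applies to regular containers and, on newer Kubernetes versions, to sidecar-style init containers with `restartPolicy: Always`.

```yaml
lifecycle:
  postStart:
    exec:
      # exec'd directly, not via a shell, so a shell must be invoked explicitly
      command: ["/bin/sh", "-c", "echo started > /tmp/started"]
  preStop:
    httpGet:
      path: /shutdown            # hypothetical drain endpoint
      port: 8080                 # may also be a named port (IANA_SVC_NAME)
      scheme: HTTP
      httpHeaders:
        - name: X-Shutdown-Reason
          value: pre-stop
```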

.spec.cruiseControlConfig.initContainers[*].livenessProbe

Type: object

Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

.spec.cruiseControlConfig.initContainers[*].livenessProbe.exec

Type: object

Exec specifies the action to take.

.spec.cruiseControlConfig.initContainers[*].livenessProbe.exec.command

Type: array

Command is the command line to execute inside the container, the working directory for the command is root (‘/’) in the container’s filesystem. The command is simply exec’d, it is not run inside a shell, so traditional shell instructions (‘|’, etc) won’t work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.

.spec.cruiseControlConfig.initContainers[*].livenessProbe.exec.command[*]

Type: string

.spec.cruiseControlConfig.initContainers[*].livenessProbe.failureThreshold

Type: integer

Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.

.spec.cruiseControlConfig.initContainers[*].livenessProbe.grpc

Type: object

GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate.

.spec.cruiseControlConfig.initContainers[*].livenessProbe.grpc.port

Type: integer, required

Port number of the gRPC service. Number must be in the range 1 to 65535.

.spec.cruiseControlConfig.initContainers[*].livenessProbe.grpc.service

Type: string

Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC.

.spec.cruiseControlConfig.initContainers[*].livenessProbe.httpGet

Type: object

HTTPGet specifies the http request to perform.

.spec.cruiseControlConfig.initContainers[*].livenessProbe.httpGet.host

Type: string

Host name to connect to, defaults to the pod IP. You probably want to set “Host” in httpHeaders instead.

.spec.cruiseControlConfig.initContainers[*].livenessProbe.httpGet.httpHeaders

Type: array

Custom headers to set in the request. HTTP allows repeated headers.

.spec.cruiseControlConfig.initContainers[*].livenessProbe.httpGet.httpHeaders[*]

Type: object

HTTPHeader describes a custom header to be used in HTTP probes

.spec.cruiseControlConfig.initContainers[*].livenessProbe.httpGet.httpHeaders[*].name

Type: string, required

The header field name

.spec.cruiseControlConfig.initContainers[*].livenessProbe.httpGet.httpHeaders[*].value

Type: string, required

The header field value

.spec.cruiseControlConfig.initContainers[*].livenessProbe.httpGet.path

Type: string

Path to access on the HTTP server.

.spec.cruiseControlConfig.initContainers[*].livenessProbe.httpGet.port

Required

Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

.spec.cruiseControlConfig.initContainers[*].livenessProbe.httpGet.scheme

Type: string

Scheme to use for connecting to the host. Defaults to HTTP.

.spec.cruiseControlConfig.initContainers[*].livenessProbe.initialDelaySeconds

Type: integer

Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

.spec.cruiseControlConfig.initContainers[*].livenessProbe.periodSeconds

Type: integer

How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.

.spec.cruiseControlConfig.initContainers[*].livenessProbe.successThreshold

Type: integer

Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.

.spec.cruiseControlConfig.initContainers[*].livenessProbe.tcpSocket

Type: object

TCPSocket specifies an action involving a TCP port.

.spec.cruiseControlConfig.initContainers[*].livenessProbe.tcpSocket.host

Type: string

Optional: Host name to connect to, defaults to the pod IP.

.spec.cruiseControlConfig.initContainers[*].livenessProbe.tcpSocket.port

Required

Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

.spec.cruiseControlConfig.initContainers[*].livenessProbe.terminationGracePeriodSeconds

Type: integer

Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod’s terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset.

.spec.cruiseControlConfig.initContainers[*].livenessProbe.timeoutSeconds

Type: integer

Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
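Taken together, the probe fields above might be configured as in this sketch (the endpoint and port are hypothetical; as with lifecycle hooks, ordinary init containers do not accept probes, so the shape is mainly relevant for regular containers):

```yaml
livenessProbe:
  httpGet:
    path: /healthz            # hypothetical health endpoint
    port: 8080                # may also be a named port (IANA_SVC_NAME)
    scheme: HTTP
  initialDelaySeconds: 10     # wait before the first probe
  periodSeconds: 10           # probe interval
  timeoutSeconds: 1           # per-probe timeout
  failureThreshold: 3         # restart after 3 consecutive failures
  successThreshold: 1         # must be 1 for liveness probes
```

Exactly one of `exec`, `httpGet`, `tcpSocket`, or `grpc` should be set per probe; the threshold and timing fields apply regardless of which handler is chosen.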

.spec.cruiseControlConfig.initContainers[*].name

Type: string, required

Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.

.spec.cruiseControlConfig.initContainers[*].ports

Type: array

List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default “0.0.0.0” address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information see https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated.

.spec.cruiseControlConfig.initContainers[*].ports[*]

Type: object

ContainerPort represents a network port in a single container.

.spec.cruiseControlConfig.initContainers[*].ports[*].containerPort

Type: integer, required

Number of the port to expose on the pod’s IP address. This must be a valid port number, 0 < x < 65536.

.spec.cruiseControlConfig.initContainers[*].ports[*].hostIP

Type: string

What host IP to bind the external port to.

.spec.cruiseControlConfig.initContainers[*].ports[*].hostPort

Type: integer

Number of the port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this.

.spec.cruiseControlConfig.initContainers[*].ports[*].name

Type: string

Name for the port, which can be referred to by services. If specified, this must be an IANA_SVC_NAME and unique within the pod.

.spec.cruiseControlConfig.initContainers[*].ports[*].protocol

Type: string

Protocol for port. Must be UDP, TCP, or SCTP. Defaults to “TCP”.
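A minimal `ports` entry using these fields (the container and port name are hypothetical):

```yaml
spec:
  cruiseControlConfig:
    initContainers:
      - name: port-demo        # hypothetical init container
        image: busybox:1.36
        ports:
          - name: metrics      # IANA_SVC_NAME, unique within the pod
            containerPort: 9090
            protocol: TCP      # the default; shown for clarity
```

Remember that this list is informational: a process listening on 0.0.0.0 inside the container is reachable whether or not its port is declared here.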

.spec.cruiseControlConfig.initContainers[*].readinessProbe

Type: object

Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

.spec.cruiseControlConfig.initContainers[*].readinessProbe.exec

Type: object

Exec specifies the action to take.

.spec.cruiseControlConfig.initContainers[*].readinessProbe.exec.command

Type: array

Command is the command line to execute inside the container, the working directory for the command is root (‘/’) in the container’s filesystem. The command is simply exec’d, it is not run inside a shell, so traditional shell instructions (‘|’, etc) won’t work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.

.spec.cruiseControlConfig.initContainers[*].readinessProbe.exec.command[*]

Type: string

.spec.cruiseControlConfig.initContainers[*].readinessProbe.failureThreshold

Type: integer

Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.

.spec.cruiseControlConfig.initContainers[*].readinessProbe.grpc

Type: object

GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate.

.spec.cruiseControlConfig.initContainers[*].readinessProbe.grpc.port

Type: integer, required

Port number of the gRPC service. Number must be in the range 1 to 65535.

.spec.cruiseControlConfig.initContainers[*].readinessProbe.grpc.service

Type: string

Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC.

.spec.cruiseControlConfig.initContainers[*].readinessProbe.httpGet

Type: object

HTTPGet specifies the http request to perform.

.spec.cruiseControlConfig.initContainers[*].readinessProbe.httpGet.host

Type: string

Host name to connect to, defaults to the pod IP. You probably want to set “Host” in httpHeaders instead.

.spec.cruiseControlConfig.initContainers[*].readinessProbe.httpGet.httpHeaders

Type: array

Custom headers to set in the request. HTTP allows repeated headers.

.spec.cruiseControlConfig.initContainers[*].readinessProbe.httpGet.httpHeaders[*]

Type: object

HTTPHeader describes a custom header to be used in HTTP probes

.spec.cruiseControlConfig.initContainers[*].readinessProbe.httpGet.httpHeaders[*].name

Type: string, required

The header field name

.spec.cruiseControlConfig.initContainers[*].readinessProbe.httpGet.httpHeaders[*].value

Type: string, required

The header field value

.spec.cruiseControlConfig.initContainers[*].readinessProbe.httpGet.path

Type: string

Path to access on the HTTP server.

.spec.cruiseControlConfig.initContainers[*].readinessProbe.httpGet.port

Required

Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

.spec.cruiseControlConfig.initContainers[*].readinessProbe.httpGet.scheme

Type: string

Scheme to use for connecting to the host. Defaults to HTTP.

.spec.cruiseControlConfig.initContainers[*].readinessProbe.initialDelaySeconds

Type: integer

Number of seconds after the container has started before readiness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

.spec.cruiseControlConfig.initContainers[*].readinessProbe.periodSeconds

Type: integer

How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.

.spec.cruiseControlConfig.initContainers[*].readinessProbe.successThreshold

Type: integer

Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.

.spec.cruiseControlConfig.initContainers[*].readinessProbe.tcpSocket

Type: object

TCPSocket specifies an action involving a TCP port.

.spec.cruiseControlConfig.initContainers[*].readinessProbe.tcpSocket.host

Type: string

Optional: Host name to connect to, defaults to the pod IP.

.spec.cruiseControlConfig.initContainers[*].readinessProbe.tcpSocket.port

Required

Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

.spec.cruiseControlConfig.initContainers[*].readinessProbe.terminationGracePeriodSeconds

Type: integer

Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod’s terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset.

.spec.cruiseControlConfig.initContainers[*].readinessProbe.timeoutSeconds

Type: integer

Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

.spec.cruiseControlConfig.initContainers[*].resources

Type: object

Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

.spec.cruiseControlConfig.initContainers[*].resources.limits

Type: object

Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

.spec.cruiseControlConfig.initContainers[*].resources.requests

Type: object

Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
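A typical `resources` block combining these fields (the quantities are illustrative only):

```yaml
resources:
  requests:            # what the scheduler reserves for the container
    cpu: 100m          # 0.1 CPU core
    memory: 128Mi
  limits:              # hard caps enforced at runtime
    cpu: 500m
    memory: 256Mi
```

If `requests` were omitted here, each request would default to the corresponding limit.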

.spec.cruiseControlConfig.initContainers[*].securityContext

Type: object

SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

.spec.cruiseControlConfig.initContainers[*].securityContext.allowPrivilegeEscalation

Type: boolean

AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows.

.spec.cruiseControlConfig.initContainers[*].securityContext.capabilities

Type: object

The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows.

.spec.cruiseControlConfig.initContainers[*].securityContext.capabilities.add

Type: array

Added capabilities

.spec.cruiseControlConfig.initContainers[*].securityContext.capabilities.add[*]

Type: string

Capability represents a POSIX capabilities type

.spec.cruiseControlConfig.initContainers[*].securityContext.capabilities.drop

Type: array

Removed capabilities

.spec.cruiseControlConfig.initContainers[*].securityContext.capabilities.drop[*]

Type: string

Capability represents a POSIX capabilities type

.spec.cruiseControlConfig.initContainers[*].securityContext.privileged

Type: boolean

Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows.

.spec.cruiseControlConfig.initContainers[*].securityContext.procMount

Type: string

procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows.

.spec.cruiseControlConfig.initContainers[*].securityContext.readOnlyRootFilesystem

Type: boolean

Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows.

.spec.cruiseControlConfig.initContainers[*].securityContext.runAsGroup

Type: integer

The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.

.spec.cruiseControlConfig.initContainers[*].securityContext.runAsNonRoot

Type: boolean

Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.

.spec.cruiseControlConfig.initContainers[*].securityContext.runAsUser

Type: integer

The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.

.spec.cruiseControlConfig.initContainers[*].securityContext.seLinuxOptions

Type: object

The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.

.spec.cruiseControlConfig.initContainers[*].securityContext.seLinuxOptions.level

Type: string

Level is SELinux level label that applies to the container.

.spec.cruiseControlConfig.initContainers[*].securityContext.seLinuxOptions.role

Type: string

Role is a SELinux role label that applies to the container.

.spec.cruiseControlConfig.initContainers[*].securityContext.seLinuxOptions.type

Type: string

Type is a SELinux type label that applies to the container.

.spec.cruiseControlConfig.initContainers[*].securityContext.seLinuxOptions.user

Type: string

User is a SELinux user label that applies to the container.

.spec.cruiseControlConfig.initContainers[*].securityContext.seccompProfile

+
+
+
+object + +
+ +
+

The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].securityContext.seccompProfile.localhostProfile

+
+
+
+string + +
+ +
+

localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet’s configured seccomp profile location. Must only be set if type is “Localhost”.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].securityContext.seccompProfile.type

+
+
+
+string +Required +
+ +
+

type indicates which kind of seccomp profile will be applied. Valid options are: + Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].securityContext.windowsOptions

+
+
+
+object + +
+ +
+

The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].securityContext.windowsOptions.gmsaCredentialSpec

+
+
+
+string + +
+ +
+

GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].securityContext.windowsOptions.gmsaCredentialSpecName

+
+
+
+string + +
+ +
+

GMSACredentialSpecName is the name of the GMSA credential spec to use.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].securityContext.windowsOptions.hostProcess

+
+
+
+boolean + +
+ +
+

HostProcess determines if a container should be run as a ‘Host Process’ container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod’s containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].securityContext.windowsOptions.runAsUserName

+
+
+
+string + +
+ +
+

The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.initContainers[*].startupProbe

Type: object

StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

.spec.cruiseControlConfig.initContainers[*].startupProbe.exec

Type: object

Exec specifies the action to take.

.spec.cruiseControlConfig.initContainers[*].startupProbe.exec.command

Type: array

Command is the command line to execute inside the container; the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc.) won't work. To use a shell, you need to explicitly call out to that shell. An exit status of 0 is treated as live/healthy and non-zero is unhealthy.

.spec.cruiseControlConfig.initContainers[*].startupProbe.exec.command[*]

Type: string

.spec.cruiseControlConfig.initContainers[*].startupProbe.failureThreshold

Type: integer

Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.

.spec.cruiseControlConfig.initContainers[*].startupProbe.grpc

Type: object

GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling the GRPCContainerProbe feature gate.

.spec.cruiseControlConfig.initContainers[*].startupProbe.grpc.port

Type: integer (required)

Port number of the gRPC service. Number must be in the range 1 to 65535.

.spec.cruiseControlConfig.initContainers[*].startupProbe.grpc.service

Type: string

Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC.

.spec.cruiseControlConfig.initContainers[*].startupProbe.httpGet

Type: object

HTTPGet specifies the HTTP request to perform.

.spec.cruiseControlConfig.initContainers[*].startupProbe.httpGet.host

Type: string

Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.

.spec.cruiseControlConfig.initContainers[*].startupProbe.httpGet.httpHeaders

Type: array

Custom headers to set in the request. HTTP allows repeated headers.

.spec.cruiseControlConfig.initContainers[*].startupProbe.httpGet.httpHeaders[*]

Type: object

HTTPHeader describes a custom header to be used in HTTP probes.

.spec.cruiseControlConfig.initContainers[*].startupProbe.httpGet.httpHeaders[*].name

Type: string (required)

The header field name.

.spec.cruiseControlConfig.initContainers[*].startupProbe.httpGet.httpHeaders[*].value

Type: string (required)

The header field value.

.spec.cruiseControlConfig.initContainers[*].startupProbe.httpGet.path

Type: string

Path to access on the HTTP server.

.spec.cruiseControlConfig.initContainers[*].startupProbe.httpGet.port

Type: integer or string (required)

Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

.spec.cruiseControlConfig.initContainers[*].startupProbe.httpGet.scheme

Type: string

Scheme to use for connecting to the host. Defaults to HTTP.

.spec.cruiseControlConfig.initContainers[*].startupProbe.initialDelaySeconds

Type: integer

Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

.spec.cruiseControlConfig.initContainers[*].startupProbe.periodSeconds

Type: integer

How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.

.spec.cruiseControlConfig.initContainers[*].startupProbe.successThreshold

Type: integer

Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.

.spec.cruiseControlConfig.initContainers[*].startupProbe.tcpSocket

Type: object

TCPSocket specifies an action involving a TCP port.

.spec.cruiseControlConfig.initContainers[*].startupProbe.tcpSocket.host

Type: string

Optional: Host name to connect to, defaults to the pod IP.

.spec.cruiseControlConfig.initContainers[*].startupProbe.tcpSocket.port

Type: integer or string (required)

Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

.spec.cruiseControlConfig.initContainers[*].startupProbe.terminationGracePeriodSeconds

Type: integer

Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling the ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset.

.spec.cruiseControlConfig.initContainers[*].startupProbe.timeoutSeconds

Type: integer

Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
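
The startupProbe fields above combine into a small piece of YAML. The fragment below is an illustrative sketch only: the init container name, image, path, port, and timing values are hypothetical, not Koperator defaults.

```yaml
spec:
  cruiseControlConfig:
    initContainers:
      - name: wait-for-kafka        # hypothetical init container
        image: busybox:1.36
        startupProbe:
          httpGet:                  # exactly one probe action (exec, grpc, httpGet, or tcpSocket)
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          failureThreshold: 30      # allows up to 30 * 10s = 300s for startup
          timeoutSeconds: 1
```

While the startup probe is failing, liveness and readiness probes are not executed; after failureThreshold consecutive failures the container is restarted.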

.spec.cruiseControlConfig.initContainers[*].stdin

Type: boolean

Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false.

.spec.cruiseControlConfig.initContainers[*].stdinOnce

Type: boolean

Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true, the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false.

.spec.cruiseControlConfig.initContainers[*].terminationMessagePath

Type: string

Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. The message written is intended to be a brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated.

.spec.cruiseControlConfig.initContainers[*].terminationMessagePolicy

Type: string

Indicates how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated.

.spec.cruiseControlConfig.initContainers[*].tty

Type: boolean

Whether this container should allocate a TTY for itself; also requires 'stdin' to be true. Default is false.

.spec.cruiseControlConfig.initContainers[*].volumeDevices

Type: array

volumeDevices is the list of block devices to be used by the container.

.spec.cruiseControlConfig.initContainers[*].volumeDevices[*]

Type: object

volumeDevice describes a mapping of a raw block device within a container.

.spec.cruiseControlConfig.initContainers[*].volumeDevices[*].devicePath

Type: string (required)

devicePath is the path inside of the container that the device will be mapped to.

.spec.cruiseControlConfig.initContainers[*].volumeDevices[*].name

Type: string (required)

name must match the name of a persistentVolumeClaim in the pod.

.spec.cruiseControlConfig.initContainers[*].volumeMounts

Type: array

Pod volumes to mount into the container's filesystem. Cannot be updated.

.spec.cruiseControlConfig.initContainers[*].volumeMounts[*]

Type: object

VolumeMount describes a mounting of a Volume within a container.

.spec.cruiseControlConfig.initContainers[*].volumeMounts[*].mountPath

Type: string (required)

Path within the container at which the volume should be mounted. Must not contain ':'.

.spec.cruiseControlConfig.initContainers[*].volumeMounts[*].mountPropagation

Type: string

mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.

.spec.cruiseControlConfig.initContainers[*].volumeMounts[*].name

Type: string (required)

This must match the Name of a Volume.

.spec.cruiseControlConfig.initContainers[*].volumeMounts[*].readOnly

Type: boolean

Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.

.spec.cruiseControlConfig.initContainers[*].volumeMounts[*].subPath

Type: string

Path within the volume from which the container's volume should be mounted. Defaults to "" (the volume's root).

.spec.cruiseControlConfig.initContainers[*].volumeMounts[*].subPathExpr

Type: string

Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath, but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (the volume's root). SubPathExpr and SubPath are mutually exclusive.

.spec.cruiseControlConfig.initContainers[*].workingDir

Type: string

Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.

.spec.cruiseControlConfig.log4jConfig

Type: string

.spec.cruiseControlConfig.nodeSelector

Type: object

.spec.cruiseControlConfig.podSecurityContext

Type: object

PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext.

.spec.cruiseControlConfig.podSecurityContext.fsGroup

Type: integer

A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup. 2. The setgid bit is set (new files created in the volume will be owned by FSGroup). 3. The permission bits are OR'd with rw-rw----. If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows.

.spec.cruiseControlConfig.podSecurityContext.fsGroupChangePolicy

Type: string

fsGroupChangePolicy defines the behavior of changing ownership and permission of the volume before being exposed inside the Pod. This field will only apply to volume types which support fsGroup-based ownership (and permissions). It will have no effect on ephemeral volume types such as secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows.

.spec.cruiseControlConfig.podSecurityContext.runAsGroup

Type: integer

The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.

.spec.cruiseControlConfig.podSecurityContext.runAsNonRoot

Type: boolean

Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.

.spec.cruiseControlConfig.podSecurityContext.runAsUser

Type: integer

The UID to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.

.spec.cruiseControlConfig.podSecurityContext.seLinuxOptions

Type: object

The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.

.spec.cruiseControlConfig.podSecurityContext.seLinuxOptions.level

Type: string

Level is the SELinux level label that applies to the container.

.spec.cruiseControlConfig.podSecurityContext.seLinuxOptions.role

Type: string

Role is the SELinux role label that applies to the container.

.spec.cruiseControlConfig.podSecurityContext.seLinuxOptions.type

Type: string

Type is the SELinux type label that applies to the container.

.spec.cruiseControlConfig.podSecurityContext.seLinuxOptions.user

Type: string

User is the SELinux user label that applies to the container.

.spec.cruiseControlConfig.podSecurityContext.seccompProfile

Type: object

The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows.

.spec.cruiseControlConfig.podSecurityContext.seccompProfile.localhostProfile

Type: string

localhostProfile indicates that a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost".

.spec.cruiseControlConfig.podSecurityContext.seccompProfile.type

Type: string (required)

type indicates which kind of seccomp profile will be applied. Valid options are: Localhost (a profile defined in a file on the node should be used), RuntimeDefault (the container runtime default profile should be used), and Unconfined (no profile should be applied).

.spec.cruiseControlConfig.podSecurityContext.supplementalGroups

Type: array

A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows.

.spec.cruiseControlConfig.podSecurityContext.supplementalGroups[*]

Type: integer

.spec.cruiseControlConfig.podSecurityContext.sysctls

Type: array

Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows.

.spec.cruiseControlConfig.podSecurityContext.sysctls[*]

Type: object

Sysctl defines a kernel parameter to be set.

.spec.cruiseControlConfig.podSecurityContext.sysctls[*].name

Type: string (required)

Name of a property to set.

.spec.cruiseControlConfig.podSecurityContext.sysctls[*].value

Type: string (required)

Value of a property to set.

.spec.cruiseControlConfig.podSecurityContext.windowsOptions

Type: object

The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.

.spec.cruiseControlConfig.podSecurityContext.windowsOptions.gmsaCredentialSpec

Type: string

GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field.

.spec.cruiseControlConfig.podSecurityContext.windowsOptions.gmsaCredentialSpecName

Type: string

GMSACredentialSpecName is the name of the GMSA credential spec to use.

.spec.cruiseControlConfig.podSecurityContext.windowsOptions.hostProcess

Type: boolean

HostProcess determines if a container should be run as a "Host Process" container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.

.spec.cruiseControlConfig.podSecurityContext.windowsOptions.runAsUserName

Type: string

The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
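
A pod-level security context for the Cruise Control pod is set in one block under cruiseControlConfig. The following is an illustrative sketch; the UID/GID values are hypothetical examples, not recommended or default values.

```yaml
spec:
  cruiseControlConfig:
    podSecurityContext:
      runAsNonRoot: true
      runAsUser: 1001          # illustrative UID
      runAsGroup: 1001         # illustrative GID
      fsGroup: 1001            # supported volumes become group-owned by this GID
      seccompProfile:
        type: RuntimeDefault   # use the container runtime's default seccomp profile
```

Any of these fields set in a container-level securityContext override the pod-level values for that container.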

.spec.cruiseControlConfig.priorityClassName

Type: string

PriorityClassName specifies the priority class name for the CruiseControl pod. If specified, the PriorityClass resource with this PriorityClassName must be created beforehand. If not specified, the CruiseControl pod's priority defaults to zero.

.spec.cruiseControlConfig.resourceRequirements

Type: object

ResourceRequirements describes the compute resource requirements.

.spec.cruiseControlConfig.resourceRequirements.limits

Type: object

Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

.spec.cruiseControlConfig.resourceRequirements.requests

Type: object

Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
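
The requests/limits split above follows the standard Kubernetes resource model. An illustrative sketch (the quantities are hypothetical examples, not sizing guidance for Cruise Control):

```yaml
spec:
  cruiseControlConfig:
    resourceRequirements:
      requests:          # guaranteed minimum, used by the scheduler
        cpu: 500m
        memory: 1Gi
      limits:            # hard cap enforced by the kubelet
        cpu: "2"
        memory: 4Gi
```

If requests is omitted while limits is set, requests default to the limits.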

.spec.cruiseControlConfig.securityContext

+
+
+
+object + +
+ +
+

SecurityContext allows to set security context for the CruiseControl container

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.securityContext.allowPrivilegeEscalation

+
+
+
+boolean + +
+ +
+

AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.securityContext.capabilities

+
+
+
+object + +
+ +
+

The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.securityContext.capabilities.add

+
+
+
+array + +
+ +
+

Added capabilities

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.securityContext.capabilities.add[*]

+
+
+
+string + +
+ +
+

Capability represent POSIX capabilities type

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.securityContext.capabilities.drop

+
+
+
+array + +
+ +
+

Removed capabilities

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.securityContext.capabilities.drop[*]

+
+
+
+string + +
+ +
+

Capability represent POSIX capabilities type

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.securityContext.privileged

+
+
+
+boolean + +
+ +
+

Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.securityContext.procMount

+
+
+
+string + +
+ +
+

procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.securityContext.readOnlyRootFilesystem

+
+
+
+boolean + +
+ +
+

Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.securityContext.runAsGroup

+
+
+
+integer + +
+ +
+

The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.securityContext.runAsNonRoot

+
+
+
+boolean + +
+ +
+

Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.securityContext.runAsUser

+
+
+
+integer + +
+ +
+

The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.securityContext.seLinuxOptions

+
+
+
+object + +
+ +
+

The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.

`.spec.cruiseControlConfig.securityContext.seLinuxOptions.level` (string)

Level is SELinux level label that applies to the container.

`.spec.cruiseControlConfig.securityContext.seLinuxOptions.role` (string)

Role is a SELinux role label that applies to the container.

`.spec.cruiseControlConfig.securityContext.seLinuxOptions.type` (string)

Type is a SELinux type label that applies to the container.

`.spec.cruiseControlConfig.securityContext.seLinuxOptions.user` (string)

User is a SELinux user label that applies to the container.

`.spec.cruiseControlConfig.securityContext.seccompProfile` (object)

The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows.

`.spec.cruiseControlConfig.securityContext.seccompProfile.localhostProfile` (string)

localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost".

`.spec.cruiseControlConfig.securityContext.seccompProfile.type` (string, required)

type indicates which kind of seccomp profile will be applied. Valid options are:

- Localhost - a profile defined in a file on the node should be used.
- RuntimeDefault - the container runtime default profile should be used.
- Unconfined - no profile should be applied.
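Taken together, the securityContext fields above can be set like this (a hypothetical KafkaCluster snippet; the SELinux label value is an example, not a default):

```yaml
# Hypothetical snippet: run the Cruise Control container with the
# runtime's default seccomp profile and explicit SELinux labels.
apiVersion: kafka.banzaicloud.io/v1beta1
kind: KafkaCluster
metadata:
  name: kafka
spec:
  cruiseControlConfig:
    securityContext:
      seccompProfile:
        type: RuntimeDefault
      seLinuxOptions:
        level: "s0:c123,c456"   # example MCS label
```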

`.spec.cruiseControlConfig.securityContext.windowsOptions` (object)

The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.

`.spec.cruiseControlConfig.securityContext.windowsOptions.gmsaCredentialSpec` (string)

GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field.

`.spec.cruiseControlConfig.securityContext.windowsOptions.gmsaCredentialSpecName` (string)

GMSACredentialSpecName is the name of the GMSA credential spec to use.

`.spec.cruiseControlConfig.securityContext.windowsOptions.hostProcess` (boolean)

HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.

`.spec.cruiseControlConfig.securityContext.windowsOptions.runAsUserName` (string)

The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.

`.spec.cruiseControlConfig.serviceAccountName` (string)

`.spec.cruiseControlConfig.tolerations` (array)

`.spec.cruiseControlConfig.tolerations[*]` (object)

The pod this Toleration is attached to tolerates any taint that matches the triple `<key,value,effect>` using the matching operator `<operator>`.

`.spec.cruiseControlConfig.tolerations[*].effect` (string)

Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.

`.spec.cruiseControlConfig.tolerations[*].key` (string)

Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.

`.spec.cruiseControlConfig.tolerations[*].operator` (string)

Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.

`.spec.cruiseControlConfig.tolerations[*].tolerationSeconds` (integer)

TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.

`.spec.cruiseControlConfig.tolerations[*].value` (string)

Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.
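The toleration fields above combine as follows (a hypothetical snippet; the taint keys and values are examples):

```yaml
# Hypothetical snippet: let the Cruise Control pod be scheduled onto
# nodes tainted with "dedicated=kafka:NoSchedule".
spec:
  cruiseControlConfig:
    tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "kafka"
        effect: "NoSchedule"
      # Tolerate an unreachable node for 5 minutes before eviction.
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 300
```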

`.spec.cruiseControlConfig.topicConfig` (object)

TopicConfig holds info for topic configuration regarding partitions and replicationFactor.

`.spec.cruiseControlConfig.topicConfig.partitions` (integer, required)

`.spec.cruiseControlConfig.topicConfig.replicationFactor` (integer, required)
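For example, to size the Cruise Control topic (a hypothetical snippet; the partition and replication values are illustrative):

```yaml
# Hypothetical snippet: both topicConfig fields are required when
# the topicConfig object is set.
spec:
  cruiseControlConfig:
    topicConfig:
      partitions: 12
      replicationFactor: 3
```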

`.spec.cruiseControlConfig.volumeMounts` (array)

VolumeMounts define some extra Kubernetes Volume mounts for the CruiseControl Pods.

`.spec.cruiseControlConfig.volumeMounts[*]` (object)

VolumeMount describes a mounting of a Volume within a container.

`.spec.cruiseControlConfig.volumeMounts[*].mountPath` (string, required)

Path within the container at which the volume should be mounted. Must not contain ':'.

`.spec.cruiseControlConfig.volumeMounts[*].mountPropagation` (string)

mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.

`.spec.cruiseControlConfig.volumeMounts[*].name` (string, required)

This must match the Name of a Volume.

`.spec.cruiseControlConfig.volumeMounts[*].readOnly` (boolean)

Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.

`.spec.cruiseControlConfig.volumeMounts[*].subPath` (string)

Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root).

`.spec.cruiseControlConfig.volumeMounts[*].subPathExpr` (string)

Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive.
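A volumeMount is always paired with a volume of the same name. A minimal sketch (hypothetical names and sizes):

```yaml
# Hypothetical snippet: give the Cruise Control pod a scratch volume.
spec:
  cruiseControlConfig:
    volumes:
      - name: scratch
        emptyDir:
          sizeLimit: 1Gi
    volumeMounts:
      - name: scratch              # must match the volume name above
        mountPath: /tmp/cruise-control
```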

`.spec.cruiseControlConfig.volumes` (array)

Volumes define some extra Kubernetes Volumes for the CruiseControl Pods.

`.spec.cruiseControlConfig.volumes[*]` (object)

Volume represents a named volume in a pod that may be accessed by any container in the pod.

`.spec.cruiseControlConfig.volumes[*].awsElasticBlockStore` (object)

awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore

`.spec.cruiseControlConfig.volumes[*].awsElasticBlockStore.fsType` (string)

fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore

`.spec.cruiseControlConfig.volumes[*].awsElasticBlockStore.partition` (integer)

partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty).

`.spec.cruiseControlConfig.volumes[*].awsElasticBlockStore.readOnly` (boolean)

readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore

`.spec.cruiseControlConfig.volumes[*].awsElasticBlockStore.volumeID` (string, required)

volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore

`.spec.cruiseControlConfig.volumes[*].azureDisk` (object)

azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod.

`.spec.cruiseControlConfig.volumes[*].azureDisk.cachingMode` (string)

cachingMode is the Host Caching mode: None, Read Only, Read Write.

`.spec.cruiseControlConfig.volumes[*].azureDisk.diskName` (string, required)

diskName is the Name of the data disk in the blob storage.

`.spec.cruiseControlConfig.volumes[*].azureDisk.diskURI` (string, required)

diskURI is the URI of data disk in the blob storage.

`.spec.cruiseControlConfig.volumes[*].azureDisk.fsType` (string)

fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.

`.spec.cruiseControlConfig.volumes[*].azureDisk.kind` (string)

kind expected values are Shared: multiple blob disks per storage account, Dedicated: single blob disk per storage account, Managed: azure managed data disk (only in managed availability set). Defaults to shared.

`.spec.cruiseControlConfig.volumes[*].azureDisk.readOnly` (boolean)

readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

`.spec.cruiseControlConfig.volumes[*].azureFile` (object)

azureFile represents an Azure File Service mount on the host and bind mount to the pod.

`.spec.cruiseControlConfig.volumes[*].azureFile.readOnly` (boolean)

readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

`.spec.cruiseControlConfig.volumes[*].azureFile.secretName` (string, required)

secretName is the name of secret that contains Azure Storage Account Name and Key.

`.spec.cruiseControlConfig.volumes[*].azureFile.shareName` (string, required)

shareName is the azure share Name.

`.spec.cruiseControlConfig.volumes[*].cephfs` (object)

cephFS represents a Ceph FS mount on the host that shares a pod's lifetime.

`.spec.cruiseControlConfig.volumes[*].cephfs.monitors` (array, required)

monitors is Required: Monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it

`.spec.cruiseControlConfig.volumes[*].cephfs.monitors[*]` (string)

`.spec.cruiseControlConfig.volumes[*].cephfs.path` (string)

path is Optional: Used as the mounted root, rather than the full Ceph tree, default is /

`.spec.cruiseControlConfig.volumes[*].cephfs.readOnly` (boolean)

readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it

`.spec.cruiseControlConfig.volumes[*].cephfs.secretFile` (string)

secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it

`.spec.cruiseControlConfig.volumes[*].cephfs.secretRef` (object)

secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it

`.spec.cruiseControlConfig.volumes[*].cephfs.secretRef.name` (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

`.spec.cruiseControlConfig.volumes[*].cephfs.user` (string)

user is optional: User is the rados user name, default is admin. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it

`.spec.cruiseControlConfig.volumes[*].cinder` (object)

cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md

`.spec.cruiseControlConfig.volumes[*].cinder.fsType` (string)

fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md

`.spec.cruiseControlConfig.volumes[*].cinder.readOnly` (boolean)

readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md

`.spec.cruiseControlConfig.volumes[*].cinder.secretRef` (object)

secretRef is optional: points to a secret object containing parameters used to connect to OpenStack.

`.spec.cruiseControlConfig.volumes[*].cinder.secretRef.name` (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

`.spec.cruiseControlConfig.volumes[*].cinder.volumeID` (string, required)

volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md

`.spec.cruiseControlConfig.volumes[*].configMap` (object)

configMap represents a configMap that should populate this volume.

`.spec.cruiseControlConfig.volumes[*].configMap.defaultMode` (integer)

defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

`.spec.cruiseControlConfig.volumes[*].configMap.items` (array)

items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'.

`.spec.cruiseControlConfig.volumes[*].configMap.items[*]` (object)

Maps a string key to a path within a volume.

`.spec.cruiseControlConfig.volumes[*].configMap.items[*].key` (string, required)

key is the key to project.

`.spec.cruiseControlConfig.volumes[*].configMap.items[*].mode` (integer)

mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

`.spec.cruiseControlConfig.volumes[*].configMap.items[*].path` (string, required)

path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'.

`.spec.cruiseControlConfig.volumes[*].configMap.name` (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

`.spec.cruiseControlConfig.volumes[*].configMap.optional` (boolean)

optional specify whether the ConfigMap or its keys must be defined.
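The configMap volume fields above can be combined to project a single key with restrictive permissions (a hypothetical snippet; the ConfigMap name and key are examples):

```yaml
# Hypothetical snippet: project only one key of a ConfigMap into the
# volume. YAML accepts the octal form 0440 for defaultMode.
spec:
  cruiseControlConfig:
    volumes:
      - name: cc-extra-config
        configMap:
          name: cc-extra-config     # assumed ConfigMap name
          defaultMode: 0440
          optional: true
          items:
            - key: extra.properties
              path: extra.properties
```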

`.spec.cruiseControlConfig.volumes[*].csi` (object)

csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature).

`.spec.cruiseControlConfig.volumes[*].csi.driver` (string, required)

driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster.

`.spec.cruiseControlConfig.volumes[*].csi.fsType` (string)

fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply.

`.spec.cruiseControlConfig.volumes[*].csi.nodePublishSecretRef` (object)

nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed.

`.spec.cruiseControlConfig.volumes[*].csi.nodePublishSecretRef.name` (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

`.spec.cruiseControlConfig.volumes[*].csi.readOnly` (boolean)

readOnly specifies a read-only configuration for the volume. Defaults to false (read/write).

`.spec.cruiseControlConfig.volumes[*].csi.volumeAttributes` (object)

volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values.

`.spec.cruiseControlConfig.volumes[*].downwardAPI` (object)

downwardAPI represents downward API about the pod that should populate this volume.

`.spec.cruiseControlConfig.volumes[*].downwardAPI.defaultMode` (integer)

Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

`.spec.cruiseControlConfig.volumes[*].downwardAPI.items` (array)

Items is a list of downward API volume files.

`.spec.cruiseControlConfig.volumes[*].downwardAPI.items[*]` (object)

DownwardAPIVolumeFile represents information to create the file containing the pod field.

`.spec.cruiseControlConfig.volumes[*].downwardAPI.items[*].fieldRef` (object)

Required: Selects a field of the pod: only annotations, labels, name and namespace are supported.

`.spec.cruiseControlConfig.volumes[*].downwardAPI.items[*].fieldRef.apiVersion` (string)

Version of the schema the FieldPath is written in terms of, defaults to "v1".

`.spec.cruiseControlConfig.volumes[*].downwardAPI.items[*].fieldRef.fieldPath` (string, required)

Path of the field to select in the specified API version.

`.spec.cruiseControlConfig.volumes[*].downwardAPI.items[*].mode` (integer)

Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

`.spec.cruiseControlConfig.volumes[*].downwardAPI.items[*].path` (string, required)

Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..'.

`.spec.cruiseControlConfig.volumes[*].downwardAPI.items[*].resourceFieldRef` (object)

Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.

`.spec.cruiseControlConfig.volumes[*].downwardAPI.items[*].resourceFieldRef.containerName` (string)

Container name: required for volumes, optional for env vars.

`.spec.cruiseControlConfig.volumes[*].downwardAPI.items[*].resourceFieldRef.divisor`

Specifies the output format of the exposed resources, defaults to "1".

`.spec.cruiseControlConfig.volumes[*].downwardAPI.items[*].resourceFieldRef.resource` (string, required)

Required: resource to select.
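As a sketch of the downward API fields above (hypothetical snippet; the container name is an assumption):

```yaml
# Hypothetical snippet: expose the pod's labels and a container's
# memory limit as files inside the volume.
spec:
  cruiseControlConfig:
    volumes:
      - name: podinfo
        downwardAPI:
          items:
            - path: labels
              fieldRef:
                fieldPath: metadata.labels
            - path: mem_limit
              resourceFieldRef:
                containerName: cruise-control   # assumed container name
                resource: limits.memory
                divisor: 1Mi
```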

`.spec.cruiseControlConfig.volumes[*].emptyDir` (object)

emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir

`.spec.cruiseControlConfig.volumes[*].emptyDir.medium` (string)

medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir

`.spec.cruiseControlConfig.volumes[*].emptyDir.sizeLimit`

sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir

.spec.cruiseControlConfig.volumes[*].ephemeral

+
+
+
+object + +
+ +
+

ephemeral represents a volume that is handled by a cluster storage driver. The volume’s lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. + Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). + Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. + Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. + A pod can use both types of ephemeral volumes and persistent volumes at the same time.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate

+
+
+
+object + +
+ +
+

Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). + An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. + This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. + Required, must not be nil.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.metadata

+
+
+
+object + +
+ +
+

May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec

+
+
+
+object +Required +
+ +
+

The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.accessModes

+
+
+
+array + +
+ +
+

accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.accessModes[*]

+
+
+
+string + +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.dataSource

+
+
+
+object + +
+ +
+

dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.dataSource.apiGroup

+
+
+
+string + +
+ +
+

APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.dataSource.kind

+
+
+
+string +Required +
+ +
+

Kind is the type of resource being referenced

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.dataSource.name

+
+
+
+string +Required +
+ +
+

Name is the name of resource being referenced

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.dataSourceRef

+
+
+
+object + +
+ +
+

dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.dataSourceRef.apiGroup

+
+
+
+string + +
+ +
+

APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.dataSourceRef.kind

+
+
+
+string +Required +
+ +
+

Kind is the type of resource being referenced

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.dataSourceRef.name

+
+
+
+string +Required +
+ +
+

Name is the name of resource being referenced

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.resources

+
+
+
+object + +
+ +
+

resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.resources.limits

+
+
+
+object + +
+ +
+

Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.resources.requests

+
+
+
+object + +
+ +
+

Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.selector

+
+
+
+object + +
+ +
+

selector is a label query over volumes to consider for binding.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions

+
+
+
+array + +
+ +
+

matchExpressions is a list of label selector requirements. The requirements are ANDed.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[*]

+
+
+
+object + +
+ +
+

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[*].key

+
+
+
+string +Required +
+ +
+

key is the label key that the selector applies to.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[*].operator

+
+
+
+string +Required +
+ +
+

operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[*].values

Type: array

values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[*].values[*]

Type: string

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.selector.matchLabels

Type: object

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.storageClassName

Type: string

storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.volumeMode

Type: string

volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec.

.spec.cruiseControlConfig.volumes[*].ephemeral.volumeClaimTemplate.spec.volumeName

Type: string

volumeName is the binding reference to the PersistentVolume backing this claim.

.spec.cruiseControlConfig.volumes[*].fc

Type: object

fc represents a Fibre Channel resource that is attached to a kubelet’s host machine and then exposed to the pod.

.spec.cruiseControlConfig.volumes[*].fc.fsType

Type: string

fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified.

.spec.cruiseControlConfig.volumes[*].fc.lun

Type: integer

lun is Optional: FC target lun number

.spec.cruiseControlConfig.volumes[*].fc.readOnly

Type: boolean

readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

.spec.cruiseControlConfig.volumes[*].fc.targetWWNs

Type: array

targetWWNs is Optional: FC target worldwide names (WWNs)

.spec.cruiseControlConfig.volumes[*].fc.targetWWNs[*]

Type: string

.spec.cruiseControlConfig.volumes[*].fc.wwids

Type: array

wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously.

.spec.cruiseControlConfig.volumes[*].fc.wwids[*]

Type: string

.spec.cruiseControlConfig.volumes[*].flexVolume

Type: object

flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin.

.spec.cruiseControlConfig.volumes[*].flexVolume.driver

Type: string (required)

driver is the name of the driver to use for this volume.

.spec.cruiseControlConfig.volumes[*].flexVolume.fsType

Type: string

fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. The default filesystem depends on FlexVolume script.

.spec.cruiseControlConfig.volumes[*].flexVolume.options

Type: object

options is Optional: this field holds extra command options if any.

.spec.cruiseControlConfig.volumes[*].flexVolume.readOnly

Type: boolean

readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

.spec.cruiseControlConfig.volumes[*].flexVolume.secretRef

Type: object

secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts.

.spec.cruiseControlConfig.volumes[*].flexVolume.secretRef.name

Type: string

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

.spec.cruiseControlConfig.volumes[*].flocker

Type: object

flocker represents a Flocker volume attached to a kubelet’s host machine. This depends on the Flocker control service being running

.spec.cruiseControlConfig.volumes[*].flocker.datasetName

Type: string

datasetName is the name of the dataset, stored as the metadata name on the Flocker dataset. It should be considered deprecated.

.spec.cruiseControlConfig.volumes[*].flocker.datasetUUID

Type: string

datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset

.spec.cruiseControlConfig.volumes[*].gcePersistentDisk

Type: object

gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet’s host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

.spec.cruiseControlConfig.volumes[*].gcePersistentDisk.fsType

Type: string

fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

.spec.cruiseControlConfig.volumes[*].gcePersistentDisk.partition

Type: integer

partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as “1”. Similarly, the volume partition for /dev/sda is “0” (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

.spec.cruiseControlConfig.volumes[*].gcePersistentDisk.pdName

Type: string (required)

pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

.spec.cruiseControlConfig.volumes[*].gcePersistentDisk.readOnly

Type: boolean

readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

.spec.cruiseControlConfig.volumes[*].gitRepo

Type: object

gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod’s container.

.spec.cruiseControlConfig.volumes[*].gitRepo.directory

Type: string

directory is the target directory name. Must not contain or start with ‘..’. If ‘.’ is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name.

.spec.cruiseControlConfig.volumes[*].gitRepo.repository

Type: string (required)

repository is the URL

.spec.cruiseControlConfig.volumes[*].gitRepo.revision

Type: string

revision is the commit hash for the specified revision.

.spec.cruiseControlConfig.volumes[*].glusterfs

Type: object

glusterfs represents a Glusterfs mount on the host that shares a pod’s lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md

.spec.cruiseControlConfig.volumes[*].glusterfs.endpoints

Type: string (required)

endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod

.spec.cruiseControlConfig.volumes[*].glusterfs.path

Type: string (required)

path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod

.spec.cruiseControlConfig.volumes[*].glusterfs.readOnly

Type: boolean

readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod

.spec.cruiseControlConfig.volumes[*].hostPath

Type: object

hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath

.spec.cruiseControlConfig.volumes[*].hostPath.path

Type: string (required)

path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath

.spec.cruiseControlConfig.volumes[*].hostPath.type

Type: string

type for HostPath volume. Defaults to “”. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath

.spec.cruiseControlConfig.volumes[*].iscsi

Type: object

iscsi represents an ISCSI Disk resource that is attached to a kubelet’s host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md

.spec.cruiseControlConfig.volumes[*].iscsi.chapAuthDiscovery

Type: boolean

chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication

.spec.cruiseControlConfig.volumes[*].iscsi.chapAuthSession

Type: boolean

chapAuthSession defines whether support iSCSI Session CHAP authentication

.spec.cruiseControlConfig.volumes[*].iscsi.fsType

Type: string

fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi

.spec.cruiseControlConfig.volumes[*].iscsi.initiatorName

Type: string

initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface : will be created for the connection.

.spec.cruiseControlConfig.volumes[*].iscsi.iqn

Type: string (required)

iqn is the target iSCSI Qualified Name.

.spec.cruiseControlConfig.volumes[*].iscsi.iscsiInterface

Type: string

iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to ‘default’ (tcp).

.spec.cruiseControlConfig.volumes[*].iscsi.lun

Type: integer (required)

lun represents iSCSI Target Lun number.

.spec.cruiseControlConfig.volumes[*].iscsi.portals

Type: array

portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).

.spec.cruiseControlConfig.volumes[*].iscsi.portals[*]

Type: string

.spec.cruiseControlConfig.volumes[*].iscsi.readOnly

Type: boolean

readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false.

.spec.cruiseControlConfig.volumes[*].iscsi.secretRef

Type: object

secretRef is the CHAP Secret for iSCSI target and initiator authentication

.spec.cruiseControlConfig.volumes[*].iscsi.secretRef.name

Type: string

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

.spec.cruiseControlConfig.volumes[*].iscsi.targetPortal

Type: string (required)

targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).

.spec.cruiseControlConfig.volumes[*].name

Type: string (required)

name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

.spec.cruiseControlConfig.volumes[*].nfs

Type: object

nfs represents an NFS mount on the host that shares a pod’s lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

.spec.cruiseControlConfig.volumes[*].nfs.path

Type: string (required)

path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

.spec.cruiseControlConfig.volumes[*].nfs.readOnly

Type: boolean

readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

.spec.cruiseControlConfig.volumes[*].nfs.server

Type: string (required)

server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

.spec.cruiseControlConfig.volumes[*].persistentVolumeClaim

Type: object

persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

.spec.cruiseControlConfig.volumes[*].persistentVolumeClaim.claimName

Type: string (required)

claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

.spec.cruiseControlConfig.volumes[*].persistentVolumeClaim.readOnly

Type: boolean

readOnly Will force the ReadOnly setting in VolumeMounts. Default false.

.spec.cruiseControlConfig.volumes[*].photonPersistentDisk

Type: object

photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine

.spec.cruiseControlConfig.volumes[*].photonPersistentDisk.fsType

Type: string

fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified.

.spec.cruiseControlConfig.volumes[*].photonPersistentDisk.pdID

Type: string (required)

pdID is the ID that identifies Photon Controller persistent disk

.spec.cruiseControlConfig.volumes[*].portworxVolume

Type: object

portworxVolume represents a portworx volume attached and mounted on kubelets host machine

.spec.cruiseControlConfig.volumes[*].portworxVolume.fsType

Type: string

fsType represents the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”. Implicitly inferred to be “ext4” if unspecified.

.spec.cruiseControlConfig.volumes[*].portworxVolume.readOnly

Type: boolean

readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

.spec.cruiseControlConfig.volumes[*].portworxVolume.volumeID

Type: string (required)

volumeID uniquely identifies a Portworx volume

.spec.cruiseControlConfig.volumes[*].projected

Type: object

projected items for all-in-one resources: secrets, configmaps, and downward API

.spec.cruiseControlConfig.volumes[*].projected.defaultMode

Type: integer

defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

.spec.cruiseControlConfig.volumes[*].projected.sources

Type: array

sources is the list of volume projections

.spec.cruiseControlConfig.volumes[*].projected.sources[*]

Type: object

Projection that may be projected along with other supported volume types

.spec.cruiseControlConfig.volumes[*].projected.sources[*].configMap

Type: object

configMap information about the configMap data to project

.spec.cruiseControlConfig.volumes[*].projected.sources[*].configMap.items

Type: array

items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the ‘..’ path or start with ‘..’.

.spec.cruiseControlConfig.volumes[*].projected.sources[*].configMap.items[*]

Type: object

Maps a string key to a path within a volume.

.spec.cruiseControlConfig.volumes[*].projected.sources[*].configMap.items[*].key

Type: string (required)

key is the key to project.

.spec.cruiseControlConfig.volumes[*].projected.sources[*].configMap.items[*].mode

Type: integer

mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

.spec.cruiseControlConfig.volumes[*].projected.sources[*].configMap.items[*].path

Type: string (required)

path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element ‘..’. May not start with the string ‘..’.

.spec.cruiseControlConfig.volumes[*].projected.sources[*].configMap.name

Type: string

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

.spec.cruiseControlConfig.volumes[*].projected.sources[*].configMap.optional

Type: boolean

optional specify whether the ConfigMap or its keys must be defined

.spec.cruiseControlConfig.volumes[*].projected.sources[*].downwardAPI

Type: object

downwardAPI information about the downwardAPI data to project

.spec.cruiseControlConfig.volumes[*].projected.sources[*].downwardAPI.items

Type: array

Items is a list of DownwardAPIVolume file

.spec.cruiseControlConfig.volumes[*].projected.sources[*].downwardAPI.items[*]

Type: object

DownwardAPIVolumeFile represents information to create the file containing the pod field

.spec.cruiseControlConfig.volumes[*].projected.sources[*].downwardAPI.items[*].fieldRef

Type: object

Required: Selects a field of the pod: only annotations, labels, name and namespace are supported.

.spec.cruiseControlConfig.volumes[*].projected.sources[*].downwardAPI.items[*].fieldRef.apiVersion

Type: string

Version of the schema the FieldPath is written in terms of, defaults to “v1”.

.spec.cruiseControlConfig.volumes[*].projected.sources[*].downwardAPI.items[*].fieldRef.fieldPath

Type: string (required)

Path of the field to select in the specified API version.

.spec.cruiseControlConfig.volumes[*].projected.sources[*].downwardAPI.items[*].mode

Type: integer

Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

.spec.cruiseControlConfig.volumes[*].projected.sources[*].downwardAPI.items[*].path

Type: string (required)

Required: Path is the relative path name of the file to be created. Must not be absolute or contain the ‘..’ path. Must be utf-8 encoded. The first item of the relative path must not start with ‘..’

.spec.cruiseControlConfig.volumes[*].projected.sources[*].downwardAPI.items[*].resourceFieldRef

Type: object

Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.

.spec.cruiseControlConfig.volumes[*].projected.sources[*].downwardAPI.items[*].resourceFieldRef.containerName

Type: string

Container name: required for volumes, optional for env vars

.spec.cruiseControlConfig.volumes[*].projected.sources[*].downwardAPI.items[*].resourceFieldRef.divisor

Specifies the output format of the exposed resources, defaults to “1”

.spec.cruiseControlConfig.volumes[*].projected.sources[*].downwardAPI.items[*].resourceFieldRef.resource

Type: string (required)

Required: resource to select

.spec.cruiseControlConfig.volumes[*].projected.sources[*].secret

Type: object

secret information about the secret data to project

.spec.cruiseControlConfig.volumes[*].projected.sources[*].secret.items

Type: array

items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the ‘..’ path or start with ‘..’.

.spec.cruiseControlConfig.volumes[*].projected.sources[*].secret.items[*]

Type: object

Maps a string key to a path within a volume.

.spec.cruiseControlConfig.volumes[*].projected.sources[*].secret.items[*].key

Type: string (required)

key is the key to project.

.spec.cruiseControlConfig.volumes[*].projected.sources[*].secret.items[*].mode

Type: integer

mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

.spec.cruiseControlConfig.volumes[*].projected.sources[*].secret.items[*].path

Type: string (required)

path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element ‘..’. May not start with the string ‘..’.

.spec.cruiseControlConfig.volumes[*].projected.sources[*].secret.name

Type: string

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

.spec.cruiseControlConfig.volumes[*].projected.sources[*].secret.optional

Type: boolean

optional field specify whether the Secret or its key must be defined

.spec.cruiseControlConfig.volumes[*].projected.sources[*].serviceAccountToken

Type: object

serviceAccountToken is information about the serviceAccountToken data to project

.spec.cruiseControlConfig.volumes[*].projected.sources[*].serviceAccountToken.audience

Type: string

audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver.

.spec.cruiseControlConfig.volumes[*].projected.sources[*].serviceAccountToken.expirationSeconds

Type: integer

expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes.

.spec.cruiseControlConfig.volumes[*].projected.sources[*].serviceAccountToken.path

Type: string (required)

path is the path relative to the mount point of the file to project the token into.

.spec.cruiseControlConfig.volumes[*].quobyte

Type: object

quobyte represents a Quobyte mount on the host that shares a pod’s lifetime

.spec.cruiseControlConfig.volumes[*].quobyte.group

Type: string

group to map volume access to. Default is no group.

.spec.cruiseControlConfig.volumes[*].quobyte.readOnly

Type: boolean

readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false.

.spec.cruiseControlConfig.volumes[*].quobyte.registry

Type: string (required)

registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes

.spec.cruiseControlConfig.volumes[*].quobyte.tenant

Type: string

tenant owning the given Quobyte volume in the backend. Used with dynamically provisioned Quobyte volumes; the value is set by the plugin.

.spec.cruiseControlConfig.volumes[*].quobyte.user

Type: string

user to map volume access to. Defaults to the service account user.

.spec.cruiseControlConfig.volumes[*].quobyte.volume

Type: string (required)

volume is a string that references an already created Quobyte volume by name.

.spec.cruiseControlConfig.volumes[*].rbd

Type: object

rbd represents a Rados Block Device mount on the host that shares a pod’s lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md

.spec.cruiseControlConfig.volumes[*].rbd.fsType

Type: string

fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd

.spec.cruiseControlConfig.volumes[*].rbd.image

Type: string (required)

image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

.spec.cruiseControlConfig.volumes[*].rbd.keyring

Type: string

keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

.spec.cruiseControlConfig.volumes[*].rbd.monitors

Type: array (required)

monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

.spec.cruiseControlConfig.volumes[*].rbd.monitors[*]

Type: string

.spec.cruiseControlConfig.volumes[*].rbd.pool

Type: string

pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

.spec.cruiseControlConfig.volumes[*].rbd.readOnly

Type: boolean

readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

.spec.cruiseControlConfig.volumes[*].rbd.secretRef

Type: object

secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

.spec.cruiseControlConfig.volumes[*].rbd.secretRef.name

Type: string

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

.spec.cruiseControlConfig.volumes[*].rbd.user

Type: string

user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

.spec.cruiseControlConfig.volumes[*].scaleIO

Type: object

scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes.

.spec.cruiseControlConfig.volumes[*].scaleIO.fsType

Type: string

fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. Default is “xfs”.

.spec.cruiseControlConfig.volumes[*].scaleIO.gateway

Type: string (required)

gateway is the host address of the ScaleIO API Gateway.

.spec.cruiseControlConfig.volumes[*].scaleIO.protectionDomain

Type: string

protectionDomain is the name of the ScaleIO Protection Domain for the configured storage.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].scaleIO.readOnly

+
+
+
+boolean + +
+ +
+

readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].scaleIO.secretRef

+
+
+
+object +Required +
+ +
+

secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].scaleIO.secretRef.name

+
+
+
+string + +
+ +
+

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].scaleIO.sslEnabled

+
+
+
+boolean + +
+ +
+

sslEnabled Flag enable/disable SSL communication with Gateway, default false

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].scaleIO.storageMode

+
+
+
+string + +
+ +
+

storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].scaleIO.storagePool

+
+
+
+string + +
+ +
+

storagePool is the ScaleIO Storage Pool associated with the protection domain.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].scaleIO.system

+
+
+
+string +Required +
+ +
+

system is the name of the storage system as configured in ScaleIO.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].scaleIO.volumeName

+
+
+
+string + +
+ +
+

volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].secret

+
+
+
+object + +
+ +
+

secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].secret.defaultMode

+
+
+
+integer + +
+ +
+

defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].secret.items

+
+
+
+array + +
+ +
+

items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the ‘..’ path or start with ‘..’.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].secret.items[*]

+
+
+
+object + +
+ +
+

Maps a string key to a path within a volume.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].secret.items[*].key

+
+
+
+string +Required +
+ +
+

key is the key to project.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].secret.items[*].mode

+
+
+
+integer + +
+ +
+

mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].secret.items[*].path

+
+
+
+string +Required +
+ +
+

path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element ‘..’. May not start with the string ‘..’.

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].secret.optional

+
+
+
+boolean + +
+ +
+

optional field specify whether the Secret or its keys must be defined

+ +
+ +
+
+ +
+
+

.spec.cruiseControlConfig.volumes[*].secret.secretName

+
+
+
+string + +
+ +
+

secretName is the name of the secret in the pod’s namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret

+ +
+ +
+
+ +
+
+
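The secret volume fields above can be combined into a short stanza under `cruiseControlConfig`. A minimal sketch follows; the volume name, Secret name, item key, and target path are illustrative, not part of the API:

```yaml
# Hypothetical KafkaCluster excerpt: project one key of a Secret into
# the Cruise Control pod as a file. All names below are made up.
spec:
  cruiseControlConfig:
    volumes:
      - name: extra-config
        secret:
          secretName: cc-extra-config     # Secret in the pod's namespace
          defaultMode: 0644               # YAML accepts octal mode bits
          optional: false                 # volume setup fails if the Secret is missing
          items:
            - key: capacity.json          # key in the Secret's Data field
              path: config/capacity.json  # relative path inside the volume
```

Keys not listed under `items` are omitted from the volume; leaving `items` out entirely projects every key as a file named after the key.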

### .spec.cruiseControlConfig.volumes[*].storageos

Type: object

storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes.

### .spec.cruiseControlConfig.volumes[*].storageos.fsType

Type: string

fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.

### .spec.cruiseControlConfig.volumes[*].storageos.readOnly

Type: boolean

readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

### .spec.cruiseControlConfig.volumes[*].storageos.secretRef

Type: object

secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted.

### .spec.cruiseControlConfig.volumes[*].storageos.secretRef.name

Type: string

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

### .spec.cruiseControlConfig.volumes[*].storageos.volumeName

Type: string

volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace.

### .spec.cruiseControlConfig.volumes[*].storageos.volumeNamespace

Type: string

volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created.

### .spec.cruiseControlConfig.volumes[*].vsphereVolume

Type: object

vsphereVolume represents a vSphere volume attached and mounted on the kubelet's host machine.

### .spec.cruiseControlConfig.volumes[*].vsphereVolume.fsType

Type: string

fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.

### .spec.cruiseControlConfig.volumes[*].vsphereVolume.storagePolicyID

Type: string

storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName.

### .spec.cruiseControlConfig.volumes[*].vsphereVolume.storagePolicyName

Type: string

storagePolicyName is the storage Policy Based Management (SPBM) profile name.

### .spec.cruiseControlConfig.volumes[*].vsphereVolume.volumePath

Type: string (required)

volumePath is the path that identifies the vSphere volume vmdk.

### .spec.disruptionBudget

Type: object

DisruptionBudget defines the configuration for the PodDisruptionBudget where the workload is managed by the kafka-operator.

### .spec.disruptionBudget.budget

Type: string

The budget to set for the PDB; can be either a static number or a percentage.

### .spec.disruptionBudget.create

Type: boolean

If set to true, a PodDisruptionBudget will be created.
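The two `disruptionBudget` fields above translate to a very small stanza in the `KafkaCluster` spec. A minimal sketch, with an illustrative budget value:

```yaml
# Hypothetical KafkaCluster excerpt: ask the operator to create a
# PodDisruptionBudget for the workload it manages.
spec:
  disruptionBudget:
    create: true    # create the PodDisruptionBudget
    budget: "20%"   # either a static number ("2") or a percentage ("20%")
```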

### .spec.envoyConfig

Type: object

EnvoyConfig defines the config for Envoy.

### .spec.envoyConfig.adminPort

Type: integer

Envoy admin port.

### .spec.envoyConfig.affinity

Type: object

Affinity is a group of affinity scheduling rules.

### .spec.envoyConfig.affinity.nodeAffinity

Type: object

Describes node affinity scheduling rules for the pod.

### .spec.envoyConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution

Type: array

The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred.

### .spec.envoyConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*]

Type: object

An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op).

### .spec.envoyConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference

Type: object (required)

A node selector term, associated with the corresponding weight.

### .spec.envoyConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchExpressions

Type: array

A list of node selector requirements by node's labels.

### .spec.envoyConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchExpressions[*]

Type: object

A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

### .spec.envoyConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchExpressions[*].key

Type: string (required)

The label key that the selector applies to.

### .spec.envoyConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchExpressions[*].operator

Type: string (required)

Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.

### .spec.envoyConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchExpressions[*].values

Type: array

An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.

### .spec.envoyConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchExpressions[*].values[*]

Type: string

### .spec.envoyConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchFields

Type: array

A list of node selector requirements by node's fields.

### .spec.envoyConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchFields[*]

Type: object

A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

### .spec.envoyConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchFields[*].key

Type: string (required)

The label key that the selector applies to.

### .spec.envoyConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchFields[*].operator

Type: string (required)

Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.

### .spec.envoyConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchFields[*].values

Type: array

An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.

### .spec.envoyConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].preference.matchFields[*].values[*]

Type: string

### .spec.envoyConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].weight

Type: integer (required)

Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.

### .spec.envoyConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution

Type: object

If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node.

### .spec.envoyConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms

Type: array (required)

Required. A list of node selector terms. The terms are ORed.

### .spec.envoyConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*]

Type: object

A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.

### .spec.envoyConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchExpressions

Type: array

A list of node selector requirements by node's labels.

### .spec.envoyConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchExpressions[*]

Type: object

A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

### .spec.envoyConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchExpressions[*].key

Type: string (required)

The label key that the selector applies to.

### .spec.envoyConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchExpressions[*].operator

Type: string (required)

Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.

### .spec.envoyConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchExpressions[*].values

Type: array

An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.

### .spec.envoyConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchExpressions[*].values[*]

Type: string

### .spec.envoyConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchFields

Type: array

A list of node selector requirements by node's fields.

### .spec.envoyConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchFields[*]

Type: object

A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

### .spec.envoyConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchFields[*].key

Type: string (required)

The label key that the selector applies to.

### .spec.envoyConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchFields[*].operator

Type: string (required)

Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.

### .spec.envoyConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchFields[*].values

Type: array

An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.

### .spec.envoyConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[*].matchFields[*].values[*]

Type: string
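The nodeAffinity fields above combine as follows in practice. A minimal sketch for the Envoy pods; the label keys and the zone value are illustrative, not part of the API:

```yaml
# Hypothetical KafkaCluster excerpt: require amd64 nodes for Envoy, and
# prefer (but do not require) a specific zone. Values are illustrative.
spec:
  envoyConfig:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:              # terms are ORed
            - matchExpressions:           # requirements within a term are ANDed
                - key: kubernetes.io/arch
                  operator: In
                  values: ["amd64"]
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 50                    # in the range 1-100
            preference:
              matchExpressions:
                - key: topology.kubernetes.io/zone
                  operator: In
                  values: ["zone-a"]      # illustrative zone name
```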

.spec.envoyConfig.affinity.podAffinity

+
+
+
+object + +
+ +
+

Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution

+
+
+
+array + +
+ +
+

The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding “weight” to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*]

+
+
+
+object + +
+ +
+

The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm

+
+
+
+object +Required +
+ +
+

Required. A pod affinity term, associated with the corresponding weight.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector

+
+
+
+object + +
+ +
+

A label query over a set of resources, in this case pods.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions

+
+
+
+array + +
+ +
+

matchExpressions is a list of label selector requirements. The requirements are ANDed.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions[*]

+
+
+
+object + +
+ +
+

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions[*].key

+
+
+
+string +Required +
+ +
+

key is the label key that the selector applies to.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions[*].operator

+
+
+
+string +Required +
+ +
+

operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions[*].values

+
+
+
+array + +
+ +
+

values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions[*].values[*]

+
+
+
+string + +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchLabels

+
+
+
+object + +
+ +
+

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector

+
+
+
+object + +
+ +
+

A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means “this pod’s namespace”. An empty selector ({}) matches all namespaces.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions

+
+
+
+array + +
+ +
+

matchExpressions is a list of label selector requirements. The requirements are ANDed.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions[*]

+
+
+
+object + +
+ +
+

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions[*].key

+
+
+
+string +Required +
+ +
+

key is the label key that the selector applies to.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions[*].operator

+
+
+
+string +Required +
+ +
+

operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions[*].values

+
+
+
+array + +
+ +
+

values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions[*].values[*]

+
+
+
+string + +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchLabels

+
+
+
+object + +
+ +
+

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaces

+
+
+
+array + +
+ +
+

namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means “this pod’s namespace”.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaces[*]

+
+
+
+string + +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.topologyKey

+
+
+
+string +Required +
+ +
+

This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].weight

+
+
+
+integer +Required +
+ +
+

weight associated with matching the corresponding podAffinityTerm, in the range 1-100.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution

+
+
+
+array + +
+ +
+

If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*]

+
+
+
+object + +
+ +
+

Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector

+
+
+
+object + +
+ +
+

A label query over a set of resources, in this case pods.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions

+
+
+
+array + +
+ +
+

matchExpressions is a list of label selector requirements. The requirements are ANDed.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions[*]

+
+
+
+object + +
+ +
+

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions[*].key

+
+
+
+string +Required +
+ +
+

key is the label key that the selector applies to.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions[*].operator

+
+
+
+string +Required +
+ +
+

operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions[*].values

+
+
+
+array + +
+ +
+

values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions[*].values[*]

+
+
+
+string + +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchLabels

+
+
+
+object + +
+ +
+

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector

+
+
+
+object + +
+ +
+

A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means “this pod’s namespace”. An empty selector ({}) matches all namespaces.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions

+
+
+
+array + +
+ +
+

matchExpressions is a list of label selector requirements. The requirements are ANDed.

+ +
+ +
+
+ +
+
+

.spec.envoyConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions[*]

+
+
+
+object + +
+ +
+

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

.spec.envoyConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions[*].key (string, required)

key is the label key that the selector applies to.

.spec.envoyConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions[*].operator (string, required)

operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

.spec.envoyConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions[*].values (array)

values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

.spec.envoyConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions[*].values[*] (string)

.spec.envoyConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchLabels (object)

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.

.spec.envoyConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaces (array)

namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".

.spec.envoyConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaces[*] (string)

.spec.envoyConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].topologyKey (string, required)

This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.

.spec.envoyConfig.affinity.podAntiAffinity (object)

Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)).

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution (array)

The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which match the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*] (object)

The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s).

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm (object, required)

Required. A pod affinity term, associated with the corresponding weight.

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector (object)

A label query over a set of resources, in this case pods.

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions (array)

matchExpressions is a list of label selector requirements. The requirements are ANDed.

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions[*] (object)

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions[*].key (string, required)

key is the label key that the selector applies to.

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions[*].operator (string, required)

operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions[*].values (array)

values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchExpressions[*].values[*] (string)

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.labelSelector.matchLabels (object)

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector (object)

A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions (array)

matchExpressions is a list of label selector requirements. The requirements are ANDed.

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions[*] (object)

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions[*].key (string, required)

key is the label key that the selector applies to.

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions[*].operator (string, required)

operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions[*].values (array)

values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchExpressions[*].values[*] (string)

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaceSelector.matchLabels (object)

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaces (array)

namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.namespaces[*] (string)

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].podAffinityTerm.topologyKey (string, required)

This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.

.spec.envoyConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[*].weight (integer, required)

weight associated with matching the corresponding podAffinityTerm, in the range 1-100.

.spec.envoyConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution (array)

If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.

.spec.envoyConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*] (object)

Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which a pod of the set of pods is running.

.spec.envoyConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector (object)

A label query over a set of resources, in this case pods.

.spec.envoyConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions (array)

matchExpressions is a list of label selector requirements. The requirements are ANDed.

.spec.envoyConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions[*] (object)

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

.spec.envoyConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions[*].key (string, required)

key is the label key that the selector applies to.

.spec.envoyConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions[*].operator (string, required)

operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

.spec.envoyConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions[*].values (array)

values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

.spec.envoyConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchExpressions[*].values[*] (string)

.spec.envoyConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].labelSelector.matchLabels (object)

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.

.spec.envoyConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector (object)

A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.

.spec.envoyConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions (array)

matchExpressions is a list of label selector requirements. The requirements are ANDed.

.spec.envoyConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions[*] (object)

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

.spec.envoyConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions[*].key (string, required)

key is the label key that the selector applies to.

.spec.envoyConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions[*].operator (string, required)

operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

.spec.envoyConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions[*].values (array)

values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

.spec.envoyConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchExpressions[*].values[*] (string)

.spec.envoyConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaceSelector.matchLabels (object)

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.

.spec.envoyConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaces (array)

namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".

.spec.envoyConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].namespaces[*] (string)

.spec.envoyConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[*].topologyKey (string, required)

This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.

.spec.envoyConfig.annotations (object)

Annotations defines the annotations placed on the Envoy ingress controller deployment.

.spec.envoyConfig.disruptionBudget (object)

DisruptionBudget is the pod disruption budget attached to the Envoy Deployment(s).

.spec.envoyConfig.disruptionBudget.budget (string)

The budget to set for the PDB; can be either a static number or a percentage.

.spec.envoyConfig.disruptionBudget.create (boolean)

If set to true, a PodDisruptionBudget is created.

.spec.envoyConfig.disruptionBudget.strategy (string)

The strategy to be used, either minAvailable or maxUnavailable.
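The three disruptionBudget fields above work together; a minimal sketch (the cluster name and the 80% budget are illustrative values, not defaults):

```yaml
# Illustrative KafkaCluster fragment; values are examples, not defaults.
apiVersion: kafka.banzaicloud.io/v1beta1
kind: KafkaCluster
metadata:
  name: kafka
spec:
  envoyConfig:
    disruptionBudget:
      create: true            # create a PodDisruptionBudget for the Envoy Deployment(s)
      strategy: minAvailable  # or maxUnavailable
      budget: "80%"           # a static number or a percentage
```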

.spec.envoyConfig.enableHealthCheckHttp10 (boolean)

EnableHealthCheckHttp10 is a toggle for adding HTTP/1.0 support to the Envoy health check; defaults to false.

.spec.envoyConfig.envoyCommandLineArgs (object)

Envoy command line arguments.

.spec.envoyConfig.envoyCommandLineArgs.concurrency (integer)

Envoy --concurrency command line argument. See https://www.envoyproxy.io/docs/envoy/latest/operations/cli#cmdoption-concurrency

.spec.envoyConfig.healthCheckPort (integer)

Envoy health-check port.

.spec.envoyConfig.image (string)

.spec.envoyConfig.imagePullSecrets (array)

ImagePullSecrets for the Envoy image pull.

.spec.envoyConfig.imagePullSecrets[*] (object)

LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace.

.spec.envoyConfig.imagePullSecrets[*].name (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

.spec.envoyConfig.loadBalancerIP (string)

LoadBalancerIP can be used to specify an exact IP for the LoadBalancer service.

.spec.envoyConfig.loadBalancerSourceRanges (array)

If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/

.spec.envoyConfig.loadBalancerSourceRanges[*] (string)

.spec.envoyConfig.nodeSelector (object)

NodeSelector is the node selector expression for Envoy pods.

.spec.envoyConfig.priorityClassName (string)

PriorityClassName specifies the priority class name for the Envoy pod(s). If specified, the PriorityClass resource with this PriorityClassName must be created beforehand. If not specified, the Envoy pods' priority defaults to zero.

.spec.envoyConfig.replicas (integer)

.spec.envoyConfig.resourceRequirements (object)

ResourceRequirements describes the compute resource requirements.

.spec.envoyConfig.resourceRequirements.limits (object)

Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

.spec.envoyConfig.resourceRequirements.requests (object)

Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

.spec.envoyConfig.serviceAccountName (string)

ServiceAccountName is the name of the service account.
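As a sketch, the deployment-level Envoy settings above can be combined like this (all values are illustrative, and the service account is assumed to exist already):

```yaml
spec:
  envoyConfig:
    replicas: 2
    serviceAccountName: envoy        # assumed pre-existing ServiceAccount
    loadBalancerIP: "203.0.113.10"   # only honored if the cloud provider supports it
    loadBalancerSourceRanges:
      - "10.0.0.0/8"
    resourceRequirements:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: "1"
        memory: 512Mi
```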

.spec.envoyConfig.tolerations (array)

.spec.envoyConfig.tolerations[*] (object)

The pod this Toleration is attached to tolerates any taint that matches the triple (key, value, effect) using the matching operator.

.spec.envoyConfig.tolerations[*].effect (string)

Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.

.spec.envoyConfig.tolerations[*].key (string)

Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.

.spec.envoyConfig.tolerations[*].operator (string)

Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.

.spec.envoyConfig.tolerations[*].tolerationSeconds (integer)

TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.

.spec.envoyConfig.tolerations[*].value (string)

Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.
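For example, to let the Envoy pods run on nodes tainted for Kafka workloads (the taint key and value here are hypothetical):

```yaml
spec:
  envoyConfig:
    tolerations:
      - key: dedicated      # hypothetical taint key
        operator: Equal
        value: kafka        # hypothetical taint value
        effect: NoSchedule
```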

.spec.envoyConfig.topologySpreadConstraints (array)

.spec.envoyConfig.topologySpreadConstraints[*] (object)

TopologySpreadConstraint specifies how to spread matching pods among the given topology.

.spec.envoyConfig.topologySpreadConstraints[*].labelSelector (object)

LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain.

.spec.envoyConfig.topologySpreadConstraints[*].labelSelector.matchExpressions (array)

matchExpressions is a list of label selector requirements. The requirements are ANDed.

.spec.envoyConfig.topologySpreadConstraints[*].labelSelector.matchExpressions[*] (object)

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

.spec.envoyConfig.topologySpreadConstraints[*].labelSelector.matchExpressions[*].key (string, required)

key is the label key that the selector applies to.

.spec.envoyConfig.topologySpreadConstraints[*].labelSelector.matchExpressions[*].operator (string, required)

operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

.spec.envoyConfig.topologySpreadConstraints[*].labelSelector.matchExpressions[*].values (array)

values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

.spec.envoyConfig.topologySpreadConstraints[*].labelSelector.matchExpressions[*].values[*] (string)

.spec.envoyConfig.topologySpreadConstraints[*].labelSelector.matchLabels (object)

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.

.spec.envoyConfig.topologySpreadConstraints[*].matchLabelKeys (array)

MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to look up values from the incoming pod labels; those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.

.spec.envoyConfig.topologySpreadConstraints[*].matchLabelKeys[*] (string)

.spec.envoyConfig.topologySpreadConstraints[*].maxSkew (integer, required)

MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule, it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway, it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed.

.spec.envoyConfig.topologySpreadConstraints[*].minDomains (integer)

MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. When the number of eligible domains with matching topology keys equals or is greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, the scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5 (MinDomains), so the "global minimum" is treated as 0. In this situation, a new pod with the same labelSelector cannot be scheduled, because the computed skew would be 3 (3 - 0) if the new Pod were scheduled to any of the three zones, which would violate MaxSkew. This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default).

.spec.envoyConfig.topologySpreadConstraints[*].nodeAffinityPolicy (string)

NodeAffinityPolicy indicates how we will treat the Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is an alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.

.spec.envoyConfig.topologySpreadConstraints[*].nodeTaintsPolicy (string)

NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is an alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.

.spec.envoyConfig.topologySpreadConstraints[*].topologyKey (string, required)

TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each such set as a "bucket", and try to put a balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. E.g. if TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.

.spec.envoyConfig.topologySpreadConstraints[*].whenUnsatisfiable (string, required)

WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but the scheduler won't make it more imbalanced. It's a required field.
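Putting the constraint fields together, a sketch that spreads Envoy pods across availability zones (the app: envoy label is an assumed pod label, not one guaranteed to be set by the operator):

```yaml
spec:
  envoyConfig:
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway   # soft constraint; use DoNotSchedule for a hard one
        labelSelector:
          matchLabels:
            app: envoy                      # assumed label on the Envoy pods
```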

.spec.envs (array)

Envs defines environment variables for Kafka broker Pods. Adding the "+" prefix to the name prepends the value to that environment variable instead of overwriting it. Add the "+" suffix to append.
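The prepend/append markers described above can be sketched like this (the variable values are illustrative):

```yaml
spec:
  envs:
    - name: "+KAFKA_OPTS"     # "+" prefix: prepend this value to the existing KAFKA_OPTS
      value: "-javaagent:/opt/agent/agent.jar "
    - name: "KAFKA_OPTS+"     # "+" suffix: append this value to the existing KAFKA_OPTS
      value: " -Dsome.flag=true"
    - name: KAFKA_HEAP_OPTS   # no marker: overwrite the variable
      value: "-Xmx4G -Xms4G"
```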

.spec.envs[*] (object)

EnvVar represents an environment variable present in a Container.

.spec.envs[*].name (string, required)

Name of the environment variable. Must be a C_IDENTIFIER.

.spec.envs[*].value (string)

Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "".

.spec.envs[*].valueFrom (object)

Source for the environment variable's value. Cannot be used if value is not empty.

.spec.envs[*].valueFrom.configMapKeyRef (object)

Selects a key of a ConfigMap.

.spec.envs[*].valueFrom.configMapKeyRef.key (string, required)

The key to select.

.spec.envs[*].valueFrom.configMapKeyRef.name (string)

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

.spec.envs[*].valueFrom.configMapKeyRef.optional (boolean)

Specify whether the ConfigMap or its key must be defined.

.spec.envs[*].valueFrom.fieldRef (object)

Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.

.spec.envs[*].valueFrom.fieldRef.apiVersion (string)

Version of the schema the FieldPath is written in terms of, defaults to "v1".

.spec.envs[*].valueFrom.fieldRef.fieldPath (string, required)

Path of the field to select in the specified API version.

+ +
+ +
+
+ +
+
+

.spec.envs[*].valueFrom.resourceFieldRef

+
+
+
+object + +
+ +
+

Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.

+ +
+ +
+
+ +
+
+

.spec.envs[*].valueFrom.resourceFieldRef.containerName

+
+
+
+string + +
+ +
+

Container name: required for volumes, optional for env vars

+ +
+ +
+
+ +
+
+

.spec.envs[*].valueFrom.resourceFieldRef.divisor

+
+
+
+ + +
+ +
+

Specifies the output format of the exposed resources, defaults to “1”

+ +
+ +
+
+ +
+
+

.spec.envs[*].valueFrom.resourceFieldRef.resource

+
+
+
+string +Required +
+ +
+

Required: resource to select

+ +
+ +
+
+ +
+
+

.spec.envs[*].valueFrom.secretKeyRef

+
+
+
+object + +
+ +
+

Selects a key of a secret in the pod’s namespace

+ +
+ +
+
+ +
+
+

.spec.envs[*].valueFrom.secretKeyRef.key

+
+
+
+string +Required +
+ +
+

The key of the secret to select from. Must be a valid secret key.

+ +
+ +
+
+ +
+
+

.spec.envs[*].valueFrom.secretKeyRef.name

+
+
+
+string + +
+ +
+

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?

+ +
+ +
+
+ +
+
+

.spec.envs[*].valueFrom.secretKeyRef.optional

+
+
+
+boolean + +
+ +
+

Specify whether the Secret or its key must be defined

+ +
+ +
+
+ +
+
+

.spec.headlessServiceEnabled

+
+
+
+boolean +Required +
+ +
+
+ +
+
+

.spec.ingressController

+
+
+
+string + +
+ +
+

IngressController specifies the type of the ingress controller to be used for external listeners. The istioingress ingress controller type requires the spec.istioControlPlane field to be populated as well.

+ +
+ +
+
+ +
+
+

.spec.istioControlPlane

+
+
+
+object + +
+ +
+

IstioControlPlane is a reference to the IstioControlPlane resource for envoy configuration. It must be specified if istio ingress is used.

+ +
+ +
+
+ +
+
+

.spec.istioControlPlane.name

+
+
+
+string +Required +
+ +
+
+ +
+
+

.spec.istioControlPlane.namespace

+
+
+
+string +Required +
+ +
+
+ +
+
+

.spec.istioIngressConfig

+
+
+
+object + +
+ +
+

IstioIngressConfig defines the config for the Istio Ingress Controller

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.annotations

+
+
+
+object + +
+ +
+

Annotations defines the annotations placed on the istio ingress controller deployment

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.envs

+
+
+
+array + +
+ +
+

Envs allows adding additional env vars to the istio meshgateway resource

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.envs[*]

+
+
+
+object + +
+ +
+

EnvVar represents an environment variable present in a Container.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.envs[*].name

+
+
+
+string +Required +
+ +
+

Name of the environment variable. Must be a C_IDENTIFIER.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.envs[*].value

+
+
+
+string + +
+ +
+

Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to “”.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.envs[*].valueFrom

+
+
+
+object + +
+ +
+

Source for the environment variable’s value. Cannot be used if value is not empty.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.envs[*].valueFrom.configMapKeyRef

+
+
+
+object + +
+ +
+

Selects a key of a ConfigMap.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.envs[*].valueFrom.configMapKeyRef.key

+
+
+
+string +Required +
+ +
+

The key to select.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.envs[*].valueFrom.configMapKeyRef.name

+
+
+
+string + +
+ +
+

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.envs[*].valueFrom.configMapKeyRef.optional

+
+
+
+boolean + +
+ +
+

Specify whether the ConfigMap or its key must be defined

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.envs[*].valueFrom.fieldRef

+
+
+
+object + +
+ +
+

Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.envs[*].valueFrom.fieldRef.apiVersion

+
+
+
+string + +
+ +
+

Version of the schema the FieldPath is written in terms of, defaults to “v1”.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.envs[*].valueFrom.fieldRef.fieldPath

+
+
+
+string +Required +
+ +
+

Path of the field to select in the specified API version.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.envs[*].valueFrom.resourceFieldRef

+
+
+
+object + +
+ +
+

Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.envs[*].valueFrom.resourceFieldRef.containerName

+
+
+
+string + +
+ +
+

Container name: required for volumes, optional for env vars

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.envs[*].valueFrom.resourceFieldRef.divisor

+
+
+
+ + +
+ +
+

Specifies the output format of the exposed resources, defaults to “1”

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.envs[*].valueFrom.resourceFieldRef.resource

+
+
+
+string +Required +
+ +
+

Required: resource to select

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.envs[*].valueFrom.secretKeyRef

+
+
+
+object + +
+ +
+

Selects a key of a secret in the pod’s namespace

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.envs[*].valueFrom.secretKeyRef.key

+
+
+
+string +Required +
+ +
+

The key of the secret to select from. Must be a valid secret key.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.envs[*].valueFrom.secretKeyRef.name

+
+
+
+string + +
+ +
+

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.envs[*].valueFrom.secretKeyRef.optional

+
+
+
+boolean + +
+ +
+

Specify whether the Secret or its key must be defined

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.gatewayConfig

+
+
+
+object + +
+ +
+
+ +
+
+

.spec.istioIngressConfig.gatewayConfig.caCertificates

+
+
+
+string + +
+ +
+

REQUIRED if mode is MUTUAL. The path to a file containing certificate authority certificates to use in verifying a presented client side certificate.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.gatewayConfig.cipherSuites

+
+
+
+array + +
+ +
+

Optional: If specified, only support the specified cipher list. Otherwise default to the default cipher list supported by Envoy.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.gatewayConfig.cipherSuites[*]

+
+
+
+string + +
+ +
+
+ +
+
+

.spec.istioIngressConfig.gatewayConfig.credentialName

+
+
+
+string + +
+ +
+

The credentialName stands for a unique identifier that can be used to identify the serverCertificate and the privateKey. The credentialName appended with suffix “-cacert” is used to identify the CaCertificates associated with this server. Gateway workloads capable of fetching credentials from a remote credential store such as Kubernetes secrets, will be configured to retrieve the serverCertificate and the privateKey using credentialName, instead of using the file system paths specified above. If using mutual TLS, gateway workload instances will retrieve the CaCertificates using credentialName-cacert. The semantics of the name are platform dependent. In Kubernetes, the default Istio supplied credential server expects the credentialName to match the name of the Kubernetes secret that holds the server certificate, the private key, and the CA certificate (if using mutual TLS). Set the ISTIO_META_USER_SDS metadata variable in the gateway’s proxy to enable the dynamic credential fetching feature.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.gatewayConfig.httpsRedirect

+
+
+
+boolean + +
+ +
+

If set to true, the load balancer will send a 301 redirect for all http connections, asking the clients to use HTTPS.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.gatewayConfig.maxProtocolVersion

+
+
+
+string + +
+ +
+

Optional: Maximum TLS protocol version.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.gatewayConfig.minProtocolVersion

+
+
+
+string + +
+ +
+

Optional: Minimum TLS protocol version.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.gatewayConfig.mode

+
+
+
+string + +
+ +
+

Optional: Indicates whether connections to this port should be secured using TLS. The value of this field determines how TLS is enforced.
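
A minimal sketch combining the gatewayConfig TLS fields in this section; the mode and protocol-version values follow Istio's server TLS settings, and the secret name is a placeholder:

```yaml
istioIngressConfig:
  gatewayConfig:
    mode: SIMPLE                  # terminate TLS at the gateway
    credentialName: kafka-gw-tls  # placeholder secret holding server cert and key
    minProtocolVersion: TLSV1_2
    maxProtocolVersion: TLSV1_3
```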

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.gatewayConfig.privateKey

+
+
+
+string + +
+ +
+

REQUIRED if mode is SIMPLE or MUTUAL. The path to the file holding the server’s private key.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.gatewayConfig.serverCertificate

+
+
+
+string + +
+ +
+

REQUIRED if mode is SIMPLE or MUTUAL. The path to the file holding the server-side TLS certificate to use.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.gatewayConfig.subjectAltNames

+
+
+
+array + +
+ +
+

A list of alternate names to verify the subject identity in the certificate presented by the client.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.gatewayConfig.subjectAltNames[*]

+
+
+
+string + +
+ +
+
+ +
+
+

.spec.istioIngressConfig.gatewayConfig.verifyCertificateHash

+
+
+
+array + +
+ +
+

An optional list of hex-encoded SHA-256 hashes of the authorized client certificates. Both simple and colon separated formats are acceptable. Note: When both verify_certificate_hash and verify_certificate_spki are specified, a hash matching either value will result in the certificate being accepted.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.gatewayConfig.verifyCertificateHash[*]

+
+
+
+string + +
+ +
+
+ +
+
+

.spec.istioIngressConfig.gatewayConfig.verifyCertificateSpki

+
+
+
+array + +
+ +
+

An optional list of base64-encoded SHA-256 hashes of the SPKIs of authorized client certificates. Note: When both verify_certificate_hash and verify_certificate_spki are specified, a hash matching either value will result in the certificate being accepted.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.gatewayConfig.verifyCertificateSpki[*]

+
+
+
+string + +
+ +
+
+ +
+
+

.spec.istioIngressConfig.loadBalancerSourceRanges

+
+
+
+array + +
+ +
+

If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.loadBalancerSourceRanges[*]

+
+
+
+string + +
+ +
+
+ +
+
+

.spec.istioIngressConfig.nodeSelector

+
+
+
+object + +
+ +
+
+ +
+
+

.spec.istioIngressConfig.replicas

+
+
+
+integer + +
+ +
+
+ +
+
+

.spec.istioIngressConfig.resourceRequirements

+
+
+
+object + +
+ +
+

ResourceRequirements describes the compute resource requirements.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.resourceRequirements.limits

+
+
+
+object + +
+ +
+

Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.resourceRequirements.requests

+
+
+
+object + +
+ +
+

Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.tolerations

+
+
+
+array + +
+ +
+
+ +
+
+

.spec.istioIngressConfig.tolerations[*]

+
+
+
+object + +
+ +
+

The pod this Toleration is attached to tolerates any taint that matches the triple &lt;key,value,effect&gt; using the matching operator &lt;operator&gt;.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.tolerations[*].effect

+
+
+
+string + +
+ +
+

Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.tolerations[*].key

+
+
+
+string + +
+ +
+

Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.tolerations[*].operator

+
+
+
+string + +
+ +
+

Operator represents a key’s relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.tolerations[*].tolerationSeconds

+
+
+
+integer + +
+ +
+

TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.tolerations[*].value

+
+
+
+string + +
+ +
+

Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.

+ +
+ +
+
+ +
+
+

.spec.istioIngressConfig.virtualServiceAnnotations

+
+
+
+object + +
+ +
+
+ +
+
+

.spec.kubernetesClusterDomain

+
+
+
+string + +
+ +
+
+ +
+
+

.spec.listenersConfig

+
+
+
+object +Required +
+ +
+

ListenersConfig defines the Kafka listener types

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.externalListeners

+
+
+
+array + +
+ +
+
+ +
+
+

.spec.listenersConfig.externalListeners[*]

+
+
+
+object + +
+ +
+

ExternalListenerConfig defines the external listener config for Kafka

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.externalListeners[*].accessMethod

+
+
+
+string + +
+ +
+

accessMethod defines the method through which the external listener is exposed. Two types are supported: LoadBalancer and NodePort. LoadBalancer is the recommended default; NodePort should be used in Kubernetes environments with no support for provisioning load balancers.
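
Putting the external listener fields in this section together, a hedged sketch (listener name and port numbers are illustrative):

```yaml
listenersConfig:
  externalListeners:
    - name: external
      type: plaintext              # or ssl, sasl_plaintext, sasl_ssl
      containerPort: 9094
      externalStartingPort: 19090  # external broker ports are allocated starting here
      accessMethod: LoadBalancer   # default; NodePort also supported
```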

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.externalListeners[*].anyCastPort

+
+
+
+integer + +
+ +
+

Configuring AnyCastPort allows Kafka cluster access without specifying the exact broker.

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.externalListeners[*].config

+
+
+
+object + +
+ +
+

Config allows specifying ingress controller configuration per external listener; if set, it overrides the default KafkaClusterSpec.IstioIngressConfig or KafkaClusterSpec.EnvoyConfig for this external listener.

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.externalListeners[*].config.defaultIngressConfig

+
+
+
+string +Required +
+ +
+
+ +
+
+

.spec.listenersConfig.externalListeners[*].config.ingressConfig

+
+
+
+object + +
+ +
+
+ +
+
+

.spec.listenersConfig.externalListeners[*].containerPort

+
+
+
+integer +Required +
+ +
+
+ +
+
+

.spec.listenersConfig.externalListeners[*].externalStartingPort

+
+
+
+integer +Required +
+ +
+
+ +
+
+

.spec.listenersConfig.externalListeners[*].externalTrafficPolicy

+
+
+
+string + +
+ +
+

externalTrafficPolicy denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints. “Local” preserves the client source IP and avoids a second hop for LoadBalancer and Nodeport type services, but risks potentially imbalanced traffic spreading. “Cluster” obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading.

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.externalListeners[*].hostnameOverride

+
+
+
+string + +
+ +
+

For external listeners using the LoadBalancer access method, the value of this field is used to advertise the Kafka broker external listener instead of the public IP of the provisioned LoadBalancer service (e.g. it can be used to advertise the listener using a URL recorded in DNS instead of a public IP). For external listeners using the NodePort access method, the broker (instead of the node public IP, see “brokerConfig.nodePortExternalIP”) is advertised on an address having the following format: -.

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.externalListeners[*].name

+
+
+
+string +Required +
+ +
+
+ +
+
+

.spec.listenersConfig.externalListeners[*].serverSSLCertSecret

+
+
+
+object + +
+ +
+

ServerSSLCertSecret is a reference to the Kubernetes secret that contains the server certificate for the listener to be used for SSL communication. The secret must contain the keystore and truststore JKS files and their passwords in base64-encoded format under the keystore.jks, truststore.jks, and password data fields. If this field is omitted, Koperator auto-creates a self-signed server certificate using the configuration provided in the ‘sslSecrets’ field.

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.externalListeners[*].serverSSLCertSecret.name

+
+
+
+string + +
+ +
+

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.externalListeners[*].serviceAnnotations

+
+
+
+object + +
+ +
+

ServiceAnnotations defines annotations which will be placed on the service or services created for the external listener

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.externalListeners[*].serviceType

+
+
+
+string + +
+ +
+

ServiceType describes the ingress method for the service. Only “NodePort” and “LoadBalancer” are supported. The default value is LoadBalancer.

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.externalListeners[*].sslClientAuth

+
+
+
+string + +
+ +
+

SSLClientAuth specifies whether client authentication is required, requested, or not required. This field defaults to “required” if it is omitted

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.externalListeners[*].type

+
+
+
+string +Required +
+ +
+

SecurityProtocol is the protocol used to communicate with brokers. Valid values are: plaintext, ssl, sasl_plaintext, sasl_ssl.

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.internalListeners

+
+
+
+array +Required +
+ +
+
+ +
+
+

.spec.listenersConfig.internalListeners[*]

+
+
+
+object + +
+ +
+

InternalListenerConfig defines the internal listener config for Kafka
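
A hedged sketch of a typical internal listener setup using the fields below (listener names and ports are illustrative):

```yaml
listenersConfig:
  internalListeners:
    - name: internal                        # inter-broker and client traffic
      type: plaintext
      containerPort: 29092
      usedForInnerBrokerCommunication: true
    - name: controller                      # controller traffic
      type: plaintext
      containerPort: 29093
      usedForInnerBrokerCommunication: false
      usedForControllerCommunication: true
```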

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.internalListeners[*].containerPort

+
+
+
+integer +Required +
+ +
+
+ +
+
+

.spec.listenersConfig.internalListeners[*].name

+
+
+
+string +Required +
+ +
+
+ +
+
+

.spec.listenersConfig.internalListeners[*].serverSSLCertSecret

+
+
+
+object + +
+ +
+

ServerSSLCertSecret is a reference to the Kubernetes secret that contains the server certificate for the listener to be used for SSL communication. The secret must contain the keystore and truststore JKS files and their passwords in base64-encoded format under the keystore.jks, truststore.jks, and password data fields. If this field is omitted, Koperator auto-creates a self-signed server certificate using the configuration provided in the ‘sslSecrets’ field.

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.internalListeners[*].serverSSLCertSecret.name

+
+
+
+string + +
+ +
+

Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.internalListeners[*].sslClientAuth

+
+
+
+string + +
+ +
+

SSLClientAuth specifies whether client authentication is required, requested, or not required. This field defaults to “required” if it is omitted

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.internalListeners[*].type

+
+
+
+string +Required +
+ +
+

SecurityProtocol is the protocol used to communicate with brokers. Valid values are: plaintext, ssl, sasl_plaintext, sasl_ssl.

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.internalListeners[*].usedForControllerCommunication

+
+
+
+boolean + +
+ +
+
+ +
+
+

.spec.listenersConfig.internalListeners[*].usedForInnerBrokerCommunication

+
+
+
+boolean +Required +
+ +
+
+ +
+
+

.spec.listenersConfig.serviceAnnotations

+
+
+
+object + +
+ +
+
+ +
+
+

.spec.listenersConfig.sslSecrets

+
+
+
+object + +
+ +
+

SSLSecrets defines the Kafka SSL secrets
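
A hedged sketch of sslSecrets; the secret names are placeholders, and the pkiBackend value and issuerRef usage are assumptions:

```yaml
listenersConfig:
  sslSecrets:
    tlsSecretName: kafka-ca-certificate   # placeholder secret name
    jksPasswordName: kafka-ca-password    # placeholder
    create: true                          # have the operator create the secret
    pkiBackend: cert-manager              # assumed backend identifier
    issuerRef:                            # optional issuer reference
      name: my-issuer                     # placeholder
      kind: ClusterIssuer
```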

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.sslSecrets.create

+
+
+
+boolean + +
+ +
+
+ +
+
+

.spec.listenersConfig.sslSecrets.issuerRef

+
+
+
+object + +
+ +
+

ObjectReference is a reference to an object with a given name, kind and group.

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.sslSecrets.issuerRef.group

+
+
+
+string + +
+ +
+

Group of the resource being referred to.

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.sslSecrets.issuerRef.kind

+
+
+
+string + +
+ +
+

Kind of the resource being referred to.

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.sslSecrets.issuerRef.name

+
+
+
+string +Required +
+ +
+

Name of the resource being referred to.

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.sslSecrets.jksPasswordName

+
+
+
+string + +
+ +
+
+ +
+
+

.spec.listenersConfig.sslSecrets.pkiBackend

+
+
+
+string + +
+ +
+

PKIBackend represents an interface implementing the PKIManager

+ +
+ +
+
+ +
+
+

.spec.listenersConfig.sslSecrets.tlsSecretName

+
+
+
+string +Required +
+ +
+
+ +
+
+

.spec.monitoringConfig

+
+
+
+object + +
+ +
+

MonitoringConfig defines the config for monitoring Kafka and Cruise Control

+ +
+ +
+
+ +
+
+

.spec.monitoringConfig.cCJMXExporterConfig

+
+
+
+string + +
+ +
+
+ +
+
+

.spec.monitoringConfig.jmxImage

+
+
+
+string + +
+ +
+
+ +
+
+

.spec.monitoringConfig.kafkaJMXExporterConfig

+
+
+
+string + +
+ +
+
+ +
+
+

.spec.monitoringConfig.pathToJar

+
+
+
+string + +
+ +
+
+ +
+
+

.spec.oneBrokerPerNode

+
+
+
+boolean +Required +
+ +
+

If true, OneBrokerPerNode ensures that each Kafka broker is placed on a different node, unless a custom Affinity definition overrides this behavior

+ +
+ +
+
+ +
+
+

.spec.propagateLabels

+
+
+
+boolean + +
+ +
+
+ +
+
+

.spec.rackAwareness

+
+
+
+object + +
+ +
+

RackAwareness defines the required fields to enable Kafka’s rack awareness feature
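
For example, rack awareness driven by the standard topology node labels; the label keys are an assumption, and any node label can be listed:

```yaml
rackAwareness:
  labels:
    - topology.kubernetes.io/region
    - topology.kubernetes.io/zone
```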

+ +
+ +
+
+ +
+
+

.spec.rackAwareness.labels

+
+
+
+array +Required +
+ +
+
+ +
+
+

.spec.rackAwareness.labels[*]

+
+
+
+string + +
+ +
+
+ +
+
+

.spec.readOnlyConfig

+
+
+
+string + +
+ +
+
+ +
+
+

.spec.rollingUpgradeConfig

+
+
+
+object +Required +
+ +
+

RollingUpgradeConfig defines the desired config of the RollingUpgrade

+ +
+ +
+
+ +
+
+

.spec.rollingUpgradeConfig.failureThreshold

+
+
+
+integer +Required +
+ +
+

FailureThreshold controls how many failures the cluster can tolerate during a rolling upgrade. Once the number of failures reaches this threshold, the rolling upgrade flow stops. The number of failures is computed as the sum of distinct broker replicas with either offline or out-of-sync replicas, plus the number of alerts triggered by alerts with ‘rollingupgrade’

+ +
+ +
+
+ +
+
+

.spec.zkAddresses

+
+
+
+array +Required +
+ +
+

ZKAddresses specifies the ZooKeeper connection string in the form hostname:port where host and port are the host and port of a ZooKeeper server.

+ +
+ +
+
+ +
+
+

.spec.zkAddresses[*]

+
+
+
+string + +
+ +
+
+ +
+
+

.spec.zkPath

+
+
+
+string + +
+ +
+

ZKPath specifies the ZooKeeper chroot path as part of its ZooKeeper connection string which puts its data under some path in the global ZooKeeper namespace.

+ +
+ +
+
+ +
+
+

.status

+
+
+
+object + +
+ +
+

KafkaClusterStatus defines the observed state of KafkaCluster

+ +
+ +
+
+ +
+
+

.status.alertCount

+
+
+
+integer +Required +
+ +
+
+ +
+
+

.status.brokersState

+
+
+
+object + +
+ +
+
+ +
+
+

.status.cruiseControlTopicStatus

+
+
+
+string + +
+ +
+

CruiseControlTopicStatus holds info about the CC topic status

+ +
+ +
+
+ +
+
+

.status.listenerStatuses

+
+
+
+object + +
+ +
+

ListenerStatuses holds information about the statuses of the configured listeners. The internal and external listeners are stored in separate maps, and each listener can be looked up by name.

+ +
+ +
+
+ +
+
+

.status.listenerStatuses.externalListeners

+
+
+
+object + +
+ +
+
+ +
+
+

.status.listenerStatuses.internalListeners

+
+
+
+object + +
+ +
+
+ +
+
+

.status.rollingUpgradeStatus

+
+
+
+object + +
+ +
+

RollingUpgradeStatus defines status of rolling upgrade

+ +
+ +
+
+ +
+
+

.status.rollingUpgradeStatus.errorCount

+
+
+
+integer +Required +
+ +
+

ErrorCount keeps track of the number of errors reported by alerts labeled with ‘rollingupgrade’. It’s reset once these alerts stop firing.

+ +
+ +
+
+ +
+
+

.status.rollingUpgradeStatus.lastSuccess

+
+
+
+string +Required +
+ +
+
+ +
+
+

.status.state

+
+
+
+string +Required +
+ +
+

ClusterState holds info about the cluster state

+ +
+ +
+
+ + + + + +
+ + + diff --git a/docs/reference/crd/kafkatopics.kafka.banzaicloud.io.md b/docs/reference/crd/kafkatopics.kafka.banzaicloud.io.md new file mode 100644 index 0000000..5305074 --- /dev/null +++ b/docs/reference/crd/kafkatopics.kafka.banzaicloud.io.md @@ -0,0 +1,275 @@ +--- +title: KafkaTopic CRD schema reference (group kafka.banzaicloud.io) +linkTitle: KafkaTopic +description: | + KafkaTopic is the Schema for the kafkatopics API +weight: 100 +crd: + name_camelcase: KafkaTopic + name_plural: kafkatopics + name_singular: kafkatopic + group: kafka.banzaicloud.io + technical_name: kafkatopics.kafka.banzaicloud.io + scope: Namespaced + source_repository: ../../ + source_repository_ref: master + versions: + - v1alpha1 + topics: +layout: crd +owner: + - https://github.com/banzaicloud/ +aliases: + - /reference/cp-k8s-api/kafkatopics.kafka.banzaicloud.io/ +technical_name: kafkatopics.kafka.banzaicloud.io +source_repository: ../../ +source_repository_ref: master +--- + +## KafkaTopic + + +KafkaTopic is the Schema for the kafkatopics API +
+
Full name:
+
kafkatopics.kafka.banzaicloud.io
+
Group:
+
kafka.banzaicloud.io
+
Singular name:
+
kafkatopic
+
Plural name:
+
kafkatopics
+
Scope:
+
Namespaced
+
Versions:
+
v1alpha1
+
+ + + +
+ +## Version v1alpha1 {#v1alpha1} + + + +## Properties {#property-details-v1alpha1} + + +
+
+

.apiVersion

+
+
+
+string + +
+ +
+

APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

+ +
+ +
+
+ +
+
+

.kind

+
+
+
+string + +
+ +
+

Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

+ +
+ +
+
+ +
+
+

.metadata

+
+
+
+object + +
+ +
+
+ +
+
+

.spec

+
+
+
+object + +
+ +
+

KafkaTopicSpec defines the desired state of KafkaTopic

+ +
+ +
+
+ +
+
+

.spec.clusterRef

+
+
+
+object +Required +
+ +
+

ClusterReference states a reference to a cluster for topic/user provisioning

+ +
+ +
+
+ +
+
+

.spec.clusterRef.name

+
+
+
+string +Required +
+ +
+
+ +
+
+

.spec.clusterRef.namespace

+
+
+
+string + +
+ +
+
+ +
+
+

.spec.config

+
+
+
+object + +
+ +
+
+ +
+
+

.spec.name

+
+
+
+string +Required +
+ +
+
+ +
+
+

.spec.partitions

+
+
+
+integer +Required +
+ +
+

Partitions defines the desired number of partitions; must be positive, or -1 to signify using the broker’s default

+ +
+ +
+
+ +
+
+

.spec.replicationFactor

+
+
+
+integer +Required +
+ +
+

ReplicationFactor defines the desired replication factor; must be positive, or -1 to signify using the broker’s default
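
Combining the required fields above, a hedged sketch of a complete KafkaTopic manifest; names and config values are placeholders:

```yaml
apiVersion: kafka.banzaicloud.io/v1alpha1
kind: KafkaTopic
metadata:
  name: my-topic
spec:
  clusterRef:
    name: kafka              # placeholder KafkaCluster name
  name: my-topic
  partitions: 3
  replicationFactor: 2
  config:                    # topic-level Kafka configs, passed through as strings
    "retention.ms": "604800000"
    "cleanup.policy": "delete"
```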

+ +
+ +
+
+ +
+
+

.status

+
+
+
+object + +
+ +
+

KafkaTopicStatus defines the observed state of KafkaTopic

+ +
+ +
+
+ +
+
+

.status.state

+
+
+
+string +Required +
+ +
+

TopicState defines the state of a KafkaTopic

+ +
+ +
+
+ + + + + +
+ + + diff --git a/docs/reference/crd/kafkausers.kafka.banzaicloud.io.md b/docs/reference/crd/kafkausers.kafka.banzaicloud.io.md new file mode 100644 index 0000000..174ef3e --- /dev/null +++ b/docs/reference/crd/kafkausers.kafka.banzaicloud.io.md @@ -0,0 +1,518 @@ +--- +title: KafkaUser CRD schema reference (group kafka.banzaicloud.io) +linkTitle: KafkaUser +description: | + KafkaUser is the Schema for the kafka users API +weight: 100 +crd: + name_camelcase: KafkaUser + name_plural: kafkausers + name_singular: kafkauser + group: kafka.banzaicloud.io + technical_name: kafkausers.kafka.banzaicloud.io + scope: Namespaced + source_repository: ../../ + source_repository_ref: master + versions: + - v1alpha1 + topics: +layout: crd +owner: + - https://github.com/banzaicloud/ +aliases: + - /reference/cp-k8s-api/kafkausers.kafka.banzaicloud.io/ +technical_name: kafkausers.kafka.banzaicloud.io +source_repository: ../../ +source_repository_ref: master +--- + +## KafkaUser + + +KafkaUser is the Schema for the kafka users API +
+
Full name:
+
kafkausers.kafka.banzaicloud.io
+
Group:
+
kafka.banzaicloud.io
+
Singular name:
+
kafkauser
+
Plural name:
+
kafkausers
+
Scope:
+
Namespaced
+
Versions:
+
v1alpha1
+
+ + + +
## Version v1alpha1 {#v1alpha1}

## Properties {#property-details-v1alpha1}

| Property | Type | Required | Description |
|:---------|:-----|:---------|:------------|
| `.apiVersion` | string |  | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources |
| `.kind` | string |  | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds |
| `.metadata` | object |  |  |
| `.spec` | object |  | KafkaUserSpec defines the desired state of KafkaUser |
| `.spec.annotations` | object |  | Annotations defines the annotations placed on the certificate or certificate signing request object |
| `.spec.clusterRef` | object | Required | ClusterReference states a reference to a cluster for topic/user provisioning |
| `.spec.clusterRef.name` | string | Required |  |
| `.spec.clusterRef.namespace` | string |  |  |
| `.spec.createCert` | boolean |  |  |
| `.spec.dnsNames` | array |  |  |
| `.spec.dnsNames[*]` | string |  |  |
| `.spec.includeJKS` | boolean |  |  |
| `.spec.pkiBackendSpec` | object |  |  |
| `.spec.pkiBackendSpec.issuerRef` | object |  | ObjectReference is a reference to an object with a given name, kind and group. |
| `.spec.pkiBackendSpec.issuerRef.group` | string |  | Group of the resource being referred to. |
| `.spec.pkiBackendSpec.issuerRef.kind` | string |  | Kind of the resource being referred to. |
| `.spec.pkiBackendSpec.issuerRef.name` | string | Required | Name of the resource being referred to. |
| `.spec.pkiBackendSpec.pkiBackend` | string | Required |  |
| `.spec.pkiBackendSpec.signerName` | string |  | SignerName indicates the requested signer, and is a qualified name. |
| `.spec.secretName` | string | Required |  |
| `.spec.topicGrants` | array |  |  |
| `.spec.topicGrants[*]` | object |  | UserTopicGrant is the desired permissions for the KafkaUser |
| `.spec.topicGrants[*].accessType` | string | Required | KafkaAccessType holds info about the Kafka ACL |
| `.spec.topicGrants[*].patternType` | string |  | KafkaPatternType holds the resource pattern type of the Kafka ACL |
| `.spec.topicGrants[*].topicName` | string | Required |  |
| `.status` | object |  | KafkaUserStatus defines the observed state of KafkaUser |
| `.status.acls` | array |  |  |
| `.status.acls[*]` | string |  |  |
| `.status.state` | string | Required | UserState defines the state of a KafkaUser |
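To make the schema concrete, here is a minimal sketch of a KafkaUser manifest built from the fields described in this reference. The cluster name, namespace, topic, and secret name are illustrative, not taken from the original docs:

```yaml
apiVersion: kafka.banzaicloud.io/v1alpha1
kind: KafkaUser
metadata:
  name: example-kafkauser
  namespace: kafka
spec:
  # Required: the KafkaCluster this user belongs to
  clusterRef:
    name: kafka
    namespace: kafka
  # Required: the secret that will hold the user's credentials
  secretName: example-kafkauser-secret
  includeJKS: true
  topicGrants:
    - topicName: my-topic   # required in each grant
      accessType: read      # required in each grant
```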
+ + + diff --git a/docs/scenarios.md b/docs/scenarios.md index d1f34fb..2277aaf 100644 --- a/docs/scenarios.md +++ b/docs/scenarios.md @@ -5,34 +5,32 @@ weight: 400 -As highlighted in the [features section](../features/), we removed the reliance on StatefulSet, we support several different scenarios. +As highlighted in the [features section]({{< relref "../_index.md#features" >}}), we removed the reliance on StatefulSet, we support several different scenarios. -> Note: this is not a complete list, if you have a specific requirement or question, [contact us](/contact/). +> Note: this is not a complete list, if you have a specific requirement or question, [contact us](mailto:calisti-support@cisco.com). ## Vertical capacity scaling -We've encountered many situations in which the horizontal scaling of a cluster is impossible. When **only one Broker is throttling** and needs more CPU or requires additional disks (because it handles the most partitions), a StatefulSet-based solution is useless, since it does not distinguishes between replicas' specifications. The handling of such a case requires *unique* Broker configurations. If we need to add a new disk to a unique Broker, we waste a lot of disk space (and money) with a StatefulSet-based solution, since it can't add a disk to a specific Broker, the StatefulSet adds one to each replica. +We've encountered many situations in which the horizontal scaling of a cluster is impossible. When **only one Broker is throttling** and needs more CPU or requires additional disks (because it handles the most partitions), a StatefulSet-based solution is useless, since it does not distinguish between replicas' specifications. The handling of such a case requires *unique* Broker configurations. If we need to add a new disk to a unique Broker, we waste a lot of disk space (and money) with a StatefulSet-based solution, since it can't add a disk to a specific Broker, the StatefulSet adds one to each replica. 
-With the [Banzai Cloud Kafka operator](https://github.com/banzaicloud/kafka-operator), adding a new disk to any Broker is as easy as changing a CR configuration. Similarly, any Broker-specific configuration can be done on a Broker by Broker basis. +With the [{{< kafka-operator >}}](https://github.com/banzaicloud/koperator), adding a new disk to any Broker is as easy as changing a CR configuration. Similarly, any Broker-specific configuration can be done on a Broker by Broker basis. ## An unhandled error with Broker #1 in a three Broker cluster In the event of an error with Broker #1, we want to handle it without disrupting the other Brokers. Maybe we would like to temporarily remove this Broker from the cluster, and fix its state, reconciling the node that serves the node, or maybe reconfigure the Broker using a new configuration. Again, when using StatefulSet, we lose the ability to remove specific Brokers from the cluster. StatefulSet only supports a field name replica that determines how many replicas an application should use. If there's a downscale/removal, this number can be lowered, however, this means that Kubernetes will remove the most recently added Pod (Broker #3) from the cluster - which, in this case, happens to suit our purposes quite well. -To remove the #1 Broker from the cluster, we need to lower the number of brokers in the cluster from three to one. This will cause a state in which only one Broker is live, while we kill the brokers that handle traffic. The Banzai Cloud Kafka operator supports removing specific brokers without disrupting traffic in the cluster. +To remove the #1 Broker from the cluster, we need to lower the number of brokers in the cluster from three to one. This will cause a state in which only one Broker is live, while we kill the brokers that handle traffic. {{< kafka-operator >}} supports removing specific brokers without disrupting traffic in the cluster. 
## Fine grained Broker config support

Apache Kafka is a stateful application, where Brokers create/form a cluster with other Brokers. Every Broker is uniquely configurable (we support heterogenous environments, in which no nodes are the same, act the same or have the same specifications - from the infrastructure up through the Brokers' Envoy configuration). Kafka has lots of Broker configs, which can be used to fine tune specific brokers, and we did not want to limit these to ALL Brokers in a StatefulSet. We support unique Broker configs.

-*In each of the three scenarios lister above, we decided to not use StatefulSet in our Kafka Operator, relying, instead, on Pods, PVCs and ConfigMaps. We believe StatefulSet is very convenient starting point, as it handles roughly 80% of scenarios but introduces huge limitations when running Kafka on Kubernetes in production.*
+*In each of the three scenarios listed above, we decided not to use StatefulSet in our {{< kafka-operator >}}, relying, instead, on Pods, PVCs and ConfigMaps. We believe StatefulSet is a very convenient starting point, as it handles roughly 80% of scenarios but introduces huge limitations when running Kafka on Kubernetes in production.*

## Monitoring based control

-Use of monitoring is essential for any application, and all relevant information about Kafka should be published to a monitoring solution. When using Kubernetes, the de facto solution is Prometheus, which supports configuring alerts based on previously consumed metrics. We wanted to build a standards-based solution (Prometheus and Alert Manager) that could handle and react to alerts automatically, so human operators wouldn't have to. The Banzai Cloud Kafka operator supports alert-based Kafka cluster management.
+Use of monitoring is essential for any application, and all relevant information about Kafka should be published to a monitoring solution.
When using Kubernetes, the de facto solution is Prometheus, which supports configuring alerts based on previously consumed metrics. We wanted to build a standards-based solution (Prometheus and Alert Manager) that could handle and react to alerts automatically, so human operators wouldn't have to. {{< kafka-operator >}} supports alert-based Kafka cluster management. ## LinkedIn's Cruise Control -We have a lot of experience in operating both Kafka and Kubernetes at scale. However, we believe that LinkedIn knows how to operate Kafka even better than we do. They built a tool, called Cruise Control, to operate their Kafka infrastructure, and we wanted to build an operator which **handled the infrastructure but did not reinvent the wheel insofar as operating Kafka**. We didn't want to redevelop proven concepts, but wanted to create an operator which leveraged our deep Kubernetes expertise (after all, we've already built a CNCF certified Kubernetes distribution, [PKE](https://github.com/banzaicloud/pke) and a hybrid cloud container management platform, [Pipeline](https://github.com/banzaicloud/pipeline)) by handling all Kafka infrastructure related issues in the way we thought best. We believe managing Kafka is a separate issue, for which there already exist some unique tools and solutions that are standard across the industry, so we took LinkedIn's Cruise Control and integrated it with the operator. - - +We have a lot of experience in operating both Kafka and Kubernetes at scale. However, we believe that LinkedIn knows how to operate Kafka even better than we do. They built a tool, called Cruise Control, to operate their Kafka infrastructure, and we wanted to build an operator which **handled the infrastructure but did not reinvent the wheel insofar as operating Kafka**. 
We didn't want to redevelop proven concepts, but wanted to create an operator which leveraged our deep Kubernetes expertise by handling all Kafka infrastructure related issues in the way we thought best. We believe managing Kafka is a separate issue, for which there already exist some unique tools and solutions that are standard across the industry, so we took LinkedIn's Cruise Control and integrated it with the operator.
diff --git a/docs/ssl.md b/docs/ssl.md
index 64a848f..2812ff1 100644
--- a/docs/ssl.md
+++ b/docs/ssl.md
@@ -1,16 +1,20 @@
---
title: Securing Kafka With SSL
-shorttitle: SSL
+linktitle: SSL
weight: 300
---

-The Kafka operator makes securing your Kafka cluster with SSL simple.
+The {{< kafka-operator >}} makes securing your Apache Kafka cluster with SSL simple.

-## Enable SSL encryption in Kafka {#enable-ssl}
+## Enable SSL encryption in Apache Kafka {#enable-ssl}

-To create a Kafka cluster with SSL encryption enabled, you must enable SSL encryption and configure the secrets in the **listenersConfig** section of your **KafkaCluster** Custom Resource. You can provide your own certificates, or instruct the operator to create them for you from your cluster configuration.
+To create an Apache Kafka cluster which has listener(s) with SSL encryption enabled, you must enable SSL encryption and configure the secrets in the **listenersConfig** section of your **KafkaCluster** Custom Resource. You can either provide your own CA certificate and the corresponding private key, or let the operator create them for you from your cluster configuration. Using **sslSecrets**, {{< kafka-operator >}} generates client and server certificates signed by the CA. The server certificate is shared across listeners. The client certificate is used by {{< kafka-operator >}}, Cruise Control, and the Cruise Control Metrics Reporter to communicate with Kafka brokers on listeners with SSL enabled.
-{{< include-headless "warning-listener-protocol.md" "supertubes/kafka-operator" >}}
+Providing custom certificates per listener is supported from {{< kafka-operator >}} version 0.21.0+. Configurations where certain external listeners use user-provided certificates while others rely on the auto-generated ones provided by {{< kafka-operator >}} are also supported. See the details below.
+
+## Using auto-generated certificates (**sslSecrets**)
+
+{{< include-headless "warning-listener-protocol.md" "sdm/koperator" >}}

The following example enables SSL and automatically generates the certificates:

@@ -22,20 +26,73 @@ If `sslSecrets.create` is `false`, the operator will look for the secret at `ssl
|:------------:|:-------------------|
| `caCert` | The CA certificate |
| `caKey` | The CA private key |
-| `clientCert` | A client certificate (this will be used by cruise control and the operator for kafka operations) |
-| `clientKey` | The private key for `clientCert` |
-| `peerCert` | The certificate for the kafka brokers |
-| `peerKey` | The private key for the kafka brokers |
+
+## Using own certificates
+
+### Listeners not used for internal broker communication
+
+In [this **KafkaCluster** custom resource](https://github.com/banzaicloud/koperator/blob/master/config/samples/kafkacluster_with_ssl_hybrid_customcert.yaml), SSL is enabled for all listeners, and certificates are automatically generated for "internal" and "controller" listeners. The "external" and "internal" listeners will use the user-provided certificates. The **serverSSLCertSecret** key is a reference to the Kubernetes secret that contains the server certificate for the listener to be used for SSL communication.
+
+In the server secret the following keys must be set:
+
+| Key | Value |
+|:----------------:|:------------------------------------------|
+| `keystore.jks` | Certificate and private key in JKS format |
+| `truststore.jks` | Trusted CA certificate in JKS format |
+| `password` | Password for the key and trust store |
+
+The certificates in the listener configuration must be in JKS format.
+
+### Listeners used for internal broker or controller communication
+
+In [this **KafkaCluster** custom resource](https://github.com/banzaicloud/koperator/blob/master/config/samples/kafkacluster_with_ssl_groups_customcert.yaml), SSL is enabled for all listeners, and user-provided certificates are used. In that case, when a custom certificate is used for a listener which is used for internal broker or controller communication, you must also specify the client certificate. The client certificate will be used by {{< kafka-operator >}}, Cruise Control, and the Cruise Control Metrics Reporter to communicate over SSL. The **clientSSLCertSecret** key is a reference to the Kubernetes secret where the custom client SSL certificate can be provided. The client certificate must be signed by the same CA authority as the server certificate for the corresponding listener. The **clientSSLCertSecret** key must be set in the **spec** field of the **KafkaCluster** custom resource.
+The client secret must contain the keystore and truststore JKS files and the password for them in base64 encoded format.
+ +In the server secret the following keys must be set: + +| Key | Value | +|:----------------:|:------------------------------------------| +| `keystore.jks` | Certificate and private key in JKS format | +| `truststore.jks` | Trusted CA certificate in JKS format | +| `password` | Password for the key and trust store | + +In the client secret the following keys must be set: + +| Key | Value | +|:----------------:|:------------------------------------------| +| `keystore.jks` | Certificate and private key in JKS format | +| `truststore.jks` | Trusted CA certificate in JKS format | +| `password` | Password for the key and trust store | + +### Generate JKS certificate + +Certificates in JKS format can be generated using OpenSSL and keystore applications. You can also use [this script](https://github.com/confluentinc/confluent-platform-security-tools/blob/master/kafka-generate-ssl.sh). The `keystore.jks` file must contain only one **PrivateKeyEntry**. + +Kafka listeners use 2-way-SSL mutual authentication, so you must properly set the CNAME (Common Name) fields and if needed the SAN (Subject Alternative Name) fields in the certificates. In the following description we assume that the Kafka cluster is in the `kafka` namespace. + +- **For the client certificate**, CNAME must be "kafka-controller.kafka.mgt.cluster.local" (where .kafka. is the namespace of the kafka cluster). 
+- **For internal listeners which are exposed by a headless service** (kafka-headless), CNAME must be "kafka-headless.kafka.svc.cluster.local", and the SAN field must contain the following: + + - *.kafka-headless.kafka.svc.cluster.local + - kafka-headless.kafka.svc.cluster.local + - *.kafka-headless.kafka.svc + - kafka-headless.kafka.svc + - *.kafka-headless.kafka + - kafka-headless.kafka + - kafka-headless + +- **For internal listeners which are exposed by a normal service** (kafka-all-broker), CNAME must be "kafka-all-broker.kafka.svc.cluster.local" +- **For external listeners**, you need to use the advertised load balancer hostname as CNAME. The hostname need to be specified in the **KafkaCluster** custom resource with **hostnameOverride**, and the **accessMethod** has to be "LoadBalancer". For details about this override, see Step 5 in {{% xref "/sdm/koperator/external-listener/index.md#loadbalancer" %}}. ## Using Kafka ACLs with SSL -> Note: The Kafka operator provides only basic ACL support. For a more complete and robust solution, consider using the [Supertubes](/products/supertubes/) product. -> {{< include-headless "doc/kafka-operator-supertubes-intro.md" >}} +> Note: {{< kafka-operator >}} provides only basic ACL support. For a more complete and robust solution, consider using the [Streaming Data Manager](https://calisti.app) product. +> {{< include-headless "kafka-operator-supertubes-intro.md" "sdm" >}} -If you choose not to enable ACLs for your kafka cluster, you may still use the `KafkaUser` resource to create new certificates for your applications. +If you choose not to enable ACLs for your Apache Kafka cluster, you may still use the `KafkaUser` resource to create new certificates for your applications. You can leave the `topicGrants` out as they will not have any effect. -1. To enable ACL support for your kafka cluster, pass the following configurations along with your `brokerConfig`: +1. 
To enable ACL support for your Apache Kafka cluster, pass the following configurations along with your `brokerConfig`: ```yaml authorizer.class.name=kafka.security.authorizer.AclAuthorizer @@ -105,108 +162,3 @@ you will need to generate new certificates signed by the CA, and ensure ACLs on |:-----------------------:|:---------------------| | `tls.jks` | The java keystore containing both the user keys and the CA (use this for your keystore AND truststore) | | `pass.txt` | The password to decrypt the JKS (this will be randomly generated) | - -## Using different secret/PKI backends - -The operator supports using a back-end other than `cert-manager` for the PKI and user secrets. -For now there is just an additional option of using `vault`. -An easy way to get up and running quickly with `vault` on your Kubernetes cluster is to use the open source [`bank-vaults`](/products//bank-vaults/). - -1. To set up `bank-vaults`, a `vault` instance, and the `vault-secrets-webhook`, you can run the following: - - ```bash - git clone https://github.com/banzaicloud/bank-vaults - cd bank-vaults - - # setup the operator and a vault instance - kubectl apply -f operator/deploy/rbac.yaml - kubectl apply -f operator/deploy/operator-rbac.yaml - kubectl apply -f operator/deploy/operator.yaml - kubectl apply -f operator/deploy/cr.yaml - - # install the pod injector webhook (optional) - helm install --namespace vault-infra --name vault-secrets-webhook banzaicloud-stable/vault-secrets-webhook - ``` - - With a vault instance in the cluster, you can deploy the operator with vault credentials. -1. 
First create a secret with the vault token and CA certificate by running: - - ```bash - # These values match the manifests applied above, they may be different for you - VAULT_TOKEN=$(kubectl get secrets vault-unseal-keys -o jsonpath={.data.vault-root} | base64 --decode) - VAULT_CACERT=$(kubectl get secret vault-tls -o jsonpath="{.data.ca\.crt}" | base64 --decode) - - # create the kafka namespace if you haven't already - kubectl create ns kafka - - # Create a Kubernetes secret with the token and CA cert - kubectl -n kafka create secret generic vault-keys --from-literal=vault.token=${VAULT_TOKEN} --from-literal=ca.crt="${VAULT_CACERT}" - ``` - -1. Then, if using the `kafka-operator` helm chart: - - ```bash - helm install \ - --name kafka-operator \ - --namespace kafka \ - --set operator.vaultAddress=https://vault.default.svc.cluster.local:8200 \ - --set operator.vaultSecret=vault-keys \ - banzaicloud-stable/kafka-operator - ``` - -1. You will now be able to specify the `vault` back-end when using the managed PKI. Your `sslSecrets` would look like this: - - ```yaml - sslSecrets: - tlsSecretName: "test-kafka-operator" - jksPasswordName: "test-kafka-operator-pass" - create: true - pkiBackend: vault - ``` - -1. When a cluster is using the `vault` back-end, the `KafkaUser` CRs will store their secrets in `vault` instead of Kubernetes secrets. 
For example, if you installed the `vault-secrets-webhook` above, you could create a `KafkaUser` and ingest the keys like so: - - ```yaml - # A KafkaUser with permission to read from 'test-topic' - apiVersion: kafka.banzaicloud.io/v1alpha1 - kind: KafkaUser - metadata: - name: test-kafka-consumer - spec: - clusterRef: - name: kafka - namespace: kafka - secretName: test-kafka-consumer - topicGrants: - - topicName: test-topic - accessType: read - --- - # A pod containing a consumer using the above credentials - apiVersion: v1 - kind: Pod - metadata: - name: kafka-test-pod - annotations: - # annotations for the vault-secrets-webhook - vault.security.banzaicloud.io/vault-addr: "https://vault:8200" - vault.security.banzaicloud.io/vault-tls-secret: "vault-tls" - spec: - - containers: - - # Container reading from topic with the consumer credentials - - name: consumer - image: banzaicloud/kafka-test:latest - env: - - name: KAFKA_MODE - value: consumer - - name: KAFKA_TLS_CERT - value: "vault:secret/data/test-kafka-consumer#tls.crt" - - name: KAFKA_TLS_KEY - value: "vault:secret/data/test-kafka-consumer#tls.key" - - name: KAFKA_TLS_CA - value: "vault:secret/data/test-kafka-consumer#ca.crt" - ``` - - When no `kv` mount is supplied for `secretName` like in the user above, the operator will assume the default `kv` mount at `secret/`. - You can pass the `secretName` as a full `vault` path to specify a different secrets mount to store your user certificates. 
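As a sketch of that full-path form, the `KafkaUser` below stores its certificates under a non-default `kv` mount. The `customkv` mount name is an illustrative assumption, not taken from the original docs:

```yaml
apiVersion: kafka.banzaicloud.io/v1alpha1
kind: KafkaUser
metadata:
  name: test-kafka-consumer
spec:
  clusterRef:
    name: kafka
    namespace: kafka
  # Full vault path: the user secret is stored under the 'customkv' mount
  # instead of the default 'secret/' mount.
  secretName: customkv/data/test-kafka-consumer
```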
diff --git a/docs/support.md b/docs/support.md index 4578d7a..3a0d452 100644 --- a/docs/support.md +++ b/docs/support.md @@ -5,12 +5,12 @@ weight: 800 ## Support -{{% include-headless "doc/kafka-operator-supertubes-intro.md" %}} +{{% include-headless "kafka-operator-supertubes-intro.md" "sdm" %}} ### Community support -If you encounter problems while using the Kafka operator the documentation does not address, [open an issue](https://github.com/banzaicloud/kafka-operator/issues) or talk to us on the Banzai Cloud Slack channel [#kafka-operator](https://pages.banzaicloud.com/invite-slack). +If you encounter problems while using {{< kafka-operator >}} the documentation does not address, [open an issue](https://github.com/banzaicloud/kafka-operator/issues) or talk to us in our Slack channel [#kafka-operator](https://banzaicloud.com/invite-slack). ### Commercial support -If you are using the Kafka operator in a production environment and [require commercial support, contact Banzai Cloud](/contact/), the company backing the development of the Kafka operator. +If you are using {{< kafka-operator >}} in a production environment and [require commercial support, contact Cisco](mailto:calisti-support@cisco.com), the company backing the development of {{< kafka-operator >}}. diff --git a/docs/test.md b/docs/test.md index 239da93..4269803 100644 --- a/docs/test.md +++ b/docs/test.md @@ -1,23 +1,29 @@ --- title: Test provisioned Kafka Cluster -shorttitle: Test your cluster +linktitle: Test your cluster weight: 100 --- ## Create Topic -Topic creation by default is enabled in Kafka, but if it is configured otherwise, you'll need to create a topic first. +Topic creation by default is enabled in Apache Kafka, but if it is configured otherwise, you'll need to create a topic first. 
-- You can use the `KafkaTopic` CRD to create a topic called **my-topic** like this: +- You can use the `KafkaTopic` CR to create a topic called **my-topic** like this: - {{< include-code "create-topic.sample" "bash" >}} + {{< include-code "create-topic.sample" "yaml" >}} > Note: The previous command will fail if the cluster has not finished provisioning. + Expected output: + + ```bash + kafkatopic.kafka.banzaicloud.io/my-topic created + ``` + - To create a sample topic from the CLI you can run the following: ```bash - kubectl -n kafka run kafka-topics -it --image=banzaicloud/kafka:2.13-2.4.0 --rm=true --restart=Never -- /opt/kafka/bin/kafka-topics.sh --zookeeper zookeeper-client.zookeeper:2181 --topic my-topic --create --partitions 1 --replication-factor 1 + kubectl -n kafka run kafka-topics -it --image=ghcr.io/banzaicloud/kafka:2.13-3.1.0 --rm=true --restart=Never -- /opt/kafka/bin/kafka-topics.sh --zookeeper zookeeper-client.zookeeper:2181 --topic my-topic --create --partitions 1 --replication-factor 1 ``` After you have created a topic, produce and consume some messages: @@ -31,31 +37,112 @@ You can use the following commands to send and receive messages within a Kuberne - Produce messages: - ```bash - kubectl -n kafka run kafka-producer -it --image=banzaicloud/kafka:2.13-2.4.0 --rm=true --restart=Never -- /opt/kafka/bin/kafka-console-producer.sh --bootstrap-server kafka-headless:29092 --topic my-topic - ``` + 1. Start the producer container - And type some test messages. + ```bash + kubectl run \ + -n kafka \ + kafka-producer \ + -it \ + --image=ghcr.io/banzaicloud/kafka:2.13-3.1.0 \ + --rm=true \ + --restart=Never \ + -- \ + /opt/kafka/bin/kafka-console-producer.sh \ + --bootstrap-server kafka-headless:29092 \ + --topic my-topic + ``` + + 1. Wait for the producer container to run, this may take a couple seconds. + + Expected output: + + ```bash + If you don't see a command prompt, try pressing enter. + ``` + + 1. Press enter to get a command prompt. 
+
+      Expected output:
+
+      ```bash
+      >
+      ```
+
+   1. Type your messages and press enter, each line will be sent through Kafka.
+
+      Example:
+
+      ```bash
+      > test
+      > message
+      >
+      ```
+
+   1. Stop the container. (You can CTRL-D out of it.)
+
+      Expected output:
+
+      ```bash
+      pod "kafka-producer" deleted
+      ```

- Consume messages:

-   ```bash
-   kubectl -n kafka run kafka-consumer -it --image=banzaicloud/kafka:2.13-2.4.0 --rm=true --restart=Never -- /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server kafka-headless:29092 --topic my-topic --from-beginning
-   ```
+   1. Start the consumer container.

-   You should see the messages you have created.
+      ```bash
+      kubectl run \
+        -n kafka \
+        kafka-consumer \
+        -it \
+        --image=ghcr.io/banzaicloud/kafka:2.13-3.1.0 \
+        --rm=true \
+        --restart=Never \
+        -- \
+        /opt/kafka/bin/kafka-console-consumer.sh \
+        --bootstrap-server kafka-headless:29092 \
+        --topic my-topic \
+        --from-beginning
+      ```
+
+   1. Wait for the consumer container to run, this may take a couple seconds.
+
+      Expected output:
+
+      ```bash
+      If you don't see a command prompt, try pressing enter.
+      ```
+
+   1. The messages sent by the producer should be displayed here.
+
+      Example:
+
+      ```bash
+      test
+      message
+      ```
+
+   1. Stop the container. (You can CTRL-C out of it.)
+
+      Expected output:
+
+      ```bash
+      Processed a total of 3 messages
+      pod "kafka-consumer" deleted
+      pod kafka/kafka-consumer terminated (Error)
+      ```

## Send and receive messages with SSL within a cluster {#internal-ssl}

-You can use the following procedure to send and receive messages within a Kubernetes cluster [when SSL encryption is enabled for Kafka]({{< relref "/docs/supertubes/kafka-operator/ssl.md#enable-ssl" >}}). To test a Kafka instance secured by SSL we recommend using [Kafkacat](https://github.com/edenhill/kafkacat).
+You can use the following procedure to send and receive messages within a Kubernetes cluster [when SSL encryption is enabled for Kafka]({{< relref "/sdm/koperator/ssl.md#enable-ssl" >}}). To test a Kafka instance secured by SSL we recommend using [kcat](https://github.com/edenhill/kcat). -> To use the java client instead of Kafkacat, generate the proper truststore and keystore using the [official docs](https://kafka.apache.org/documentation/#security_ssl). +> To use the java client instead of kcat, generate the proper truststore and keystore using the [official docs](https://kafka.apache.org/documentation/#security_ssl). 1. Create a Kafka user. The client will use this user account to access Kafka. You can use the KafkaUser custom resource to customize the access rights as needed. For example: {{< include-code "create-kafkauser.sample" "bash" >}} -1. To use Kafka inside the cluster, create a Pod which contains `Kafkacat`. Create a `kafka-test` pod in the `kafka` namespace. Note that the value of the **secretName** parameter must be the same as you used when creating the KafkaUser resource, for example, example-kafkauser-secret. +1. To use Kafka inside the cluster, create a Pod which contains `kcat`. Create a `kafka-test` pod in the `kafka` namespace. Note that the value of the **secretName** parameter must be the same as you used when creating the KafkaUser resource, for example, example-kafkauser-secret. {{< include-code "kafkacat-ssl.sample" "bash" >}} @@ -68,7 +155,7 @@ You can use the following procedure to send and receive messages within a Kubern 1. Run the following command to check that you can connect to the brokers. 
```bash - kafkacat -L -b kafka-headless:29092 -X security.protocol=SSL -X ssl.key.location=/ssl/certs/tls.key -X ssl.certificate.location=/ssl/certs/tls.crt -X ssl.ca.location=/ssl/certs/ca.crt + kcat -L -b kafka-headless:29092 -X security.protocol=SSL -X ssl.key.location=/ssl/certs/tls.key -X ssl.certificate.location=/ssl/certs/tls.crt -X ssl.ca.location=/ssl/certs/ca.crt ``` The first line of the output should indicate that the communication is encrypted, for example: @@ -80,7 +167,7 @@ You can use the following procedure to send and receive messages within a Kubern 1. Produce some test messages. Run: ```bash - kafkacat -P -b kafka-headless:29092 -t my-topic \ + kcat -P -b kafka-headless:29092 -t my-topic \ -X security.protocol=SSL \ -X ssl.key.location=/ssl/certs/tls.key \ -X ssl.certificate.location=/ssl/certs/tls.crt \ @@ -93,7 +180,7 @@ You can use the following procedure to send and receive messages within a Kubern The following command will use the certificate provisioned with the cluster to connect to Kafka. If you'd like to create and use a different user, create a `KafkaUser` CR, for details, see the [SSL documentation](../ssl/). ```bash - kafkacat -C -b kafka-headless:29092 -t my-topic \ + kcat -C -b kafka-headless:29092 -t my-topic \ -X security.protocol=SSL \ -X ssl.key.location=/ssl/certs/tls.key \ -X ssl.certificate.location=/ssl/certs/tls.crt \ @@ -106,7 +193,7 @@ You can use the following procedure to send and receive messages within a Kubern ### Prerequisites {#external-prerequisites} -1. Producers and consumers that are not in the same Kubernetes cluster can access the Kafka cluster only if an [external listener]({{< relref "/docs/supertubes/kafka-operator/external-listener/index.md" >}}) is configured in your KafkaCluster CR. Check that the **listenersConfig.externalListeners** section exists in the KafkaCluster CR. +1. 
Producers and consumers that are not in the same Kubernetes cluster can access the Kafka cluster only if an [external listener]({{< relref "/sdm/koperator/external-listener/index.md" >}}) is configured in your KafkaCluster CR. Check that the **listenersConfig.externalListeners** section exists in the KafkaCluster CR. 1. Obtain the external address and port number of the cluster by running the following commands. @@ -127,10 +214,10 @@ You can use the following procedure to send and receive messages within a Kubern 1. Produce some test messages on the the external client. - - If you have [Kafkacat](https://github.com/edenhill/kafkacat) installed, run: + - If you have [kcat](https://github.com/edenhill/kcat) installed, run: ```bash - kafkacat -P -b $SERVICE_IP:$SERVICE_PORT -t my-topic + kcat -P -b $SERVICE_IP:$SERVICE_PORT -t my-topic ``` - If you have the Java Kafka client installed, run: @@ -143,10 +230,10 @@ You can use the following procedure to send and receive messages within a Kubern 1. Consume some messages. - - If you have [Kafkacat](https://github.com/edenhill/kafkacat) installed, run: + - If you have [kcat](https://github.com/edenhill/kcat) installed, run: ```bash - kafkacat -C -b $SERVICE_IP:$SERVICE_PORT -t my-topic + kcat -C -b $SERVICE_IP:$SERVICE_PORT -t my-topic ``` - If you have the Java Kafka client installed, run: @@ -159,23 +246,23 @@ You can use the following procedure to send and receive messages within a Kubern ### SSL enabled {#external-ssl} -You can use the following procedure to send and receive messages from an external host that is outside a Kubernetes cluster when SSL encryption is enabled for Kafka. To test a Kafka instance secured by SSL we recommend using [Kafkacat](https://github.com/edenhill/kafkacat). +You can use the following procedure to send and receive messages from an external host that is outside a Kubernetes cluster when SSL encryption is enabled for Kafka. 
To test a Kafka instance secured by SSL we recommend using [kcat](https://github.com/edenhill/kcat). -> To use the java client instead of Kafkacat, generate the proper truststore and keystore using the [official docs](https://kafka.apache.org/documentation/#security_ssl). +> To use the java client instead of kcat, generate the proper truststore and keystore using the [official docs](https://kafka.apache.org/documentation/#security_ssl). -1. Install Kafkacat. +1. Install kcat. - __MacOS__: ```bash - brew install kafkacat + brew install kcat ``` - __Ubuntu__: ```bash apt-get update - apt-get install kafkacat + apt-get install kcat ``` 1. Connect to the Kubernetes cluster that runs your Kafka deployment. @@ -199,7 +286,7 @@ You can use the following procedure to send and receive messages from an externa 1. Produce some test messages on the host that is outside your cluster. ```bash - kafkacat -b $SERVICE_IP:$SERVICE_PORT -P -X security.protocol=SSL \ + kcat -b $SERVICE_IP:$SERVICE_PORT -P -X security.protocol=SSL \ -X ssl.key.location=client.key.pem \ -X ssl.certificate.location=client.crt.pem \ -X ssl.ca.location=ca.crt.pem \ @@ -211,11 +298,11 @@ You can use the following procedure to send and receive messages from an externa 1. Consume some messages. ```bash - kafkacat -b $SERVICE_IP:$SERVICE_PORT -C -X security.protocol=SSL \ + kcat -b $SERVICE_IP:$SERVICE_PORT -C -X security.protocol=SSL \ -X ssl.key.location=client.key.pem \ -X ssl.certificate.location=client.crt.pem \ -X ssl.ca.location=ca.crt.pem \ -t my-topic ``` - You should see the messages you have created. \ No newline at end of file + You should see the messages you have created. 
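The SSL examples above reference `client.key.pem`, `client.crt.pem`, and `ca.crt.pem`; these typically come from the Kubernetes secret created for a `KafkaUser`. The sketch below demonstrates the decode step with a stand-in base64 value; the secret name `external-client`, the `kafka` namespace, and the `tls.key`/`tls.crt`/`ca.crt` field names in the comment are assumptions, so substitute the values from your own `KafkaUser` CR.

```shell
# Sketch: turn a base64-encoded secret field into a PEM file for kcat.
# Against a real cluster the value would come from something like
#   kubectl get secret external-client -n kafka -o jsonpath='{.data.tls\.key}'
# (secret name, namespace, and field name are assumptions).
# Here a stand-in value demonstrates the decode step only.
tls_key_b64=$(printf '%s' '-----BEGIN PRIVATE KEY-----' | base64 | tr -d '\n')

# Secret data is base64-encoded, so decode it before writing the PEM file.
printf '%s' "$tls_key_b64" | base64 -d > client.key.pem
cat client.key.pem
```

Repeat the same decode for `tls.crt` and `ca.crt` to produce `client.crt.pem` and `ca.crt.pem`.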
diff --git a/docs/tips-tricks.md b/docs/tips-tricks.md new file mode 100644 index 0000000..93c4b83 --- /dev/null +++ b/docs/tips-tricks.md @@ -0,0 +1,31 @@ +--- +title: Tips and tricks for the Koperator +linktitle: Tips and tricks +weight: 970 +--- + +## Rebalancing + +The {{< kafka-operator >}} installs Cruise Control (CC) to oversee your Kafka cluster. When you change the cluster (for example, add new nodes), the {{< kafka-operator >}} engages CC to perform a rebalancing if needed. How and when CC performs rebalancing depends on its settings (see goal settings in the official CC documentation) and on how long CC has been trained on Kafka’s behavior (this may take weeks). + +You can also trigger rebalancing manually from the CC UI: + +```bash +kubectl port-forward -n kafka svc/kafka-cruisecontrol-svc 8090:8090 +``` + +The Cruise Control UI will be available at [http://localhost:8090](http://localhost:8090). + +## Headless service + +When the **headlessServiceEnabled** option is enabled (true) in your KafkaCluster CR, the operator creates a headless service for accessing the Kafka cluster from within the Kubernetes cluster. + +When the **headlessServiceEnabled** option is disabled (false), the operator creates a ClusterIP service. When using a ClusterIP service, your client application doesn’t need to be aware of every Kafka broker endpoint: it simply connects to *kafka-all-broker:29092*, which dynamically covers all available brokers. That way, if the Kafka cluster is scaled dynamically, there is no need to reconfigure the client applications. + +## Retrieving broker configuration during downscale operation + +When a broker is removed during a downscale operation, its configuration is removed from the spec.brokers field of the KafkaCluster CR. You can retrieve the last applied broker configuration from the status.brokersState.<id>.configurationBackup field with the following command. 
+ +```bash +echo "<configurationBackup>" | base64 -d | gzip -d +``` diff --git a/docs/topics.md b/docs/topics.md index d8c0b86..9910b34 100644 --- a/docs/topics.md +++ b/docs/topics.md @@ -1,44 +1,23 @@ --- title: Provisioning Kafka Topics shorttitle: Kafka topics -weight: 200 +weight: 280 --- +## Create topic + You can create Kafka topics either: - directly against the cluster with command line utilities, or - via the `KafkaTopic` CRD. -Below is an example `KafkaTopic` CR. - -```yaml -# topic.yaml ---- -apiVersion: kafka.banzaicloud.io/v1alpha1 -kind: KafkaTopic -metadata: - name: example-topic - namespace: kafka -spec: - clusterRef: - name: kafka - name: example-topic - partitions: 3 - replicationFactor: 2 - config: - # For a full list of configuration options, refer to the official documentation. - # https://kafka.apache.org/documentation/#topicconfigs - "retention.ms": "604800000" - "cleanup.policy": "delete" -``` +Below is an example `KafkaTopic` CR you can apply with kubectl. -You can apply the above topic with kubectl: +{{< include-code "create-topic.sample" "yaml" >}} -```shell -banzai@cloud:~$ kubectl apply -n kafka -f topic.yaml +For a full list of configuration options, see the [official Kafka documentation](https://kafka.apache.org/documentation/#topicconfigs). 
-kafkatopic.kafka.banzaicloud.io/example-topic created -``` +## Update topic If you want to update the configuration of the topic after it's been created, you can either: @@ -48,9 +27,9 @@ If you want to update the configuration of the topic after it's been created, yo You can increase the partition count for a topic the same way, or by running the following one-liner using `patch`: ```shell -banzai@cloud:~$ kubectl patch -n kafka kafkatopic example-topic --patch '{"spec": {"partitions": 5}}' --type=merge +kubectl patch -n kafka kafkatopic example-topic --patch '{"spec": {"partitions": 5}}' --type=merge kafkatopic.kafka.banzaicloud.io/example-topic patched ``` -> Note: Topics created by the Kafka operator are not enforced in any way. From the Kubernetes perspective, Kafka Topics are external resources. +> Note: Topics created by the {{< kafka-operator >}} are not enforced in any way. From the Kubernetes perspective, Kafka Topics are external resources. diff --git a/docs/troubleshooting/_index.md b/docs/troubleshooting/_index.md index d5e0367..be79f72 100644 --- a/docs/troubleshooting/_index.md +++ b/docs/troubleshooting/_index.md @@ -1,14 +1,14 @@ --- -title: Kafka operator troubleshooting -shorttitle: Troubleshooting +title: Troubleshooting the operator +linktitle: Troubleshooting weight: 400 --- -The following tips and commands can help you to troubleshoot your Kafka operator installation. +The following tips and commands can help you to troubleshoot your {{< kafka-operator >}} installation. ## First things to do -1. Verify that the Kafka operator pod is running. Issue the following command: `kubectl get pods -n kafka|grep kafka-operator` +1. Verify that the {{< kafka-operator >}} pod is running. 
Issue the following command: `kubectl get pods -n kafka|grep kafka-operator` The output should include a running pod, for example: ```bash @@ -41,14 +41,12 @@ The following tips and commands can help you to troubleshoot your Kafka operator kubectl get KafkaCluster kafka -n kafka -o jsonpath="{.status}" |jq ``` -1. Check the status of your Zookeeper deployment, and the logs of the zookeeper-operator and zookeeper pods. +1. Check the status of your ZooKeeper deployment, and the logs of the zookeeper-operator and zookeeper pods. ```bash kubectl get pods -n zookeeper ``` -{{< toc >}} - ## Check the KafkaCluster configuration You can display the current configuration of your Kafka cluster using the following command: @@ -57,130 +55,149 @@ You can display the current configuration of your Kafka cluster using the follow The output looks like the following: ```yaml -Name: kafka -Namespace: kafka -Labels: controller-tools.k8s.io=1.0 -Annotations: -API Version: kafka.banzaicloud.io/v1beta1 -Kind: KafkaCluster -Metadata: - Creation Timestamp: 2021-02-15T09:46:02Z - Finalizers: - finalizer.kafkaclusters.kafka.banzaicloud.io - topics.kafkaclusters.kafka.banzaicloud.io - users.kafkaclusters.kafka.banzaicloud.io - Generation: 2 -Spec: - Broker Config Groups: - Default: - Broker Annotations: - prometheus.io/port: 9020 - prometheus.io/scrape: true - Storage Configs: - Mount Path: /kafka-logs - Pvc Spec: - Access Modes: - ReadWriteOnce - Resources: - Requests: - Storage: 10Gi - Brokers: - Broker Config Group: default - Id: 0 - Broker Config Group: default - Id: 1 - Broker Config Group: default - Id: 2 - Cluster Image: ghcr.io/banzaicloud/kafka:2.13-2.6.0-bzc.1 - Cruise Control Config: - Cluster Config: { - "min.insync.replicas": 3 -} - +apiVersion: kafka.banzaicloud.io/v1beta1 +kind: KafkaCluster +metadata: + creationTimestamp: "2022-11-21T16:02:55Z" + finalizers: + - finalizer.kafkaclusters.kafka.banzaicloud.io + - topics.kafkaclusters.kafka.banzaicloud.io + - 
users.kafkaclusters.kafka.banzaicloud.io + generation: 4 + labels: + controller-tools.k8s.io: "1.0" + name: kafka + namespace: kafka + resourceVersion: "3474369" + uid: f8744017-1264-47d4-8b9c-9ee982728ecc +spec: + brokerConfigGroups: + default: + storageConfigs: + - mountPath: /kafka-logs + pvcSpec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Gi + terminationGracePeriodSeconds: 120 + brokers: + - brokerConfigGroup: default + id: 0 + - brokerConfigGroup: default + id: 1 + clusterImage: ghcr.io/banzaicloud/kafka:2.13-3.1.0 + cruiseControlConfig: + clusterConfig: | + { + "min.insync.replicas": 3 + } + config: | ... - - Cruise Control Task Spec: - Retry Duration Minutes: 5 - Topic Config: - Partitions: 12 - Replication Factor: 3 - Disruption Budget: - Envoy Config: - Headless Service Enabled: true - Istio Ingress Config: - Listeners Config: - Internal Listeners: - Container Port: 29092 - Name: internal - Type: plaintext - Used For Inner Broker Communication: true - Container Port: 29093 - Name: controller - Type: plaintext - Used For Controller Communication: true - Used For Inner Broker Communication: false - Monitoring Config: - Jmx Image: - Path To Jar: - One Broker Per Node: false - Read Only Config: auto.create.topics.enable=false -cruise.control.metrics.topic.auto.create=true -cruise.control.metrics.topic.num.partitions=1 -cruise.control.metrics.topic.replication.factor=2 - - Rolling Upgrade Config: - Failure Threshold: 1 - Vault Config: - Auth Role: - Issue Path: - Pki Path: - User Store: - Zk Addresses: - zookeeper-client.zookeeper:2181 -Status: - Alert Count: 0 - Brokers State: - 0: - Configuration State: ConfigInSync - Graceful Action State: - Cruise Control State: GracefulUpscaleSucceeded - Error Message: CruiseControl not yet ready - Rack Awareness State: - 1: - Configuration State: ConfigInSync - Graceful Action State: - Cruise Control State: GracefulUpscaleSucceeded - Error Message: CruiseControl not yet ready - Rack Awareness 
State: - 2: - Configuration State: ConfigInSync - Graceful Action State: - Cruise Control State: GracefulUpscaleSucceeded - Error Message: CruiseControl not yet ready - Rack Awareness State: - Cruise Control Topic Status: CruiseControlTopicReady - Rolling Upgrade Status: - Error Count: 0 - Last Success: - State: ClusterRunning -Events: + cruiseControlTaskSpec: + RetryDurationMinutes: 0 + disruptionBudget: {} + envoyConfig: {} + headlessServiceEnabled: true + istioIngressConfig: {} + listenersConfig: + externalListeners: + - containerPort: 9094 + externalStartingPort: 19090 + name: external + type: plaintext + internalListeners: + - containerPort: 29092 + name: plaintext + type: plaintext + usedForInnerBrokerCommunication: true + - containerPort: 29093 + name: controller + type: plaintext + usedForControllerCommunication: true + usedForInnerBrokerCommunication: false + monitoringConfig: {} + oneBrokerPerNode: false + readOnlyConfig: | + auto.create.topics.enable=false + cruise.control.metrics.topic.auto.create=true + cruise.control.metrics.topic.num.partitions=1 + cruise.control.metrics.topic.replication.factor=2 + rollingUpgradeConfig: + failureThreshold: 1 + zkAddresses: + - zookeeper-client.zookeeper:2181 +status: + alertCount: 0 + brokersState: + "0": + configurationBackup: H4sIAAAAAAAA/6pWykxRsjLQUUoqys9OLXLOz0vLTHcvyi8tULJSSklNSyzNKVGqBQQAAP//D49kqiYAAAA= + configurationState: ConfigInSync + gracefulActionState: + cruiseControlState: GracefulUpscaleSucceeded + volumeStates: + /kafka-logs: + cruiseControlOperationReference: + name: kafka-rebalance-bhs7n + cruiseControlVolumeState: GracefulDiskRebalanceSucceeded + image: ghcr.io/banzaicloud/kafka:2.13-3.1.0 + perBrokerConfigurationState: PerBrokerConfigInSync + rackAwarenessState: "" + version: 3.1.0 + "1": + configurationBackup: H4sIAAAAAAAA/6pWykxRsjLUUUoqys9OLXLOz0vLTHcvyi8tULJSSklNSyzNKVGqBQQAAP//pYq+WyYAAAA= + configurationState: ConfigInSync + gracefulActionState: + cruiseControlState: 
GracefulUpscaleSucceeded + volumeStates: + /kafka-logs: + cruiseControlOperationReference: + name: kafka-rebalance-bhs7n + cruiseControlVolumeState: GracefulDiskRebalanceSucceeded + image: ghcr.io/banzaicloud/kafka:2.13-3.1.0 + perBrokerConfigurationState: PerBrokerConfigInSync + rackAwarenessState: "" + version: 3.1.0 + cruiseControlTopicStatus: CruiseControlTopicReady + listenerStatuses: + externalListeners: + external: + - address: a0abb7ab2e4a142d793f0ec0cb9b58ae-1185784192.eu-north-1.elb.amazonaws.com:29092 + name: any-broker + - address: a0abb7ab2e4a142d793f0ec0cb9b58ae-1185784192.eu-north-1.elb.amazonaws.com:19090 + name: broker-0 + - address: a0abb7ab2e4a142d793f0ec0cb9b58ae-1185784192.eu-north-1.elb.amazonaws.com:19091 + name: broker-1 + internalListeners: + plaintext: + - address: kafka-headless.kafka.svc.cluster.local:29092 + name: headless + - address: kafka-0.kafka-headless.kafka.svc.cluster.local:29092 + name: broker-0 + - address: kafka-1.kafka-headless.kafka.svc.cluster.local:29092 + name: broker-1 + rollingUpgradeStatus: + errorCount: 0 + lastSuccess: "" + state: ClusterRunning ``` ## Getting Support -If you encounter any problems that the documentation does not address, [file an issue](https://github.com/banzaicloud/kafka-operator/issues) or talk to us on the Banzai Cloud Slack channel [#kafka-operator](https://slack.banzaicloud.io/). +If you encounter any problems that the documentation does not address, [file an issue](https://github.com/banzaicloud/koperator/issues) or talk to us on our Slack channel [#kafka-operator](https://banzaicloud.com/invite-slack). -[Commercial support]({{< relref "/docs/supertubes/kafka-operator/support.md">}}) is also available for the Kafka operator. +[Commercial support]({{< relref "/sdm/koperator/support.md">}}) is also available for {{< kafka-operator >}}. 
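To gather the diagnostic items listed below in one go, one option is to bundle the commands into a small script you can review before running. A minimal sketch, assuming the operator runs as the `kafka-operator-operator` deployment in the `kafka` namespace (adjust the resource names to your installation):

```shell
# Sketch: write the diagnostic commands into a reviewable script.
# Deployment, namespace, and cluster names are assumptions.
cat > collect-koperator-debug.sh <<'EOF'
#!/bin/sh
set -x
kubectl version
kubectl logs -n kafka deploy/kafka-operator-operator -c manager
kubectl logs -n kafka deploy/kafka-operator-operator -c kube-rbac-proxy
kubectl describe kafkacluster kafka -n kafka
kubectl describe zookeepercluster zookeeper -n zookeeper
kubectl get pods -n zookeeper
EOF
chmod +x collect-koperator-debug.sh
```

Remember to scrub passwords and private keys from the collected output before sharing it.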
Before asking for help, prepare the following information to make troubleshooting faster: -- Kafka operator version +- {{< kafka-operator >}} version - Kubernetes version (**kubectl version**) -- Helm/chart version (if you installed the Kafka operator with Helm) -- Kafka operator logs, for example **kubectl logs kafka-operator-operator-6968c67c7b-9d2xq manager -n kafka** and **kubectl logs kafka-operator-operator-6968c67c7b-9d2xq kube-rbac-proxy -n kafka** +- Helm/chart version (if you installed {{< kafka-operator >}} with Helm) +- {{< kafka-operator >}} logs, for example **kubectl logs kafka-operator-operator-6968c67c7b-9d2xq manager -n kafka** and **kubectl logs kafka-operator-operator-6968c67c7b-9d2xq kube-rbac-proxy -n kafka** - Kafka broker logs -- Kafka operator configuration +- {{< kafka-operator >}} configuration - Kafka cluster configuration (**kubectl describe KafkaCluster kafka -n kafka**) -- Zookeeper configuration (**kubectl describe ZookeeperCluster zookeeper -n zookeeper**) -- Zookeeper logs (**kubectl logs zookeeper-operator-5c9b597bcc-vkdz9 -n zookeeper**) -Do not forget to remove any sensitive information (for example, passwords and private keys) before sharing. \ No newline at end of file +- ZooKeeper configuration (**kubectl describe ZookeeperCluster zookeeper -n zookeeper**) +- ZooKeeper logs (**kubectl logs zookeeper-operator-5c9b597bcc-vkdz9 -n zookeeper**) +Do not forget to remove any sensitive information (for example, passwords and private keys) before sharing. diff --git a/docs/troubleshooting/common-errors.md b/docs/troubleshooting/common-errors.md index c02d5dd..580050f 100644 --- a/docs/troubleshooting/common-errors.md +++ b/docs/troubleshooting/common-errors.md @@ -5,10 +5,10 @@ weight: 100 ## Upgrade failed -If you get the following error in the logs of the Kafka operator, update your KafkaCluster CRD. This error typically occurs when you upgrade your Kafka operator to a new version, but forget to update the KafkaCluster CRD. 
+If you get the following error in the logs of {{< kafka-operator >}}, update your KafkaCluster CRD. This error typically occurs when you upgrade your {{< kafka-operator >}} to a new version, but forget to update the KafkaCluster CRD. ```bash Error: UPGRADE FAILED: cannot patch "kafka" with kind KafkaCluster: KafkaCluster.kafka.banzaicloud.io "kafka" is invalid ``` -The recommended way to upgrade the Kafka operator is to upgrade the KafkaCluster CRD, then update the Kafka operator. For details, see {{% xref "/docs/supertubes/kafka-operator/upgrade-kafka-operator.md" %}}. +The recommended way to upgrade {{< kafka-operator >}} is to upgrade the KafkaCluster CRD, then update {{< kafka-operator >}}. For details, see {{% xref "/sdm/koperator/upgrade-kafka-operator.md" %}}. diff --git a/docs/upgrade-kafka-operator.md b/docs/upgrade-kafka-operator.md index 475bbd3..9d60850 100644 --- a/docs/upgrade-kafka-operator.md +++ b/docs/upgrade-kafka-operator.md @@ -1,22 +1,22 @@ --- -title: Upgrade the Kafka operator -shorttitle: Upgrade +title: Upgrade the operator +linktitle: Upgrade weight: 15 --- -When upgrading your Kafka operator deployment to a new version, complete the following steps. +When upgrading your {{< kafka-operator >}} deployment to a new version, complete the following steps. -1. Download the CRDs for the new release from the [Kafka operator releases page](https://github.com/banzaicloud/kafka-operator/releases). They are included in the assets of the release. +1. Download the CRDs for the new release from the [{{< kafka-operator >}} releases page](https://github.com/banzaicloud/koperator/releases). They are included in the assets of the release. {{< warning >}}**Hazard of data loss** Do not delete the old CRD from the cluster. Deleting the CRD removes your Kafka cluster.{{< /warning >}} 1. 
Replace the KafkaCluster CRD with the new one on your cluster by running the following command (replace <versionnumber> with the release you are upgrading to, for example, **v0.14.0**). ```bash - kubectl replace --validate=false -f https://github.com/banzaicloud/kafka-operator/releases/download/<versionnumber>/kafka-operator.crds.yaml + kubectl replace --validate=false -f https://github.com/banzaicloud/koperator/releases/download/<versionnumber>/kafka-operator.crds.yaml ``` -1. Update the Kafka operator by running: +1. Update your {{< kafka-operator >}} deployment by running: ```bash helm repo update
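Putting the upgrade steps together, the CRD manifest URL for a given release can be assembled as below. The version `v0.25.1` is only an example, and the `banzaicloud-stable/kafka-operator` chart name in the comments is an assumption; use the values matching your installation.

```shell
# Sketch: build the CRD manifest URL for the release you are upgrading to.
# v0.25.1 is an example version, not a recommendation.
VERSION=v0.25.1
CRD_URL="https://github.com/banzaicloud/koperator/releases/download/${VERSION}/kafka-operator.crds.yaml"
echo "$CRD_URL"

# With cluster access, the upgrade itself would then be (chart name assumed):
#   kubectl replace --validate=false -f "$CRD_URL"
#   helm repo update
#   helm upgrade kafka-operator banzaicloud-stable/kafka-operator -n kafka
```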