2 changes: 1 addition & 1 deletion source/default-conf.py
@@ -331,7 +331,7 @@
.. |SNSD| replace:: :abbr:`SNSD (Single-Node Single-Drive)`
.. |SNMD| replace:: :abbr:`SNMD (Single-Node Multi-Drive)`
.. |MNMD| replace:: :abbr:`MNMD (Multi-Node Multi-Drive)`
.. |operator-version-stable| replace:: 5.0.15
.. |operator-version-stable| replace:: OPERATOR
.. |helm-charts| replace:: `Helm Charts <https://github.com/minio/operator/tree/vOPERATOR/helm>`__
.. |helm-operator-chart| replace:: `Helm Operator Charts <https://github.com/minio/operator/blob/vOPERATOR/helm/operator>`__
.. |helm-tenant-chart| replace:: `Helm Tenant Charts <https://github.com/minio/operator/tree/vOPERATOR/helm/tenant>`__
100 changes: 5 additions & 95 deletions source/includes/common/common-install-operator-kustomize.rst
@@ -66,42 +66,6 @@ The following procedure uses ``kubectl -k`` to install the Operator from the Min

.. _minio-k8s-deploy-operator-access-console:

#. *(Optional)* Configure access to the Operator Console service

The Operator Console service does not automatically bind or expose itself for external access on the Kubernetes cluster.
You must instead configure a network control plane component, such as a load balancer or ingress, to grant that external access.

For testing purposes or short-term access, expose the Operator Console service through a NodePort using the following patch:

.. code-block:: shell
:class: copyable

kubectl patch service -n minio-operator console -p '
{
"spec": {
"ports": [
{
"name": "http",
"port": 9090,
"protocol": "TCP",
"targetPort": 9090,
"nodePort": 30090
},
{
"name": "https",
"port": 9443,
"protocol": "TCP",
"targetPort": 9443,
"nodePort": 30433
}
],
"type": "NodePort"
}
}'

The patch command should output ``service/console patched``.
You can now access the service through ports ``30433`` (HTTPS) or ``30090`` (HTTP) on any of your Kubernetes worker nodes.
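As a quick, optional check of the NodePort binding, you can request the Console endpoint from outside the cluster; the address below is a placeholder for the IP of any worker node.

.. code-block:: shell
:class: copyable

# Replace 192.0.2.10 with the IP address or hostname of a worker node.
# A 200 response code (or a redirect) indicates the service is reachable.
curl -sS -o /dev/null -w '%{http_code}\n' http://192.0.2.10:30090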

#. Verify the Operator installation

Check the contents of the specified namespace (``minio-operator``) to ensure all pods and services have started successfully.
@@ -123,7 +87,6 @@ The following procedure uses ``kubectl -k`` to install the Operator from the Min
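The namespace contents can be listed with a query along the following lines, which produces output similar to the excerpt shown below; treat this as a sketch, as the exact command in this procedure may differ.

.. code-block:: shell
:class: copyable

# List all resources in the Operator namespace.
kubectl get all -n minio-operator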
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/operator ClusterIP 10.43.135.241 <none> 4221/TCP 5m20s
service/sts ClusterIP 10.43.117.251 <none> 4223/TCP 5m20s
service/console NodePort 10.43.235.38 <none> 9090:30090/TCP,9443:30433/TCP 5m20s

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/console 1/1 1 1 5m20s
@@ -133,63 +96,10 @@ The following procedure uses ``kubectl -k`` to install the Operator from the Min
replicaset.apps/console-56c7d8bd89 1 1 1 5m20s
replicaset.apps/minio-operator-6c758b8c45 2 2 2 5m20s

#. Retrieve the Operator Console JWT for login

.. code-block:: shell
:class: copyable

kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
name: console-sa-secret
namespace: minio-operator
annotations:
kubernetes.io/service-account.name: console-sa
type: kubernetes.io/service-account-token
EOF
SA_TOKEN=$(kubectl -n minio-operator get secret console-sa-secret -o jsonpath="{.data.token}" | base64 --decode)
echo $SA_TOKEN

The output of this command is the JSON Web Token (JWT) login credential for Operator Console.
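If the ``echo`` prints an empty string, you can confirm that Kubernetes populated the service account token before retrying; this is an optional troubleshooting step, not part of the original procedure.

.. code-block:: shell
:class: copyable

# Describe the secret to verify the token field exists and is non-empty.
kubectl -n minio-operator describe secret console-sa-secret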

#. Log into the MinIO Operator Console


.. tab-set::

.. tab-item:: NodePort
:selected:

If you configured the service for access through a NodePort, specify the hostname or IP address of any worker node in the cluster together with that port, as ``HOSTNAME:NODEPORT``, to access the Console.

For example, a deployment configured with a NodePort of 30090 and the following ``InternalIP`` addresses can be accessed at ``http://172.18.0.5:30090``.

.. code-block:: shell
:class: copyable

kubectl get nodes -o custom-columns=IP:.status.addresses[:]
IP
map[address:172.18.0.5 type:InternalIP],map[address:k3d-MINIO-agent-3 type:Hostname]
map[address:172.18.0.6 type:InternalIP],map[address:k3d-MINIO-agent-2 type:Hostname]
map[address:172.18.0.2 type:InternalIP],map[address:k3d-MINIO-server-0 type:Hostname]
map[address:172.18.0.4 type:InternalIP],map[address:k3d-MINIO-agent-1 type:Hostname]
map[address:172.18.0.3 type:InternalIP],map[address:k3d-MINIO-agent-0 type:Hostname]

.. tab-item:: Ingress or Load Balancer

If you configured the ``svc/console`` service for access through ingress or a cluster load balancer, you can access the Console using the configured hostname and port.
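For example, an Ingress rule along these lines can route a hostname to the ``console`` service; the hostname ``operator.example.net`` is a placeholder, the command assumes your cluster has a default ingress class, and your controller may require additional settings such as TLS configuration.

.. code-block:: shell
:class: copyable

# Hypothetical example: replace operator.example.net with your hostname.
# Add --class=<ingress-class> if the cluster has no default ingress class.
kubectl create ingress operator-console -n minio-operator --rule="operator.example.net/*=console:9090"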

.. tab-item:: Port Forwarding

You can use ``kubectl port-forward`` to temporarily forward ports for the Console:

.. code-block:: shell
:class: copyable

kubectl port-forward svc/console -n minio-operator 9090:9090
#. Next Steps

You can then use ``http://localhost:9090`` to access the MinIO Operator Console.
You can deploy MinIO tenants using the MinIO CRD and Kustomize.

Once you access the Console, use the Console JWT to log in.
You can now :ref:`deploy and manage MinIO Tenants using the Operator Console <deploy-minio-distributed>`.
MinIO also provides a Helm chart for deploying Tenants.
However, MinIO recommends using the same method of Tenant deployment and management used to install the Operator.
Mixing Kustomize and Helm for Operator or Tenant management may increase operational complexity.
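As a sketch of the Kustomize path, you can render the example Tenant kustomization from the ``minio/operator`` repository, review it, and apply the result; the repository path below is an assumption about that repository's current layout.

.. code-block:: shell
:class: copyable

# Render the example Tenant resources to a local file for review.
kubectl kustomize https://github.com/minio/operator/examples/kustomization/base > my-tenant.yaml
# Edit my-tenant.yaml (namespace, pools, storage class), then apply it.
kubectl apply -f my-tenant.yaml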
@@ -1,8 +1,6 @@
The Operator Console service does not automatically bind or expose itself for external access on the Kubernetes cluster.
Instead, you must configure a network control plane component, such as a load balancer or ingress, to grant external access.

.. cond:: k8s

For testing purposes or short-term access, expose the Operator Console service through a NodePort using the following patch:

.. code-block:: shell
9 changes: 2 additions & 7 deletions source/includes/k8s/deploy-operator.rst
@@ -65,19 +65,14 @@ The tenant utilizes Persistent Volume Claims to talk to the Persistent Volumes t
Prerequisites
-------------

Kubernetes Version 1.21.0
Kubernetes Version 1.28.0
~~~~~~~~~~~~~~~~~~~~~~~~~

.. important::

MinIO **strongly recommends** upgrading Production clusters running `End-Of-Life <https://kubernetes.io/releases/patch-releases/#non-active-branch-history>`__ Kubernetes APIs.

Starting with v5.0.0, MinIO **requires** Kubernetes 1.21.0 or later for both the infrastructure and the ``kubectl`` CLI tool.

.. versionadded:: Operator 5.0.6

For Kubernetes 1.25.0 and later, MinIO supports deploying in environments with the :kube-docs:`Pod Security admission (PSA) <concepts/security/pod-security-admission>` ``restricted`` policy enabled.

Starting with v5.0.0, MinIO **requires** Kubernetes 1.28.0 or later for both the infrastructure and the ``kubectl`` CLI tool.
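To confirm that both the cluster and your ``kubectl`` CLI meet this minimum, compare the reported client and server versions; this is a generic Kubernetes check rather than anything specific to the Operator.

.. code-block:: shell
:class: copyable

# The Server Version must report v1.28.0 or later.
kubectl version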
Collaborator
Which release pushed the minimum required k8s to 1.28? Or was 1.21 incorrect already for 5.0.x?

Collaborator Author
1.21 is EOL and has been for a while. I think we just never got around to updating this.

1.28 will be EOL this year, but is at least still in-support (for now).

Collaborator
Yes, but EOL is different than our code requiring something that's in a specific version of K8s, as was the case with 5.0.0 and K8s 1.21. You cannot run Operator 5.0.0 on something older than 1.21. It doesn't work.

Does 6.0.0 require 1.28 because of something that's only in 1.28 and newer? Or is it just the "please don't use unsupported, old versions of software?"

Collaborator Author
From internal convos it sounds like even if we do technically support older K8s, we want to push users to always stay on latest stable.

I'm going to leave this for now and then see whether there is a way for us to pull this out of the code if we really want to keep a backstop in place.


Kustomize and ``kubectl``
~~~~~~~~~~~~~~~~~~~~~~~~~