
DOCS-1273: Operator 6.0.0 Deploy/Upgrade docs, removing Console references #1284

Merged: 11 commits merged into main from DOCS-1273 on Aug 1, 2024

Conversation

ravindk89
Collaborator

Addresses #1273

Summary

This pass does three things:

  1. Updates all tutorials related to Operator/Tenant deployment for Kustomize and Helm
  2. Removes references to Operator Console + updates to reference Kustomize/Helm wherever possible
  3. Slightly tidies up old or dangling references

This pass does not do these things:

  • Link out heavily to Kubernetes docs (for later)
  • Clean up organization (singleplat build handles this)
  • Address OpenShift, Rancher, etc.

@ravindk89 ravindk89 added the WIP Work In Progress (Review Only If Requested) label Jul 25, 2024
@ravindk89 ravindk89 self-assigned this Jul 25, 2024
@feorlen
Collaborator

feorlen commented Jul 25, 2024


For Kubernetes 1.25.0 and later, MinIO supports deploying in environments with the :kube-docs:`Pod Security admission (PSA) <concepts/security/pod-security-admission>` ``restricted`` policy enabled.

Starting with v5.0.0, MinIO **requires** Kubernetes 1.28.0 or later for both the infrastructure and the ``kubectl`` CLI tool.
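
(For context, a cluster admin typically enables the ``restricted`` PSA policy referenced above by labeling the target namespace. A minimal sketch; the namespace name is illustrative:

kubectl label namespace minio-tenant \
  pod-security.kubernetes.io/enforce=restricted
)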
Collaborator

Which release pushed the minimum required k8s to 1.28? Or was 1.21 incorrect already for 5.0.x?

Collaborator Author

1.21 is EOL and has been for a while. I think we just never got around to updating this.

1.28 will be EOL this year, but is at least still in-support (for now).

Collaborator

Yes, but EOL is different than our code requiring something that's in a specific version of K8s, as was the case with 5.0.0 and K8s 1.21. You cannot run Operator 5.0.0 on something older than 1.21. It doesn't work.

Does 6.0.0 require 1.28 because of something that's only in 1.28 and newer? Or is it just the "please don't use unsupported, old versions of software?"

Collaborator Author

From internal convos it sounds like even if we do technically support older K8s, we want to push users to always stay on latest stable.

I'm going to leave this for now and then see whether there is a way for us to pull this out of the code if we really want to keep a backstop in place.

* - ``tenant.certificate.requestAutoCert``
- Enables or disables MinIO :ref:`automatic TLS certificate generation <minio-tls>`
Collaborator

are the accepted values here true or false?

Suggested change
* - ``tenant.certificate.requestAutoCert``
- Enables or disables MinIO :ref:`automatic TLS certificate generation <minio-tls>`
* - ``tenant.certificate.requestAutoCert``
- Enable or disable MinIO :ref:`automatic TLS certificate generation <minio-tls>`

Contributor

Yes @feorlen
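
(For reference, a minimal values.yaml sketch of this boolean field; the value shown is illustrative:

tenant:
  certificate:
    requestAutoCert: true   # boolean; set to false to disable automatic TLS
)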

Contributor

@ravindk89 @feorlen Are we using the imperative mood or simply describing the field (indicative mood)? It's not clear what the standard is.

Collaborator Author

My intent was to be imperative (I think), but not necessarily because it's our standard - it's just how I learned to write these sorts of things.

In which case I think we'd leave this as-is. Maybe for docs to discuss.

Collaborator

The step (numbered line at ln144) is imperative. These are all descriptions, so should be indicative. I'd leave 156 as is to match the other field descriptions. Or you'll need to change all of the following descriptions as well.

Collaborator Author

I changed everything; let's see how we feel about it.

Comment on lines 159 to 160
* - ``tenant.certificate.certConfig``
- Controls the settings for :ref:`automatic TLS <minio-tls>`.
Requires ``spec.requestAutoCert: true``
Collaborator

"Controls" is a bit vague, is it worth mentioning the type of accepted values here? There is more info in the CRD, which is reasonable to presume the reader is looking at.

Contributor

Also spec.requestAutoCert can be true, false, or simply not present. It is not a required field.

Collaborator Author

@allanrogerr iirc if ``requestAutoCert`` is ``false`` then changes to this field do nothing, right?

Indeed, omitting ``requestAutoCert`` defaults to ``true`` behavior, so I could reverse this

Suggested change
* - ``tenant.certificate.certConfig``
- Controls the settings for :ref:`automatic TLS <minio-tls>`.
Requires ``spec.requestAutoCert: true``
* - ``tenant.certificate.certConfig``
- Controls the settings for :ref:`automatic TLS <minio-tls>`.
Only has affect if ``spec.requestAutoCert`` is not false.

Collaborator

Can we say

Suggested change
* - ``tenant.certificate.certConfig``
- Controls the settings for :ref:`automatic TLS <minio-tls>`.
Requires ``spec.requestAutoCert: true``
* - ``tenant.certificate.certConfig``
- Controls the settings for :ref:`automatic TLS <minio-tls>`.
Only has an effect if ``spec.requestAutoCert`` is set to ``true``.

Saying that it must be "not false" sounds really awkward.

Collaborator Author

The easiest path here is to just state 'controls the settings... if enabled' and let the user infer that disabling one thing disables the other.
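
(A hedged sketch of how the two fields relate in values.yaml; the ``certConfig`` keys shown are illustrative and should be checked against the Tenant CRD:

tenant:
  certificate:
    requestAutoCert: true        # certConfig has no effect when this is false
    certConfig:
      commonName: ""             # illustrative keys; see the CRD for the full set
      organizationName: []
      dnsNames: []
)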

@@ -182,8 +246,78 @@ This method may support easier pre-configuration of the Tenant compared to the :

Each chart contains a ``values.yaml`` file you can customize to suit your needs.
For details on the options available in the MinIO Tenant ``values.yaml``, see :ref:`minio-tenant-chart-values`.
Collaborator

Is this link still needed if we are now actually discussing the contents of values.yaml here?

Collaborator Author

I think so - values.yaml has a lot more gritty detail; this tutorial (for now) is very high level.

Collaborator

I'd suggest keeping the separate chart file to cover all of the options. Then the tutorials can focus on just the basics of getting something up and running.
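
(For example, the tutorial-level flow would delegate the gritty detail to a values file; the release name and file name here are assumptions:

helm install \
  --namespace minio-tenant-1 \
  --create-namespace \
  --values my-values.yaml \
  minio-tenant minio-operator/tenant
)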


:ref:`create-tenant-access-minio-operator-console`
You can select a different base or pre-built template from the repository as your starting point, or build your own Kustomization resources using the MinIO Custom Resource Documentation.
Collaborator

Is there a good link for the DIY option?

Collaborator Author

Yeah, will do that in this pass
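
(One possible shape for the DIY route, as a minimal sketch; the base path is illustrative and would need to match the operator repository layout:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: minio-tenant
resources:
  - github.com/minio/operator/examples/kustomization/base
)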

Access the Tenant's :ref:`minio-console` by navigating to ``http://localhost:9443`` in a browser.
Log in to the Console with the default credentials ``myminio | minio123``.
If you modified these credentials in the ``values.yaml`` specify those values instead.
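
(One way to make the Console reachable at that address locally; the service and namespace names are assumptions based on the tutorial defaults:

kubectl port-forward svc/myminio-console -n minio-tenant 9443:9443
)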
watch kubectl get all -n minio-tenant
Contributor

Suggested change
watch kubectl get all -n minio-tenant
watch kubectl get all -n minio-tenant-1

Contributor

Not sure if you want to use MINIO_TENANT_NAMESPACE and MINIO_TENANT_NAME nomenclature, but we should be consistent.

To configure long term access to the pod, configure :kube-docs:`Ingress <concepts/services-networking/ingress/>` or similar network control components within Kubernetes to route traffic to and from the pod.
Configuring Ingress is out of the scope for this documentation.
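
(A hedged sketch of such an Ingress; the host, namespace, and backend service name are assumptions:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minio-tenant-ingress
  namespace: minio-tenant-1
spec:
  rules:
    - host: minio.example.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: minio        # assumed Tenant service name
                port:
                  number: 443
)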
helm install \
--namespace minio-tenant-1 \
Contributor

Not sure if you want to use MINIO_TENANT_NAMESPACE and MINIO_TENANT_NAME nomenclature, but we should be consistent.

Collaborator Author

Hm. Yeah we kind of go back and forth on this. I'll migrate to the var-replace style.
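
(A sketch of the var-replace style under discussion; the names are illustrative:

export MINIO_TENANT_NAMESPACE=minio-tenant-1
watch kubectl get all -n "$MINIO_TENANT_NAMESPACE"
)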

Collaborator

@djwfyi djwfyi left a comment

Do you have to be on 5.0.15 to upgrade to Operator 6.0.x?
The upgrade docs make that a little unclear.

Lots of suggestions here and there for clarity and whatnot, but otherwise what a lift to get this done! Nice work.



Do not use the ``kubectl krew`` or similar methods to update or manage the MinIO Tenant installation.
If you use Helm charts to deploy the Tenant, you must use Helm to manage that deployment.
If you use Helm to deploy a MinIO Tenant, you must use Helm to manage or upgrade that deployment.
Do not use ``kubectl krew``, Kustomize, or similar methods to manage or upgrade the MinIO Tenant.
Collaborator

We should drop krew here, right?

Collaborator Author

I think we should keep it because it was around at one point. We want to emphasize that even if someone was using krew, or heard or read about using it, it's no longer recommended.

@@ -72,14 +72,16 @@ You can modify the Operator deployment after installation.

NAME CHART VERSION APP VERSION DESCRIPTION
minio-operator/minio-operator 4.3.7 v4.3.7 A Helm chart for MinIO Operator
minio-operator/operator 5.0.10 v5.0.10 A Helm chart for MinIO Operator
minio-operator/tenant 5.0.10 v5.0.10 A Helm chart for MinIO Operator
minio-operator/operator 6.0.1 v6.0.1 A Helm chart for MinIO Operator
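
(Output like the above comes from searching the chart repository, e.g.:

helm repo update
helm search repo minio-operator
)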
Collaborator

Why specify the version number here and not use the variable?

- Your Kubernetes cluster runs 1.21.0 or later
- Your local host has ``kubectl`` installed and configured with access to the Kubernetes cluster

This procedure upgrades the MinIO Operator from any 4.5.8 or later release to |operator-version-stable|.
Collaborator

Is it upgrading from 4.5.8 to 6.0.1? The heading says to 5.0.15 on ln 38.

Collaborator Author

typo!

@ravindk89 ravindk89 added Review Assigned Reviewers Must LGTM To Close and removed WIP Work In Progress (Review Only If Requested) labels Jul 25, 2024
font-weight: bold;
}

div.procedure ol.arabic li ul.simple li p {
Collaborator Author

I feel dumb having done this. @rushenn @kaankabalak can we make this cleaner? For some reason it cascades down without this.

Member

Sure @ravindk89, I can take a look

@ravindk89 ravindk89 requested a review from djwfyi July 29, 2024 20:11
@ravindk89
Collaborator Author

I tested upgrading from 5.0.15 to 6.0.1 with Helm 3.15.3, to see how the CRD would change.

I can see the change in CRD and the diff shows that it does appear to update. But online docs and internal discussion indicate that Helm does not deal with CRD updates cleanly, so I'm not sure what's going on here or how to validate Helm upgrades working as expected.

That said we can still push this forward to at least get the docs out, possibly leaving a note on Helm Operator install docs stating that Helm does not handle CRD upgrades cleanly and that additional work may be required - though I still do not know what work.

cc @cniackz @pjuarezd @dvaldivia @ramondeklein for clarity on the above

@pjuarezd
Member

pjuarezd commented Jul 29, 2024

> I tested upgrading from 5.0.15 to 6.0.1 with Helm 3.15.3, to see how the CRD would change.
>
> I can see the change in CRD and the diff shows that it does appear to update. But online docs and internal discussion indicate that Helm does not deal with CRD updates cleanly, so I'm not sure what's going on here or how to validate Helm upgrades working as expected.
>
> That said we can still push this forward to at least get the docs out, possibly leaving a note on Helm Operator install docs stating that Helm does not handle CRD upgrades cleanly and that additional work may be required - though I still do not know what work.
>
> cc @cniackz @pjuarezd @dvaldivia @ramondeklein for clarity on the above

CRDs are always updated on the Operator helm chart update @ravindk89. We don't store them in the /crd directory, precisely to ensure they are always updated; otherwise, what you mention as "Helm does not handle CRD upgrades cleanly" would happen. Instead, we store the CRDs in /templates and always update them, as a regular helm chart resource.

At some point a plan to provide a separate helm chart just for the CRDs was even proposed, but it got no traction (Helm does not handle CRD upgrades cleanly).

For now I think we only need to mention that we always update the CRDs and that they are not stored in the /crd directory.
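
(One way to confirm the CRD changed after a chart upgrade, as a sketch; the release and namespace names are assumptions:

helm upgrade -n minio-operator operator minio-operator/operator
kubectl get crd tenants.minio.min.io -o jsonpath='{.spec.versions[*].name}'
)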

@ravindk89
Collaborator Author

Thanks @pjuarezd

From https://helm.sh/docs/chart_best_practices/custom_resource_definitions/

Another important point to consider in the discussion around CRD support is how the rendering of templates is handled. One of the distinct disadvantages of the crd-install method used in Helm 2 was the inability to properly validate charts due to changing API availability (a CRD is actually adding another available API to your Kubernetes cluster). If a chart installed a CRD, helm no longer had a valid set of API versions to work against. This is also the reason behind removing templating support from CRDs. With the new crds method of CRD installation, we now ensure that helm has completely valid information about the current state of the cluster.

It sounds like we're OK as long as we are building the charts with Helm 2, but Helm 3 removes the templating hook/approach we are currently using.

Is there any issue for the end user here to consider w.r.t. Helm 3, or can we safely assume that it's only baking the charts where it matters?

Collaborator

@djwfyi djwfyi left a comment

We need to clarify the upgrade path.
I think you have to get to 5.0.15 before upgrading to 6.0.1, but a few places seem to suggest that you can upgrade from 4.5.8 straight to 6.0.1.

Otherwise, another lot of minor fixes and nits I noted. And, in general, LGTM. I don't feel the need to see it again.

@@ -219,146 +216,165 @@ You can select a different base or pre-built template from the repository as you
This procedure is not exhaustive of all possible configuration options available in the :ref:`Tenant CRD <minio-operator-crd>`.
Collaborator

Tenant CRD text is linked to the Operator CRD. Should be ref to the tenant CRD, correct?

Collaborator Author

Errr, they're basically the same. The Operator CRD page includes the entire Tenant CRD definition.

There are other CRDs, so technically at some point we should deal with that, but it's not being done yet.


#. Specify the following information for the new pool:
Decommissioning a server pool involves three steps:
Collaborator

Four.
They'll also need to manually delete the decommissioned storage. See operator/PR#2203.

Add Trusted Certificate Authorities
The MinIO Tenant validates the TLS certificate presented by each connecting client against the host system's trusted root certificate store.
The MinIO Operator can attach additional third-party Certificate Authorities (CA) to the Tenant to allow validation of client TLS certificates signed by those CAs.
.. tab-item:: Operator Console
Collaborator

Why are we keeping this tab?

Collaborator

Maybe call it kubectl instead of Operator Console?

Collaborator Author

Good question actually.

Collaborator Author

Oh the intent was to guide users away from it for existing installs, but that's handled in the upgrade docs.

.. include:: /includes/common-installation.rst
:start-after: start-pool-order-must-not-change
:end-before: end-pool-order-must-not-change
You can also use ``kubectl edit`` to 'live edit' Kubernetes objects.
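
(For example; the Tenant and namespace names here are assumptions:

kubectl edit tenants.minio.min.io/myminio -n minio-tenant
)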
Collaborator

The single quote marks look odd. I'd swap them for " or just drop the emphasis altogether.

Collaborator

Elsewhere, it seems like you can upgrade to 6.0.1 from 4.5.8.
However, lines 15 and 16 seem to imply that you have to be on 5.0.15 before upgrading to 6.0.1.

Which is it?

Also, line 19 should be 4.5.7 or earlier.

Collaborator Author

AFAIK we want users on 5.0.15 first before jumping. Changed this up.

@ravindk89
Collaborator Author

@allanrogerr @cesnietor @pjuarezd final thoughts before I push this out? Staging updated

Contributor

@allanrogerr allanrogerr left a comment

Spelling

Co-authored-by: Allan Roger Reid <[email protected]>
Contributor

@allanrogerr allanrogerr left a comment

PTAL

Co-authored-by: Allan Roger Reid <[email protected]>
Contributor

@allanrogerr allanrogerr left a comment

LGTM

@djwfyi djwfyi merged commit 23253dd into main Aug 1, 2024
@djwfyi djwfyi deleted the DOCS-1273 branch August 1, 2024 19:51