
Discussion: how many kubernetes versions should we support directly? #23

egeland opened this issue Mar 21, 2019 · 4 comments

egeland (Collaborator) commented Mar 21, 2019

Since we generate images based both on the kubectl version (which is tied directly to the Kubernetes release version) and on the version of this tool, we need to decide how many versions of Kubernetes we should support.
By versions I mean the MINOR version numbers: at the time of writing, LATEST is 1.13.4 and LATEST-1 is 1.12.6.

My suggestion is to support LATEST, LATEST-1, and LATEST-2, each at its highest PATCH version level.

This would involve creating a version branch for each supported Kubernetes MINOR version and applying tags in those branches after backporting new features.
Concretely, we would create branches 1.13, 1.12, and 1.11 and work out the easiest way to port changes to them.
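The branch-and-backport flow described above could look something like this (a minimal sketch in a throwaway repository; the branch names are the ones proposed above, while the commit messages and tag names are made up for illustration):

```shell
# Sketch: one branch per supported Kubernetes MINOR version, with a feature
# landed on the default branch and then backported to each release branch.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"    # throwaway identity for the sketch
git config user.name "dev"
git commit -q --allow-empty -m "initial commit"

# Create a release branch per supported Kubernetes MINOR version.
for v in 1.13 1.12 1.11; do
  git branch "$v"
done

# Land a feature on the default branch...
git commit -q --allow-empty -m "feat: add new feature"
feature=$(git rev-parse HEAD)

# ...then backport it to each release branch and tag there.
for v in 1.13 1.12 1.11; do
  git checkout -q "$v"
  git cherry-pick --allow-empty "$feature"   # the backport step
  git tag "example-tag-$v"                   # tags live in the version branch
done
```

In practice the backport would be a real cherry-pick of the feature commits (with conflict resolution where the branches have diverged), and the tags would carry the actual release versions.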

Thoughts?

egeland (Collaborator, issue author) commented Mar 21, 2019

For example, what other versions should actually be included in #22?

mumoshu (Contributor) commented Mar 22, 2019

A few notes:

  • AFAIK, the upstream Kubernetes project supports 4 minor versions including the latest release, currently 1.13, 1.12, 1.11, and 1.10. Using anything older than 1.10 means we can't receive patches even for critical bugs.

  • Also, if we rely on a feature only available in recent kubectl versions, that raises the minimum version requirement. Relying on kubectl apply to configure any field belonging to an "alpha" Kubernetes feature raises it as well, because kubectl get -o yaml something --export | kubectl apply -f - silently throws away every field unknown to the old kubectl!
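For illustration, here is the round-trip in question (illustrative only: it assumes a live cluster and a Deployment named myapp, so it is not runnable standalone; the resource name is made up):

```shell
# Round-tripping a resource through an old kubectl: fields the old client's
# schema does not know about are dropped from the exported YAML, so applying
# it back removes those fields from the live object.
kubectl get deployment myapp -o yaml --export > exported.yaml
# exported.yaml is now missing any field newer than this kubectl...
kubectl apply -f exported.yaml
# ...and the apply strips those fields from the cluster, silently.
```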

mumoshu (Contributor) commented Mar 22, 2019

That being said, can we safely use the oldest possible kubectl and release only one Docker image per project version, rather than per kubectl version?

If I remember correctly, it was me who introduced the current versioning scheme for kube-spot-termination-notice-handler. Shamelessly discarding my earlier decisions and habits: can we just start from semver 1.0.0, plus the rule of always using the oldest supported kubectl version? That would result in tags like the following:

  • kubeaws/kube-spot-termination-notice-handler:1.0.0 (includes the oldest supported kubectl at the time)
  • kubeaws/kube-spot-termination-notice-handler:1.1.0
  • kubeaws/kube-spot-termination-notice-handler:1.1.1

egeland (Collaborator, issue author) commented Mar 23, 2019

It was me who introduced the current naming, I think… 😉

Happy to switch to a semver scheme like you propose.
To clarify: we will generate only one image, built against the "oldest supported kubectl" (1.10.x at the time of writing).

Version increments

  • Each time a new Kubernetes version is released, we increment the MAJOR version number.
  • Each time we add a feature to the tool itself, we increment the MINOR number, and
  • for any bugfix or security patch, we increment the PATCH number.

Correct?
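The three rules above could be sketched as a tiny helper (the bump function and its trigger names are hypothetical, purely to make the proposed rules concrete):

```shell
# Hypothetical helper: bump a semver string according to the proposed rules.
bump() {  # usage: bump <version> <kubernetes|feature|fix>
  IFS=. read -r major minor patch <<EOF
$1
EOF
  case "$2" in
    kubernetes) echo "$((major + 1)).0.0" ;;                 # new Kubernetes release
    feature)    echo "${major}.$((minor + 1)).0" ;;          # new tool feature
    fix)        echo "${major}.${minor}.$((patch + 1))" ;;   # bugfix / security patch
  esac
}

bump 1.0.0 kubernetes   # -> 2.0.0
bump 1.0.0 feature      # -> 1.1.0
bump 1.1.0 fix          # -> 1.1.1
```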
