
feat: add nodeclaim disruption status for forceful disruptions #2151

Open
liafizan wants to merge 5 commits into base: main

Conversation

liafizan

Fixes #2023

Description
Add a disruption status to forceful disruptions so that we can improve tracking of things like how many pods were disrupted for a given reason.

How was this change tested?
Unit tests and local testing with the kwok provider.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

linux-foundation-easycla bot commented Apr 17, 2025

CLA Signed

The committers listed above are authorized under a signed CLA.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. label Apr 17, 2025
@k8s-ci-robot
Contributor

Welcome @liafizan!

It looks like this is your first PR to kubernetes-sigs/karpenter 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/karpenter has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Apr 17, 2025
@k8s-ci-robot
Contributor

Hi @liafizan. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the size/L (Denotes a PR that changes 100-499 lines, ignoring generated files.) and cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA.) labels and removed the cncf-cla: no (Indicates the PR's author has not signed the CNCF CLA.) label Apr 17, 2025
@coveralls

coveralls commented Apr 17, 2025

Pull Request Test Coverage Report for Build 15888112882

Details

  • 23 of 36 (63.89%) changed or added relevant lines in 3 files are covered.
  • 2 unchanged lines in 1 file lost coverage.
  • Overall coverage decreased (-0.07%) to 81.796%

Changes Missing Coverage                                    Covered Lines   Changed/Added Lines   %
pkg/controllers/node/health/controller.go                   8               12                    66.67%
pkg/controllers/nodeclaim/expiration/controller.go          8               12                    66.67%
pkg/controllers/nodeclaim/garbagecollection/controller.go   7               12                    58.33%

Files with Coverage Reduction                               New Missed Lines   %
pkg/test/expectations/expectations.go                       2                  93.14%

Totals Coverage Status
Change from base Build 15884139327: -0.07%
Covered Lines: 10240
Relevant Lines: 12519

💛 - Coveralls

@cnmcavoy
Contributor

cnmcavoy commented May 2, 2025

lgtm

Member

@jonathan-innis jonathan-innis left a comment

Thanks for making this update 🎉 This is awesome! A few comments about messaging and about ensuring the patches don't accidentally overwrite data.

@@ -130,6 +130,11 @@ func (c *Controller) deleteNodeClaim(ctx context.Context, nodeClaim *v1.NodeClai
    if !nodeClaim.DeletionTimestamp.IsZero() {
        return reconcile.Result{}, nil
    }
    stored := nodeClaim.DeepCopy()
    nodeClaim.StatusConditions().SetTrueWithReason(v1.ConditionTypeDisruptionReason, v1.DisruptionReasonUnhealthy, "node unhealthy")
Member

Does it make sense for us to have a more verbose message here?

Author

I added some more information to the status message. Please let me know if this suffices.
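
(Illustrative only, not the PR's final wording: a richer message might name the node and the failing condition. The node and unhealthyCondition variables below are assumptions about what is in scope in the health controller, not the PR's code.)

// Hypothetical sketch of a more descriptive status message (requires "fmt").
nodeClaim.StatusConditions().SetTrueWithReason(
    v1.ConditionTypeDisruptionReason,
    v1.DisruptionReasonUnhealthy,
    fmt.Sprintf("disrupting NodeClaim: node %q has condition %s=%s beyond the toleration window",
        node.Name, unhealthyCondition.Type, unhealthyCondition.Status),
)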

@@ -130,6 +130,11 @@ func (c *Controller) deleteNodeClaim(ctx context.Context, nodeClaim *v1.NodeClai
    if !nodeClaim.DeletionTimestamp.IsZero() {
        return reconcile.Result{}, nil
    }
    stored := nodeClaim.DeepCopy()
    nodeClaim.StatusConditions().SetTrueWithReason(v1.ConditionTypeDisruptionReason, v1.DisruptionReasonUnhealthy, "node unhealthy")
    if err := c.kubeClient.Status().Patch(ctx, nodeClaim, client.MergeFrom(stored)); err != nil {
Member

We need an optimistic lock here, or it's possible that some other client could override it.
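
(A minimal sketch of the requested change, assuming controller-runtime's client.MergeFromWithOptions with client.MergeFromWithOptimisticLock and a requeue on conflict; this is not the PR's final code, and apierrors refers to k8s.io/apimachinery/pkg/api/errors.)

stored := nodeClaim.DeepCopy()
nodeClaim.StatusConditions().SetTrueWithReason(v1.ConditionTypeDisruptionReason, v1.DisruptionReasonUnhealthy, "node unhealthy")
// The optimistic-lock option adds a resourceVersion precondition to the patch, so a
// concurrent writer surfaces as a Conflict error instead of being silently overwritten.
if err := c.kubeClient.Status().Patch(ctx, nodeClaim, client.MergeFromWithOptions(stored, client.MergeFromWithOptimisticLock{})); err != nil {
    if apierrors.IsConflict(err) {
        // Another client updated the NodeClaim first; requeue and retry the reconcile.
        return reconcile.Result{Requeue: true}, nil
    }
    return reconcile.Result{}, client.IgnoreNotFound(err)
}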

@@ -78,6 +78,11 @@ func (c *Controller) Reconcile(ctx context.Context, nodeClaim *v1.NodeClaim) (re
        return reconcile.Result{RequeueAfter: expirationTime.Sub(c.clock.Now())}, nil
    }
    // 3. Otherwise, if the NodeClaim is expired we can forcefully expire the nodeclaim (by deleting it)
    stored := nodeClaim.DeepCopy()
    nodeClaim.StatusConditions().SetTrueWithReason(v1.ConditionTypeDisruptionReason, v1.DisruptionReasonExpired, "nodeClaim expired")
    if err := c.kubeClient.Status().Patch(ctx, nodeClaim, client.MergeFrom(stored)); err != nil {
Member

Same comment here -- we should do an optimistic lock and handle the conflict

@@ -78,6 +78,11 @@ func (c *Controller) Reconcile(ctx context.Context, nodeClaim *v1.NodeClaim) (re
        return reconcile.Result{RequeueAfter: expirationTime.Sub(c.clock.Now())}, nil
    }
    // 3. Otherwise, if the NodeClaim is expired we can forcefully expire the nodeclaim (by deleting it)
    stored := nodeClaim.DeepCopy()
    nodeClaim.StatusConditions().SetTrueWithReason(v1.ConditionTypeDisruptionReason, v1.DisruptionReasonExpired, "nodeClaim expired")
Member

Same comment here: Can we add a more descriptive message to this status condition?

Author

Added more info; please let me know if you feel we need any additional information.

@@ -95,7 +95,7 @@ func (c *Controller) Reconcile(ctx context.Context) (reconcile.Result, error) {
        if node != nil && nodeutils.GetCondition(node, corev1.NodeReady).Status == corev1.ConditionTrue {
            return
        }
-       if err := c.kubeClient.Delete(ctx, nodeClaims[i]); err != nil {
+       if err := c.updateStatusAndDelete(ctx, nodeClaims[i]); err != nil {
Member

nit: Could you consider doing this inline and adding it above like you did the other ones? If it fires the gocyclo linter, it's fine to disable it for the function.
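
(Sketch of the linter escape hatch the reviewer mentions, assuming golangci-lint's standard nolint directive; the function body here is elided and is not the PR's code.)

// Suppresses only the gocyclo check for this function if inlining the status
// patch pushes it over the complexity threshold.
//nolint:gocyclo
func (c *Controller) Reconcile(ctx context.Context) (reconcile.Result, error) {
    // ... garbage-collection logic with the status patch inlined ...
    return reconcile.Result{RequeueAfter: time.Minute * 2}, nil
}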

func (c *Controller) updateStatusAndDelete(ctx context.Context, nodeClaim *v1.NodeClaim) error {
    stored := nodeClaim.DeepCopy()
    nodeClaim.StatusConditions().SetTrueWithReason(v1.ConditionTypeDisruptionReason, v1.DisruptionReasonGarbageCollected, "nodeClaim garbage collected")
    if err := c.kubeClient.Status().Patch(ctx, nodeClaim, client.MergeFrom(stored)); err != nil {
Member

Optimistic locking

@@ -116,6 +116,18 @@ func (c *Controller) Reconcile(ctx context.Context) (reconcile.Result, error) {
    return reconcile.Result{RequeueAfter: time.Minute * 2}, nil
}

func (c *Controller) updateStatusAndDelete(ctx context.Context, nodeClaim *v1.NodeClaim) error {
    stored := nodeClaim.DeepCopy()
    nodeClaim.StatusConditions().SetTrueWithReason(v1.ConditionTypeDisruptionReason, v1.DisruptionReasonGarbageCollected, "nodeClaim garbage collected")
Member

More descriptive message?

@jonathan-innis
Member

/assign @jonathan-innis

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: cnmcavoy, liafizan
Once this PR has been reviewed and has the lgtm label, please ask for approval from jonathan-innis. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label May 30, 2025
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 25, 2025
Labels
cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA.)
needs-ok-to-test (Indicates a PR that requires an org member to verify it is safe to test.)
size/L (Denotes a PR that changes 100-499 lines, ignoring generated files.)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

DisruptionReason status condition isn't propagated for forceful disruptions
5 participants