feat: addition of launch TTL for nodeclaim lifecycle #2349
Conversation
func (l *Liveness) Reconcile(ctx context.Context, nodeClaim *v1.NodeClaim) (reconcile.Result, error) {
	registered := nodeClaim.StatusConditions().Get(v1.ConditionTypeRegistered)
	if registered.IsTrue() {
		return reconcile.Result{}, nil
	}
	launched := nodeClaim.StatusConditions().Get(v1.ConditionTypeLaunched)
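The diff excerpt above is truncated right where the new launch check would begin. As a minimal sketch only, assuming the launchTTL constant and the clock and kubeClient fields on Liveness, the continuation might look something like this rather than the PR's exact code:

	// Hypothetical continuation: if the NodeClaim never reached Launched within
	// launchTTL, delete it so the provisioner can retry with a fresh decision.
	if launched.IsFalse() && l.clock.Since(nodeClaim.CreationTimestamp.Time) >= launchTTL {
		if err := l.kubeClient.Delete(ctx, nodeClaim); err != nil {
			return reconcile.Result{}, client.IgnoreNotFound(err)
		}
		return reconcile.Result{}, nil
	}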
If we're going to functionally decompose the delete, why not add the status condition check too? You could pass in the condition and the TTL to check for in addition to the reason.
I'm not sure they need to be functionally decomposed in the first place, but if you want to go that route I think we should decompose it all. We could also define the types of conditions we check for in an array or something more explicit.
The status conditions and TTLs aren't handled the same way, and I think it's fine to functionally decompose the delete and the metric because of that. I think somewhere in between is something like:
type NodeClaimTTL struct {
	duration time.Duration
	reason   string
}

var (
	RegistrationTTL = NodeClaimTTL{
		duration: registrationTTL,
		reason:   registrationTTLReason,
	}
	LaunchTTL = NodeClaimTTL{
		duration: launchTTL,
		reason:   launchTTLReason,
	}
)
WDYT?
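As a sketch of that direction, here is one way a single decomposed helper could consume the proposed struct, assuming Liveness carries clock and kubeClient fields; the helper name deleteForExpiredTTL and its body are assumptions for illustration, not code from this PR:

	// deleteForExpiredTTL is a hypothetical helper: once the given TTL has
	// elapsed since the NodeClaim was created, it deletes the NodeClaim,
	// with ttl.reason identifying which timeout fired.
	func (l *Liveness) deleteForExpiredTTL(ctx context.Context, nodeClaim *v1.NodeClaim, ttl NodeClaimTTL) error {
		if l.clock.Since(nodeClaim.CreationTimestamp.Time) < ttl.duration {
			return nil
		}
		if err := l.kubeClient.Delete(ctx, nodeClaim); err != nil {
			return client.IgnoreNotFound(err)
		}
		// A termination metric tagged with ttl.reason would be recorded here.
		return nil
	}

With this shape, adding another timeout later only means declaring one more NodeClaimTTL value.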
Fixes #N/A
Description
If Karpenter encounters issues launching instances, it should retry for a shorter window than the full registration TTL. This unblocks provisioning decisions for pods that would otherwise be stuck waiting on compute that will never come up due to launch failures, and it lets Karpenter make fresh decisions about what compute to provision.
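Purely as illustration of the relationship between the two windows, the durations below are assumed values, not ones taken from this PR:

	const (
		registrationTTL = 15 * time.Minute // full window to wait for node registration (assumed value)
		launchTTL       = 5 * time.Minute  // shorter window to wait for a successful launch (assumed value)
	)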
How was this change tested?
Unit tests.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.