Labels
bug (Report a bug encountered while operating Liqo)
Description
Is there an existing issue for this?
- I have searched the existing issues
Version
v1.0.1
What happened?
When resource enforcement is enabled on a provider cluster, the shadowpod webhook does not free resources for pods that are terminating (phase Failed or Succeeded). As a result, the quota calculation is higher than expected, and the consumer might not be able to schedule new workloads even if it is not actually consuming all of the resources provided by the provider.
The webhook should react when a pod's phase becomes Failed or Succeeded, updating its local cache to subtract the resources requested by that pod so the quota becomes available again.
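The expected behavior can be sketched with a minimal, self-contained Go model. All names here (`QuotaCache`, `Admit`, `ReleaseIfTerminated`) are illustrative assumptions, not Liqo's actual types or API; the point is only that terminal-phase pods should be subtracted from the cached usage exactly once:

```go
package main

import "fmt"

// PodPhase mirrors the Kubernetes pod lifecycle phases relevant here.
type PodPhase string

const (
	PodSucceeded PodPhase = "Succeeded"
	PodFailed    PodPhase = "Failed"
)

// Pod is a stripped-down stand-in for a shadow pod.
type Pod struct {
	Name       string
	Phase      PodPhase
	CPURequest int64 // millicores
}

// QuotaCache is a hypothetical model of the webhook's local accounting.
type QuotaCache struct {
	usedCPU  int64           // millicores currently counted against the quota
	released map[string]bool // pods whose resources were already freed
}

func NewQuotaCache() *QuotaCache {
	return &QuotaCache{released: map[string]bool{}}
}

// Admit accounts a pod's request against the quota at admission time.
func (c *QuotaCache) Admit(p Pod) { c.usedCPU += p.CPURequest }

// ReleaseIfTerminated frees a pod's resources once its phase is
// Failed or Succeeded, and does so at most once per pod.
func (c *QuotaCache) ReleaseIfTerminated(p Pod) {
	if p.Phase != PodSucceeded && p.Phase != PodFailed {
		return // pod is still running or pending: keep it accounted
	}
	if c.released[p.Name] {
		return // already freed: avoid double subtraction
	}
	c.usedCPU -= p.CPURequest
	c.released[p.Name] = true
}

// UsedCPU reports the millicores currently counted against the quota.
func (c *QuotaCache) UsedCPU() int64 { return c.usedCPU }

func main() {
	cache := NewQuotaCache()
	cache.Admit(Pod{Name: "job-1", CPURequest: 500})
	cache.ReleaseIfTerminated(Pod{Name: "job-1", Phase: PodSucceeded, CPURequest: 500})
	fmt.Println(cache.UsedCPU()) // 0: the quota is available for new pods again
}
```

With the bug described above, the subtraction step never happens, so `usedCPU` stays at the terminated pod's request and new admissions are rejected.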
Relevant log output
How can we reproduce the issue?
1. Peer two clusters with resource enforcement enabled on the provider (note: it is enabled by default)
2. Schedule a pod on the virtual node that requests ALL resources provided by the provider for a specific type (e.g., CPU); check the resourceslice status if in doubt
3. Let the pod succeed or fail, but do not delete it
4. Schedule any new pod (with requests or limits set) on the virtual node
At the end of step 4, the new pod is (incorrectly) rejected by the provider's shadowpod webhook, even though the resources of the terminated pod are no longer in use.
Provider or distribution
any
CNI version
any
Kernel Version
any
Kubernetes Version
1.34
Code of Conduct
- I agree to follow this project's Code of Conduct
dennispan