Which component are you using?:
/area vertical-pod-autoscaler
What version of the component are you using?:
Component version: 1.4.1
What k8s version are you using (kubectl version)?:

```
$ kubectl version
Client Version: v1.33.2
Kustomize Version: v5.6.0
Server Version: v1.33.3-gke.1136000
```
What environment is this in?:
GKE
What did you expect to happen?:
The VPA gets its historical data from Prometheus and continues to provide recommendations without restarting the pods.
What happened instead?:
After the VPA recommender restarts, the pods get restarted.
How to reproduce it (as minimally and precisely as possible):
Configure the VPA with Prometheus storage, create a VPA resource for a pod, and restart the VPA recommender after some time.
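For reference, a minimal sketch of the kind of VPA object this reproduction would use (the object and target names and the Auto update mode are assumptions; the test namespace comes from --vpa-object-namespace, and the 200Mi lower bound matches the limit mentioned further down):

```yaml
# Hypothetical VPA object for the reproduction; names and update mode are assumptions.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: example-vpa          # illustrative name
  namespace: test            # matches --vpa-object-namespace=test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app        # placeholder for the real workload
  updatePolicy:
    updateMode: "Auto"       # assumed; with Auto the updater may evict pods when the recommendation changes
  resourcePolicy:
    containerPolicies:
      - containerName: "*"
        minAllowed:
          memory: 200Mi      # the lower VPA limit referenced below
```

With updateMode Auto, the updater can evict pods once the recommendation moves outside the current requests, which is presumably how the drop in the recommendation after the recommender restart turns into pod restarts.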
The VPA recommender flags:
```yaml
args:
  - --address=:8942
  - --container-name-label=container
  - --container-namespace-label=namespace
  - --container-pod-name-label=pod
  - --cpu-histogram-decay-half-life=120h
  - --metric-for-pod-labels=kube_pod_labels{job="kube-state-metrics"}[8d]
  - --pod-label-prefix=label_
  - --pod-name-label=pod
  - --pod-namespace-label=namespace
  - --pod-recommendation-min-memory-mb=100
  - --prometheus-address=http://kube-prometheus-stack-prometheus.monitoring.svc.cluster.local:9090
  - --prometheus-cadvisor-job-name=kubelet
  - --storage=prometheus
  - --vpa-object-namespace=test
  - --history-length=1h
  - --history-resolution=30s
  - --confidence-interval-memory=10m
  - --v=8
```
Anything else we need to know?:

In the screenshot above, you can see the behavior around the recommender restart.
From the logs, I can see that the recommender queries Prometheus and receives the historical data:

And within a minute it recommends decreasing the memory to the lower VPA limit (200Mi):
