Feature request

We are seeing Pushgateway occasionally get into a state where it will not accept metrics. The UI reports every group as "last push failed", and no new metrics are collected. This is rare and only happens in production. Killing the process fixes the issue (so it doesn't seem to be persisted state). It looks a lot like the situation with identical metrics across groups, but it seems to impact everything. At this point the pod does still serve /metrics (Prometheus sees samples from scrapes).

If clients are submitting bad data, one option would be to run with --push.disable-consistency-check, wait for /metrics scrapes to fail, and have the pod die. An even nicer approach would be that, once scrapes fail, /-/healthy should also fail (the process is literally unhealthy and won't serve metrics), allowing orchestration to kill it.
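For illustration, a sketch of the orchestration-side check this would enable, assuming a plain HTTP liveness probe (the /-/healthy endpoint is real; the loop, PGW_PID, and the 10s interval are hypothetical):

# Hypothetical supervisor loop: if /-/healthy reflected the broken state,
# a plain HTTP probe would be enough to get the pod recycled.
while true; do
  code=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:9091/-/healthy)
  [ "$code" -eq 200 ] || kill "$PGW_PID"  # supervisor restarts the process
  sleep 10
done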
What did you do?
Run
docker run -p 9091:9091 prom/pushgateway:v1.3.0 --push.disable-consistency-check
cat <<EOF | curl -X POST --data-binary @- http://127.0.0.1:9091/metrics/job/some_job/tag/val1
# TYPE some_metric counter
some_metric 1
EOF
cat <<EOF | curl -v -X POST --data-binary @- http://127.0.0.1:9091/metrics/job/some_job
# TYPE some_metric counter
some_metric{tag="val1"} 42
EOF
At this point the server is in an inconsistent state.
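The inconsistency shows up in the endpoints' status codes (a sketch; the exact error body returned by the metrics handler may vary):

# Both pushes were accepted, so two groups now expose the same series
# (some_metric{job="some_job",tag="val1"}) and rendering /metrics fails:
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:9091/metrics    # expect 5xx
# ...while the health endpoint still reports the process as healthy:
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:9091/-/healthy  # expect 200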
What did you expect to see?

What did you see instead? Under which circumstances?

Ideally, once /metrics cannot be served, /-/healthy should return an error code.

Pushgateway version: v1.3.0
Flags: --push.disable-consistency-check
Logs: (I've no complaint about anything in the logs)
Hmmm… first of all, I would like to understand what's actually going wrong in the first case (where, IIUC, consistency checks are enabled and the PGW still reaches an inconsistent state). This smells like an actual bug that we shouldn't gloss over.
I'm also surprised that this state isn't persisted. It makes things even weirder.
If persistence worked (which it should), restarting the PGW wouldn't really help. If persistence is switched off, a restart would help for the time being, but it would also wipe the state you might want to investigate to understand what inconsistent metric has been pushed. And finally, with consistency checks active, the PGW should never get into this state in the first place, and auto-restarts would just be a work-around for a bug we don't understand.
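As an aside, whether any state survives a restart is governed by the persistence flags; a minimal sketch (paths chosen arbitrarily):

# With --persistence.file set, pushed metrics are written to disk every
# --persistence.interval (default 5m) and reloaded on startup; if the
# flag is left empty (the default), state is kept in memory only.
docker run -p 9091:9091 -v /tmp/pgw:/data prom/pushgateway:v1.3.0 \
  --persistence.file=/data/pushgateway.state \
  --persistence.interval=5m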
From that perspective, I'm not so sure if failing the health check in case of inconsistent metrics is a good idea. A middle ground could be to put that behavior behind a flag.
I do agree, and some other aspects strongly suggest something else was going on in our case. We've seen this twice, but unfortunately over the course of two (possibly three) years.