
fix: add messages for high latency during scrape and publish #1215


Open · wants to merge 1 commit into main

Conversation

danielstokes
Contributor

Description

  • Add messages for high latency during scrape and publish
  • Fix the sleep interval so that, when a scrape runs longer than the configured interval, the scraper waits for the next scheduled tick instead of running again immediately
  • Fix log message when setting up the kubelet scraper
  • Fix kubelet scraper during shutdown

Currently, when we scrape KSM, the Kubelet, or the control plane, if the scrape duration (the total time it takes to collect metrics, transform them, and publish them) exceeds the configured interval, the scraper sleeps for a negative duration, i.e. it runs again immediately. This not only breaks the cadence at which data is reported, but also means that when a component is already overloaded, the scraper applies no back-off at all.
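For context, here is a minimal Go sketch of the scheduling behaviour described above. The function name `runScrapeLoop` and its signature are illustrative only, not the integration's actual code; it shows a high-latency warning plus waiting for the next scheduled tick instead of sleeping a negative duration.

```go
package main

import (
	"log"
	"time"
)

// runScrapeLoop is a hypothetical illustration of the scheduling fix:
// instead of sleeping for interval-elapsed (which goes negative when a
// scrape overruns), it always waits until the next scheduled tick.
func runScrapeLoop(interval time.Duration, scrape func() error, stop <-chan struct{}) {
	for {
		start := time.Now()

		if err := scrape(); err != nil {
			log.Printf("scrape failed: %v", err)
		}

		elapsed := time.Since(start)
		if elapsed > interval {
			// High-latency message: the scrape (collect + transform + publish)
			// took longer than the configured interval.
			log.Printf("scrape took %s, which exceeds the configured interval of %s; "+
				"waiting for the next scheduled run", elapsed, interval)
		}

		// Sleep until the next scheduled tick rather than interval-elapsed,
		// which would be negative after an overrun.
		wait := interval - (elapsed % interval)

		select {
		case <-stop:
			return
		case <-time.After(wait):
		}
	}
}
```

Aligning to the next tick keeps the reporting cadence stable and gives an overloaded target a full interval to recover before the next scrape.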

Type of change

  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • New feature / enhancement (non-breaking change which adds functionality)
  • Security fix
  • Bug fix (non-breaking change which fixes an issue)

Checklist:

  • Add changelog entry following the contributing guide
  • Documentation has been updated
  • This change requires changes in testing:
    • unit tests
    • E2E tests

danielstokes requested a review from a team as a code owner on May 5, 2025, 16:43