
[request]: Cached /metrics result #621

Open
victoramsantos opened this issue Dec 11, 2023 · 3 comments

Comments

@victoramsantos (Contributor)

Use case. Why is this important?

I'm working at a company where we are already hitting the AWS API-call quota limits for CloudWatch. We are considering ways to reduce these calls without impacting user experience, such as removing metrics or greatly increasing period_seconds for all metrics.

I'd like to discuss whether it would be worthwhile to add a caching option to cloudwatch-exporter: a TTL such that, even when further requests hit /metrics, we keep answering from the cache until the TTL expires; then we make another request to collect fresh metrics, cache the new answer, and repeat the process.

This would cut our request volume roughly in half (since we run 2 Prometheus replicas).

Is this a desirable feature that we could spend some time on?
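The TTL behavior described above can be sketched in a few lines. This is a minimal illustration of the idea, not anything from cloudwatch-exporter itself (which is written in Java); the `TTLCache` name and its interface are assumptions made up for this sketch.

```python
import time


class TTLCache:
    """Serve a stored scrape result until it expires, then refresh it
    by calling the supplied fetch function (the expensive CloudWatch
    collection) and caching the new answer."""

    def __init__(self, fetch, ttl_seconds, clock=time.monotonic):
        self.fetch = fetch        # performs the expensive scrape
        self.ttl = ttl_seconds
        self.clock = clock        # injectable for testing
        self._value = None
        self._expires_at = 0.0

    def get(self):
        now = self.clock()
        if now >= self._expires_at:
            # TTL elapsed: collect fresh metrics and restart the window.
            self._value = self.fetch()
            self._expires_at = now + self.ttl
        return self._value
```

With a 60-second TTL, any number of /metrics requests within the window trigger only one upstream collection, which is what halves the call volume for two scraping replicas.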

@SuperQ (Member)

SuperQ commented Dec 12, 2023

Caching of /metrics would more likely be implemented as caching of specific CloudWatch API calls. For example, ListMetrics caching was added in #453.

Something similar could be done for the actual metric fetching calls as well.

As a workaround, this is already possible with any caching reverse proxy. For example, it's pretty easy to do with an EnvoyProxy sidecar. This is what we do in production.
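To make the reverse-proxy workaround concrete, here is a minimal sketch using nginx as the caching sidecar (the comment above uses Envoy; nginx is chosen here only because its cache directives are compact). The ports, cache zone name, and 60-second TTL are assumptions for illustration, not the production setup described in the comment.

```nginx
# Hypothetical caching sidecar in front of cloudwatch-exporter.
proxy_cache_path /var/cache/nginx keys_zone=metrics:1m;

server {
    listen 9107;                      # port Prometheus scrapes

    location /metrics {
        proxy_pass http://127.0.0.1:9106/metrics;   # the exporter itself
        proxy_cache metrics;
        proxy_cache_valid 200 60s;    # serve the cached scrape for 60s
        proxy_cache_use_stale updating;  # one refresh at a time; others get the stale copy
    }
}
```

Both Prometheus replicas scrape port 9107, so within each TTL window the exporter (and therefore CloudWatch) is hit once rather than twice.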

@matthiasr (Contributor)

matthiasr commented Jan 15, 2024 via email

@SuperQ (Member)

SuperQ commented Jan 15, 2024

@matthiasr I was thinking TTLs could be configured with more granularity, caching some metric data longer than others.
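One way the per-metric granularity could look, as a purely hypothetical sketch: a `cache_ttl_seconds` key per metric entry. This option does not exist in cloudwatch_exporter's configuration today; it is only an illustration of the idea.

```yaml
# Hypothetical sketch -- cache_ttl_seconds is NOT an existing
# cloudwatch_exporter option, just an illustration of per-metric TTLs.
metrics:
  - aws_namespace: AWS/ELB
    aws_metric_name: RequestCount
    cache_ttl_seconds: 60        # fast-moving metric: short cache
  - aws_namespace: AWS/S3
    aws_metric_name: BucketSizeBytes
    cache_ttl_seconds: 86400     # published daily: cache for a day
```

Metrics that CloudWatch only publishes daily could then be cached far longer than high-resolution ones, saving API calls without losing freshness where it matters.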
