Monitoring tags consumes more Redis RAM than expected #1314

Closed · benholmen opened this issue Sep 7, 2023 · 7 comments

@benholmen

Horizon Version

5.20.0

Laravel Version

10.22.0

PHP Version

8.2.9

Redis Driver

PhpRedis

Redis Version

Predis 2.2.1

Database Driver & Version

No response

Description

We ran into out-of-memory issues with Redis and, after investigating, found that several zsets were unusually large:

Biggest   zset found 'hubventory_horizon:media-import' has 16922051 members
...
Biggest   zset found 'hubventory_horizon:media-import' has 2379151023 bytes

Those correspond to a media-import tag that we are monitoring in Horizon. The oldest score/timestamp in that zset was 7 months old, which is much older than we expected.

Our Horizon config includes the following trim settings:

    'trim' => [
        'recent' => 60,
        'pending' => 60,
        'completed' => 60,
        'recent_failed' => 10080,
        'failed' => 10080,
        'monitored' => 1440,
    ],

I did some source diving and found RedisJobRepository@trimMonitoredJobs, which appears intended to clean this up using config('horizon.trim.monitored'), but it only operates on the monitored_jobs key.

Is it up to us to trim these monitored jobs or is there a Horizon method we can use to keep them from growing over time and consuming so much RAM?

Thanks so much!

Steps To Reproduce

  1. Monitor a tag via the UI.
  2. Complete many jobs with that tag (see the job sketch below).
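
To make step 2 concrete, here is a minimal sketch of a job carrying the monitored tag via Horizon's tags() method. The class name and job body are hypothetical; only the media-import tag comes from this issue.

    <?php

    namespace App\Jobs;

    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Foundation\Bus\Dispatchable;

    class ImportMedia implements ShouldQueue
    {
        use Dispatchable, Queueable;

        // Horizon indexes the job under these tags; once "media-import" is
        // monitored in the UI, each completed job is added to that tag's zset.
        public function tags(): array
        {
            return ['media-import'];
        }

        public function handle(): void
        {
            // Import a single media item...
        }
    }

Dispatching it repeatedly (ImportMedia::dispatch() in a loop) while the tag is monitored reproduces the unbounded growth described above.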
@benholmen
Author

Similar issues I found: #715, #333

Related PR: #484

@driesvints
Member

Hi @benholmen. Thanks for reporting this. Which exact key would we also target with this trim?

@driesvints
Member

@benholmen have you had time to look at my question above?

@benholmen
Author

Dries - I have. I'm working on a small PR to demonstrate it and will submit it shortly.

@benholmen
Author

@driesvints PR submitted!

@benholmen
Author

For anyone with a similar issue, @taylorotwell has clarified that monitoring is intended for short-term use, not permanent monitoring (see #1317).

If you do monitor a tag indefinitely, your Redis memory consumption will grow over time. You can manually trim those monitored tags with a Redis command like the following:

    zremrangebyscore [app name]_horizon:[monitored tag name] -inf +inf

This would completely empty the Redis key. You can also use a timestamp to limit the range:

    zremrangebyscore [app name]_horizon:[monitored tag name] -inf 1694062800
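
If you'd rather run the same trim from your Laravel app (for example from a scheduled closure), a minimal sketch using the Redis facade could look like this. It assumes Horizon's horizon Redis connection applies the [app name]_horizon: prefix for you and that the cutoff should mirror the trim.monitored window from the config above; adjust both for your setup.

    use Illuminate\Support\Facades\Redis;

    // Drop monitored-tag entries older than the horizon.trim.monitored
    // window (1440 minutes in the config shown earlier).
    $cutoff = now()->subMinutes(config('horizon.trim.monitored', 1440))->getTimestamp();

    // The 'horizon' connection is key-prefixed, so the bare tag name here
    // targets e.g. "hubventory_horizon:media-import".
    Redis::connection('horizon')->zremrangebyscore('media-import', '-inf', $cutoff);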

To identify large Redis keys that may need to be trimmed, you can use these redis-cli commands:

    redis-cli --bigkeys
    redis-cli --memkeys
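
You can also spot-check a monitored tag's size from the application side. A small sketch (member counts only, not bytes), again assuming the prefixed horizon connection:

    use Illuminate\Support\Facades\Redis;

    // Report how many members the monitored tag's zset currently holds.
    $members = Redis::connection('horizon')->zcard('media-import');

    logger()->info("media-import tag zset has {$members} members");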

@fangyuan0306 commented Jul 22, 2024

I am currently facing a memory overflow issue. I checked the large keys in Redis and found the following. How should I handle this?

| Key | Data Type | Encoding | Memory Usage (Bytes) | Number of Elements | Max Element Length | Average Element Length | Expiration Timestamp |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ai_orion_horizon:recent_jobs | set | skiplist | 577,221,526 | 4,204,694 | 36 | 36 | 0 |
| ai_orion_horizon:completed_jobs | set | skiplist | 577,185,982 | 4,204,534 | 36 | 36 | 0 |
| ai_orion_horizon:failed_jobs | set | skiplist | 21,478 | 159 | 36 | 36 | 0 |
| ai_orion_horizon:recent_failed_jobs | set | skiplist | 21,422 | 159 | 36 | 36 | 0 |

Horizon Version
5.16.1

Laravel Version
10.13.2
