Replies: 5 comments
-
Hi @cinderrrr, I haven't considered that ability because I'm not sure what it would mean, given that the maximum memory capacity is always fixed. What is the difference between having no upper bound on the local cache and having the ability to resize it?
-
Hiya. We have multiple individual rueidis clients, one to talk to each of our redis pods. We don't use redis cluster; we run multiple stand-alone redis pods and manage our own sharding. So when we spin up a new redis pod and it's discovered, we make a new rueidis client for that pod. This has the potential to run away at scale. Does that make sense? Resizing would allow us to dynamically reduce how large the combined caches grow across all of our clients.
-
I see. While changing the size limit of the current LRU cache at runtime is internally quite easy, I haven't settled on how to expose such a method to users externally.
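To illustrate the point above that runtime resizing of an LRU is internally easy: a shrink just lowers the limit and evicts from the cold end until the cache fits again. This is a minimal stdlib-only Go sketch (names and method set are illustrative, not rueidis internals):

```go
package main

import (
	"container/list"
	"fmt"
)

// entry pairs a key with its value inside the eviction list.
type entry struct {
	key, val string
}

// LRU is a minimal LRU cache whose capacity can be raised or lowered at
// runtime; shrinking evicts the least recently used entries immediately.
type LRU struct {
	cap   int
	ll    *list.List               // front = most recent, back = least recent
	items map[string]*list.Element // key -> node in ll
}

func NewLRU(capacity int) *LRU {
	return &LRU{cap: capacity, ll: list.New(), items: make(map[string]*list.Element)}
}

func (c *LRU) Get(key string) (string, bool) {
	if el, ok := c.items[key]; ok {
		c.ll.MoveToFront(el)
		return el.Value.(*entry).val, true
	}
	return "", false
}

func (c *LRU) Set(key, val string) {
	if el, ok := c.items[key]; ok {
		c.ll.MoveToFront(el)
		el.Value.(*entry).val = val
		return
	}
	c.items[key] = c.ll.PushFront(&entry{key, val})
	c.evict()
}

// Resize changes the capacity at runtime without flushing the whole cache.
func (c *LRU) Resize(capacity int) {
	c.cap = capacity
	c.evict()
}

// evict drops least recently used entries until the cache fits its capacity.
func (c *LRU) evict() {
	for c.ll.Len() > c.cap {
		oldest := c.ll.Back()
		c.ll.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
}

func main() {
	c := NewLRU(3)
	c.Set("a", "1")
	c.Set("b", "2")
	c.Set("c", "3")
	c.Resize(1) // shrink: evicts "a" and "b", keeps the most recent "c"
	_, okA := c.Get("a")
	_, okC := c.Get("c")
	fmt.Println(okA, okC) // false true
}
```

The hard part, as noted, is not the eviction loop but picking a public API shape for triggering it from outside the client.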
-
Allowing users to bring their own cache implementation may also be a good solution to this.
-
Being able to bring our own cache implementation is something we would be interested in. |
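A bring-your-own-cache design usually means the client accepts a small interface plus a factory at construction time, so callers control sizing (and resizing) themselves. A hypothetical sketch of what that could look like; this is not the actual rueidis API, and all names here are illustrative:

```go
package main

import (
	"fmt"
	"time"
)

// CacheStore is a hypothetical interface a client could accept so that
// callers plug in their own cache implementation.
type CacheStore interface {
	Get(key string) (val string, ok bool)
	Set(key, val string, ttl time.Duration)
	Delete(key string)
}

// mapStore is a trivial TTL-ignoring implementation for demonstration.
type mapStore struct{ m map[string]string }

func newMapStore() *mapStore { return &mapStore{m: make(map[string]string)} }

func (s *mapStore) Get(key string) (string, bool) { v, ok := s.m[key]; return v, ok }

func (s *mapStore) Set(key, val string, _ time.Duration) { s.m[key] = val }

func (s *mapStore) Delete(key string) { delete(s.m, key) }

// ClientOption mirrors the idea of passing a cache factory when the
// client is constructed, so every connection gets a caller-owned store.
type ClientOption struct {
	NewCacheStore func() CacheStore
}

func main() {
	opt := ClientOption{NewCacheStore: func() CacheStore { return newMapStore() }}
	store := opt.NewCacheStore()
	store.Set("k", "v", time.Minute)
	v, _ := store.Get("k")
	fmt.Println(v) // v
}
```

With a factory like this, an application that shards across many stand-alone pods could hand every client a store backed by one shared, centrally budgeted cache, which sidesteps the per-client resize question entirely.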
-
Hi there -- our team is interested in using the local caching feature, but we have the problem that we can potentially OOM our servers if too many new redis clients come online. We can't rely on a theoretical upper bound, because we sometimes run into very large scaling events, and we want to handle that growth as gracefully as possible.
Have you considered adding the ability to resize the local caches without having to flush them or close the connection and remake the client?