Version
v5.4.3
Platform
NodeJS
What happened?
I am not sure if this is a bug, but I couldn't find a better category 😄
We are running a 5-node Redis Sentinel setup where each node runs both Redis and the application using BullMQ, so the node that currently hosts the Redis master naturally has the fastest connection to Redis.
What we observe is that this master node ends up executing the majority of jobs across queues, with the other nodes usually picking up jobs only once this node has reached its worker concurrency limit.
I suppose this might be considered normal (I am not aware of the mechanism that implements job picking, but it would make sense for it to work on a first-come-first-served basis, with the node running the Redis master instance locally inherently arriving first), but I would like to know if there is some way to overcome this and get a more even job distribution between workers.
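For reference, a minimal sketch of how a worker with a per-node concurrency limit is set up in BullMQ over Sentinel; the queue name, Sentinel hosts and the concurrency value below are placeholders, not our actual configuration:

```ts
import { Worker, Job } from 'bullmq';

// Placeholder Sentinel connection; hosts, ports and master name are examples only.
const connection = {
  sentinels: [
    { host: 'sentinel-1', port: 26379 },
    { host: 'sentinel-2', port: 26379 },
    { host: 'sentinel-3', port: 26379 },
  ],
  name: 'mymaster',
};

// Each node runs a worker like this; `concurrency` is the per-node limit
// referred to above: a node keeps fetching jobs until it has that many active.
const worker = new Worker(
  'my-queue', // placeholder queue name
  async (job: Job) => {
    // ...actual job processing...
    return job.data;
  },
  { connection, concurrency: 10 },
);
```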
How to reproduce.
No response
Relevant log output
No response
Code of Conduct
I agree to follow this project's Code of Conduct
I'm also facing this issue on my project.
I have about 32 servers connected to a Redis cluster. Usually some servers sit at 60-70% CPU usage while others are at 5-15%.
Did you find any solution to this problem?
As mentioned, I suppose it is caused by the latency difference between the workers and Redis, but it would be nice to have confirmation (and possible workarounds) from a maintainer!
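Something that could at least make the skew measurable (and, by lowering the per-node concurrency, soften it) is counting completed jobs per host. This is only a sketch with placeholder names and values, not a confirmed recommendation:

```ts
import { hostname } from 'node:os';
import { Worker, Job } from 'bullmq';

const worker = new Worker(
  'my-queue', // placeholder queue name
  async (job: Job) => {
    // ...actual job processing...
  },
  {
    connection: { host: 'redis-host', port: 6379 }, // placeholder connection
    concurrency: 4, // lowering this makes the fast node saturate sooner, so others pick up jobs
  },
);

// Log how many jobs this host has completed; comparing the logs across nodes
// shows how uneven the distribution actually is.
let completed = 0;
worker.on('completed', () => {
  completed += 1;
  console.log(`${hostname()} has completed ${completed} jobs`);
});
```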