Another option: use the BullMQ Proxy, so that your cloud runner is called by the processor itself. Not sure it will work in your specific case, but it seems doable: https://github.com/taskforcesh/bullmq-proxy
Hi! I've been a BullMQ user for around two years now, and I've been loving it!
I have a use case for which I'd like to get some advice.
Say that I have a potentially long-running (from 1 min to a few hours) job that I need to execute as a (Google) Cloud Run Job, and that I can have many of these jobs running at the same time. I have thought of two possible ways to do this, and I'd like to know whether I've got my bases covered :)
Scenario 1:

Job `J` gets placed on queue `Q`. The worker for `Q` picks up `J`, and its processor updates `J`'s data with some essential stuff that is needed for the cloud execution, then spawns an instance of the Cloud Run Job, providing it with `J`'s id. The cloud runner connects to `Q`, requests job `J`, and starts its long-running operation.

Scenario 2:
Job `J` gets placed on queue `Q`. The worker for `Q` picks up `J`, and its processor adds a job on queue `X` (where `X` is a unique name) and spawns the cloud runner, providing it with `X`. The runner then creates a worker for `X` and processes the (only) job available, which runs until completion.

In both scenarios, I update the progress with some custom objects that help me track what's going on in the job.
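For context, here is a minimal sketch of how I imagine the Scenario 1 handoff. The queue name `main-queue`, the Redis connection details, and the `spawnCloudRunJob` helper are all assumptions for illustration; the BullMQ calls (`Worker`, `Job.fromId`, `updateData`, `updateProgress`) are the real API:

```typescript
import { Queue, Worker, Job } from 'bullmq';

const connection = { host: 'localhost', port: 6379 }; // assumed Redis config

// Scenario 1, processor side: enrich J's data, then hand J's id to the runner.
const qWorker = new Worker(
  'main-queue',
  async (job) => {
    // Attach whatever the cloud execution needs to the job's data.
    await job.updateData({ ...job.data, runConfig: { region: 'europe-west1' } });
    // Hypothetical helper wrapping the Cloud Run Admin API; passes J's id
    // to the runner via an environment variable override.
    await spawnCloudRunJob({ JOB_ID: String(job.id) });
  },
  { connection },
);

// Scenario 1, runner side: fetch the job by id and do the long-running work.
async function runnerMain(jobId: string): Promise<void> {
  const queue = new Queue('main-queue', { connection });
  const job = await Job.fromId(queue, jobId);
  if (!job) throw new Error(`job ${jobId} not found`);
  await job.updateProgress({ phase: 'started' });
  // ...long-running operation, periodically calling job.updateProgress()...
}

// Placeholder only; the actual Cloud Run Jobs invocation is not shown here.
async function spawnCloudRunJob(env: Record<string, string>): Promise<void> {}
```

Scenario 2 would look similar, except the processor would create a dedicated `Queue(X)` with a unique name and the runner would start a `Worker(X)` instead of fetching the job by id.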
There are a few questions I have, because I'm not sure whether Scenario 2 is overkill or actually the better way to do this:

- Can I read `job.progress` from within the processor? Or would I have to start a `QueueEvents` instance to intercept those updates? (I'm assuming the latter.)
- Can I call `job.updateProgress()` and `job.updateData()` from within the cloud runner, or is there a lock placed which would prevent me from doing so? (I'm assuming the lock mentioned in the docs is just to prevent two workers from working on the same job, but in Scenario 1 the runner just picks up the job data from the queue.)
- To receive those updates on my side, I'd need a `QueueEvents` instance again, right?
- Is it better to keep `Q`'s worker on hold until the runner finishes (and execute the cleanup there), or to create another queue for post-job cleanup? (Think of this as a `finally` in a `try...catch...finally`.)

By reading #2489 it appears that having many queues is just painful in terms of debugging (though we can solve that with observability tools), but I'd like to be sure before proceeding.
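Regarding the progress updates, this is roughly how I'd expect the `QueueEvents` side to look. The queue name and connection are assumptions; the `progress` and `completed` events and their payloads are from BullMQ's API:

```typescript
import { QueueEvents } from 'bullmq';

const queueEvents = new QueueEvents('main-queue', {
  connection: { host: 'localhost', port: 6379 }, // assumed Redis config
});

// 'progress' fires whenever something holding the job calls
// job.updateProgress(); `data` is whatever value was passed in.
queueEvents.on('progress', ({ jobId, data }) => {
  console.log(`job ${jobId} progress:`, data);
});

queueEvents.on('completed', ({ jobId }) => {
  console.log(`job ${jobId} completed`);
});
```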
Hope I've been clear in my explanation :D
Thanks in advance!