In a separate project, I experienced deadlocks when using multiprocessing.Queue.
I found a solution by switching to Manager.Queue, as recommended in the Python documentation:
Warning
As mentioned above, if a child process has put items on a queue (and it has not used JoinableQueue.cancel_join_thread), then that process will not terminate until all buffered items have been flushed to the pipe.
This means that if you try joining that process you may get a deadlock unless you are sure that all items which have been put on the queue have been consumed. Similarly, if the child process is non-daemonic then the parent process may hang on exit when it tries to join all its non-daemonic children.
Note that a queue created using a manager does not have this issue. See Programming guidelines.
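For illustration only (this example is not from dadi-cli, and the names are made up), the pattern the warning describes looks roughly like this: a child puts more data on a `multiprocessing.Queue` than the underlying pipe can buffer, so it cannot exit until the parent drains the queue, and joining it before consuming the item would hang:

```python
import multiprocessing

def producer(q):
    # Put more data than the OS pipe buffer can hold (~64 KiB on Linux),
    # so the child's feeder thread blocks until the parent reads it.
    q.put("x" * (1 << 20))

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=producer, args=(q,))
    p.start()

    # Joining before consuming the queued item is the deadlock pattern:
    p.join(timeout=2)
    print("still alive after join attempt:", p.is_alive())  # True

    q.get()   # drain the queue, letting the child's feeder thread finish
    p.join()  # now the child can exit and the join succeeds
```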
Although I have not encountered deadlocks in dadi-cli, I suggest switching from multiprocessing.Queue to Manager.Queue to avoid potential deadlock issues.
The relevant code is at dadi-cli/dadi_cli/__main__.py, line 344 (commit 708667f).
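As a rough sketch of what the change could look like (the worker and queue names below are placeholders, not the actual dadi-cli code), queues created through a `Manager` live in the manager process rather than in a per-process pipe buffer, so workers can be joined before their results are consumed:

```python
import multiprocessing

def worker(task_q, result_q):
    # Placeholder worker: consume tasks until a None sentinel arrives.
    for item in iter(task_q.get, None):
        result_q.put(item * 2)

if __name__ == "__main__":
    # Before: task_q = multiprocessing.Queue(); result_q = multiprocessing.Queue()
    # After: manager-backed queues, which do not have the flush-on-exit issue.
    manager = multiprocessing.Manager()
    task_q = manager.Queue()
    result_q = manager.Queue()

    procs = [
        multiprocessing.Process(target=worker, args=(task_q, result_q))
        for _ in range(4)
    ]
    for p in procs:
        p.start()

    for i in range(10):
        task_q.put(i)
    for _ in procs:
        task_q.put(None)  # one sentinel per worker

    for p in procs:
        p.join()  # safe even though result_q still holds unread items

    print(sorted(result_q.get() for _ in range(10)))
```

The trade-off is that Manager queues route every put and get through a separate manager process, so they are somewhat slower than `multiprocessing.Queue`; for coarse-grained work items that overhead is usually negligible.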