Possible optimisation for BaseInputTransport
ThreadPoolExecutor?
#910
Labels: question (further information is requested)
Description
While having a conversation with Claude to understand this project better, we hit this subject based on BaseInputTransport.py. Conversation:
User: Instruction: Given that this class has a thread pool executor, explain how the task from the event loop runs in this thread pool executor
Selected text:
AI: This code demonstrates how to run CPU-intensive work in a thread pool without blocking the event loop. Here's how it works:
The process:
This pattern is used because:
The await ensures we get the result before continuing but doesn't block the event loop while waiting.
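The pattern described above can be sketched as follows. This is a minimal standalone example of offloading blocking work via loop.run_in_executor, not the project's actual code:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

def blocking_work(n: int) -> int:
    # Simulate CPU-bound work that would otherwise stall the event loop.
    total = 0
    for i in range(n):
        total += i
    return total

async def main() -> int:
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=1) as executor:
        # Offload the blocking call; await yields control back to the
        # event loop until the worker thread finishes, so other tasks
        # keep running in the meantime.
        result = await loop.run_in_executor(executor, blocking_work, 1000)
    return result

print(asyncio.run(main()))  # → 499500
```

The key point is that run_in_executor returns a future, and awaiting it suspends only the current coroutine, not the whole loop.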
User: If I create 10 ThreadPoolExecutors with max threads 5 and my cpu has 8 cores, how will this be handled?
AI: Creating 10 ThreadPoolExecutors with 5 max_workers each means you're potentially creating a pool of 50 threads, but this doesn't map directly to CPU cores. Here's what happens:
Better Approach:
Rule of thumb:
Creating multiple ThreadPoolExecutors is usually unnecessary and potentially harmful to performance.
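The single-shared-pool alternative can be sketched like this. The Worker class and pool sizing here are illustrative assumptions, not code from this project:

```python
import os
from concurrent.futures import ThreadPoolExecutor

# One module-level pool shared by all components instead of one pool
# per instance; sized relative to the CPU count rather than hard-coded.
# (This formula matches CPython's own default for ThreadPoolExecutor.)
SHARED_EXECUTOR = ThreadPoolExecutor(max_workers=min(32, (os.cpu_count() or 1) + 4))

class Worker:
    """Hypothetical component that needs to offload blocking work."""

    def __init__(self) -> None:
        # Every Worker reuses the same pool, so the total thread count
        # stays bounded no matter how many workers are created.
        self._executor = SHARED_EXECUTOR

workers = [Worker() for _ in range(10)]
assert all(w._executor is SHARED_EXECUTOR for w in workers)
```

With this design, creating ten workers still produces only one pool of threads, avoiding the oversubscription described above.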
User: I see. So in the case of BaseInputTransport, the fact that it creates a ThreadPoolExecutor for each instantiation is bad?
AI: Yes, the current implementation in BaseInputTransport could be improved. Looking at the code:
Problems:
Better Design:
Or even better:
This would be a good issue to raise on the project - it's an easy optimization with real benefits.
// End conversation
Issue description
Based on this conversation, the BaseInputTransport class creates a new ThreadPoolExecutor for each instantiation. I realize that each pipeline has just one input transport; however, in a large application that makes and receives many calls, you can have multiple pipelines running in parallel, and then you potentially have multiple thread pools doing the same kind of work. Code:

Please let me know if this issue is relevant. I can open a PR with a fix.