Load balancing or async support for REST API bottleneck due to synchronous calls #2447

Open
@AndreaRacoon

Description

Hi Cog team,

I'm currently exploring the use of Cog microservices to power a service-based website that would serve machine learning APIs to thousands of customers. However, I'm running into scalability issues because Cog's REST API calls are blocking: each request is handled synchronously, which creates a bottleneck.

Because each inference call ties up a worker until it completes, it becomes difficult to serve multiple concurrent API requests efficiently. This can lead to timeout errors, especially when multiple users are trying to access different endpoints or models at once.
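To illustrate the kind of client-side workaround I'm resorting to today, here is a minimal sketch. The `fake_predict` function is a stand-in for a blocking POST to a Cog container's `/predictions` endpoint (I've left the actual HTTP call out to keep the example self-contained):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_predict(payload):
    """Stand-in for a blocking POST to a Cog /predictions endpoint."""
    time.sleep(0.2)  # simulate inference latency
    return {"input": payload, "output": "ok"}

def fan_out(payloads, max_workers=4):
    # Each blocking call ties up one thread for its full duration;
    # with N workers, N requests run concurrently instead of queueing.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fake_predict, payloads))

start = time.monotonic()
results = fan_out([f"req-{i}" for i in range(4)])
elapsed = time.monotonic() - start
# With 4 workers, four ~0.2 s calls complete in roughly 0.2 s
# instead of ~0.8 s serially.
```

This only parallelizes on the client side, though; each Cog container still processes one prediction at a time, which is why I'm asking about server-side options below.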

Use case:
I'm building a system where several Cog-powered APIs (each wrapping different models) need to be called in parallel by many users. Ideally, I'd like to horizontally scale these microservices and balance requests internally without spinning up redundant full containers for each concurrent call.
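As a stopgap, I've been experimenting with running several identical Cog containers behind a reverse proxy. A rough nginx sketch of what I mean (the ports and upstream name are arbitrary, assuming three copies of the same model container published on 5001-5003):

```nginx
upstream cog_model_a {
    # Three identical Cog containers serving one model. least_conn routes
    # each new request to the container with the fewest in-flight
    # predictions, which matters when every call blocks a worker.
    least_conn;
    server 127.0.0.1:5001;
    server 127.0.0.1:5002;
    server 127.0.0.1:5003;
}

server {
    listen 8080;
    location /predictions {
        proxy_pass http://cog_model_a;
        proxy_read_timeout 300s;  # allow long-running inference calls
    }
}
```

This works, but it means paying the full memory cost of a loaded model per concurrent request, which is what I'd like to avoid.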

My questions:
Is there a recommended way to load balance REST API calls internally within Cog to prevent timeouts and scale more efficiently?

Would there be value in supporting async inference endpoints or a built-in queue that handles concurrent requests with appropriate workers?

Are there any best practices for deploying Cog in a production-grade setup with autoscaling, load balancing, or using async workers behind a gateway?

I'd appreciate any suggestions, workarounds, or roadmap visibility regarding better concurrency support.

Thanks for the great work you're doing with Cog!
