Replies: 1 comment 2 replies
-
So technically, if you deploy trame in Kubernetes, the setup should be similar to JupyterHub, where each user gets a dedicated pod. Using the default Docker image does indeed lead to one process per user within that same pod, and if your trame app is based on ParaView, ~300 MB of memory per process is about right. Depending on your app design, you could colocate several apps in the same process, but at that point you are slicing one CPU across many users for all the networking, data processing, and rendering... So no easy answer unfortunately, but without knowing more about your app and your infrastructure it won't be easy to help. It might also get into details that go beyond general community support. If you need professional support, we can help.
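To make the sizing concrete, here is a back-of-the-envelope sketch. All figures are assumptions for illustration: a hypothetical 16 GiB node, ~2 GiB reserved for OS/kubelet overhead, and the ~300 MB per ParaView-backed session mentioned above.

```python
# Rough capacity estimate for process-per-user deployments.
# All figures are illustrative assumptions, not measurements.
NODE_MEMORY_MIB = 16 * 1024      # hypothetical node size
SYSTEM_OVERHEAD_MIB = 2 * 1024   # hypothetical OS / kubelet reserve
PER_SESSION_MIB = 300            # assumed footprint of one ParaView-based trame process

def sessions_per_node(node_mib: int = NODE_MEMORY_MIB,
                      overhead_mib: int = SYSTEM_OVERHEAD_MIB,
                      session_mib: int = PER_SESSION_MIB) -> int:
    """How many concurrent user sessions fit on one node."""
    return (node_mib - overhead_mib) // session_mib

print(sessions_per_node())  # → 47 with the assumed numbers
```

With those assumed numbers, one node holds roughly 47 concurrent sessions; the same arithmetic applied to your measured per-session footprint gives a first estimate for cluster sizing.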
-
Hi,
I've managed to deploy a Trame application on a Kubernetes cluster by simply converting the single Dockerfile to Kubernetes resources. However, this is not scalable - I've noticed that Trame runs a separate Python process per websocket link, which is quite heavy on memory resources (about 500 MiB on fresh start for our application). Regardless of the memory limits I set, clients can easily bring the deployment down by simply refreshing the page fast enough (websocket link servers seem to have quite long life even when unused). When the pod gets out of memory, Kubernetes restarts it and this affects all clients.
Do you have any guidelines for production deployments on Kubernetes (or another system)? Or, in general, is it possible to aggregate websocket links onto a handful of worker processes so that memory consumption cannot grow beyond all limits?
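On the aggregation question, a fixed-size worker pool is the standard way to bound total memory at roughly N × per-process footprint regardless of how many clients connect. The catch for trame is that each session holds server-side state (e.g. a ParaView pipeline), so a plain pool only fits if that state can be externalized or sessions are pinned to workers. A hypothetical sketch of the stateless variant, using only the standard library:

```python
import os
from concurrent.futures import ProcessPoolExecutor

def render_request(payload: int) -> tuple[int, int]:
    # Stand-in for per-session work (data processing / rendering);
    # returns the payload plus the worker's PID to show process reuse.
    return payload, os.getpid()

def handle_all(requests, max_workers: int = 2):
    # At most `max_workers` long-lived processes serve every request,
    # so memory is bounded by max_workers * per-process footprint.
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(render_request, requests))

if __name__ == "__main__":
    results = handle_all(list(range(8)))
    print(len({pid for _, pid in results}), "worker process(es) served 8 requests")
```

Here 8 "sessions" are served by at most 2 processes instead of 8; whether that maps onto a trame deployment depends entirely on how much state each session keeps in its process.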