Loading 100s of workers? #351
Hmm. By default, V8 puts all isolates in the process into a single "pointer cage" of 4GB, which makes it hard to load a huge number of isolates in one process. This might be what you're hitting here.

V8 can be configured (at build time) to not use pointer cages. However, this requires turning off pointer compression as well, so overall memory usage will be higher even for a single isolate. But maybe we should flip that flag for workerd?

In our edge runtime we actually use an unsupported middle-ground configuration in which we ask V8 to create a separate pointer cage for each isolate. For workerd I wanted to avoid using a V8 configuration that the V8 team isn't committed to supporting. We will probably end up working with the V8 team to find a compromise that we can upstream later on, to get ourselves out of unsupported-configuration hell, and then hopefully our solution there can apply to workerd as well. But in the meantime, maybe we should simply disable pointer compression for workerd?
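For reference, the build-time switches being discussed appear to correspond to V8's GN build arguments sketched below. The flag names are V8's; whether and how workerd's own build exposes them is an assumption.

```gn
# Default on 64-bit builds: pointer compression with one shared 4GB cage
# per process, which caps total isolate heap across the process.
v8_enable_pointer_compression = true
v8_enable_pointer_compression_shared_cage = true

# Unsupported middle ground (the edge-runtime configuration described
# above): compression stays on, but each isolate gets its own cage.
# v8_enable_pointer_compression_shared_cage = false

# Proposed for workerd: no pointer compression, hence no cage and no 4GB
# ceiling, at the cost of higher memory usage even for a single isolate.
# v8_enable_pointer_compression = false
```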
I've tried to load hundreds of relatively large workers, and I get an OOM exception when starting workerd.
The generated capnp binary config is around 2.5GB, so I assume this OOM is coming from V8 itself.
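For context, a config for this many workers looks roughly like the sketch below (a minimal sketch following the shape of workerd's published sample configs; the service names and paths are made up). Every script is embedded directly into the compiled binary config, which is why it grows to gigabytes:

```capnp
using Workerd = import "/workerd/workerd.capnp";

# Hypothetical layout: one service per worker, each embedding its script.
# With hundreds of large scripts, every script body is copied into the
# compiled binary config.
const config :Workerd.Config = (
  services = [
    (name = "worker-000",
     worker = (serviceWorkerScript = embed "workers/000.js",
               compatibilityDate = "2023-02-28")),
    (name = "worker-001",
     worker = (serviceWorkerScript = embed "workers/001.js",
               compatibilityDate = "2023-02-28")),
    # ...hundreds more entries...
  ],
  sockets = [
    (name = "http", address = "*:8080", http = (), service = "worker-000"),
  ],
);
```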
Could workerd load this whole config into memory and then manage creating and destroying V8 isolates for me? This seems like a decent solution to this kind of scaling problem without adding multithreading to workerd itself.
Without this built-in support, I'm going to need to spread my workers across some number of workerd processes, or use some heuristic to restart workerd with only the active workers.
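A sketch of the first workaround, assuming the workers have already been split into per-shard configs (the shard-*.capnp names are hypothetical):

```sh
# Run one workerd process per shard of the worker set, so no single
# process has to hold every isolate.
for shard in shard-*.capnp; do
  workerd serve "$shard" &
done
wait
```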