---
pcx_content_type: reference
title: Architecture
sidebar:
  order: 9
---

### How and where containers run

After you deploy a Worker that uses a Container, your image is uploaded to
[Cloudflare's Registry](/containers/image-management) and distributed globally to Cloudflare's Network.
Cloudflare will pre-schedule instances and pre-fetch images across the globe to ensure quick start
times when scaling up the number of concurrent container instances. This allows you to call
`env.YOUR_CONTAINER.get(id)` and get a new instance quickly without worrying
about the underlying scaling.
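
As a rough illustration of this pattern, the sketch below forwards an incoming request to a container instance through a namespace binding. The binding name `YOUR_CONTAINER` and the `"session-123"` key are assumptions made for this example.

```ts
// Minimal sketch: route a request to a container instance via a namespace
// binding. `YOUR_CONTAINER` and the "session-123" key are example names.
interface Env {
  YOUR_CONTAINER: DurableObjectNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // A stable name always maps to the same instance, wherever it is placed.
    const id = env.YOUR_CONTAINER.idFromName("session-123");
    const instance = env.YOUR_CONTAINER.get(id);

    // Forward the request; Cloudflare handles placement and scaling.
    return instance.fetch(request);
  },
};
```

Because the name is stable, later requests using it reach the same running instance, which matches the routing behavior described next.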

When a request is made to start a new container instance, the nearest location
with a pre-fetched image is selected. Subsequent requests to the same instance,
regardless of where they originate, will be routed to this location as long as
the instance stays alive.

Starting additional container instances will use other locations with pre-fetched images,
and Cloudflare will automatically begin prepping additional machines behind the scenes
for further scaling and quick cold starts. Because there are a finite number of pre-warmed
locations, some container instances may be started in locations that are farther away from
the end-user. This is done to ensure that the container instance starts quickly. You are
only charged for actively running instances and not for any unused pre-warmed images.
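
To make the scaling model concrete, here is a sketch that spreads traffic across several instances by giving each one a distinct name. The shard count, naming scheme, and `YOUR_CONTAINER` binding are assumptions for illustration, not a prescribed pattern.

```ts
// Sketch: fan traffic out across several container instances by name.
// The shard count and naming scheme are arbitrary choices for this example.
async function routeToShard(
  env: { YOUR_CONTAINER: DurableObjectNamespace },
  userId: string,
  request: Request,
): Promise<Response> {
  const SHARDS = 5; // each distinct name maps to its own container instance
  const shard = Math.abs(hash(userId)) % SHARDS;

  const id = env.YOUR_CONTAINER.idFromName(`shard-${shard}`);
  return env.YOUR_CONTAINER.get(id).fetch(request);
}

// Small stable string hash; any deterministic hash works here.
function hash(s: string): number {
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) | 0;
  }
  return h;
}
```

Each new name may land in a different pre-warmed location, and only the instances that are actually running are billed.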

Each container instance runs inside its own VM, which provides strong
isolation from other workloads running on Cloudflare's network. Containers
should be built for the `linux/amd64` architecture, and should stay within
[size limits](/containers/platform-details/#limits). Logging, metrics collection, and
networking are automatically set up on each container.

### Life of a Container Request

When a request is made to any Worker, including one with an associated Container, it is generally handled
by a datacenter in a location with the best latency between itself and the requesting user.
A different datacenter may be selected to optimize overall latency if [Smart Placement](/workers/configuration/smart-placement/)
is enabled, or if the nearest location is under heavy load.

When a request is made to a Container instance, it is sent through a Durable Object, which
can be defined using either a plain `DurableObject` or the [`Container` class](/containers/container-package), which
extends Durable Objects with Container-specific APIs and helpers. We recommend using `Container`; see
the [`Container` class documentation](/containers/container-package) for more details.
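
A minimal sketch of the recommended `Container` approach is shown below; it assumes the `defaultPort` and `sleepAfter` configuration fields described in the linked `Container` class documentation.

```ts
// Sketch of a container-backed Durable Object using the Container class.
// Assumes the defaultPort and sleepAfter fields from @cloudflare/containers.
import { Container } from "@cloudflare/containers";

export class MyContainer extends Container {
  // Port the container image listens on; requests are proxied to it.
  defaultPort = 8080;
  // How long an idle instance stays up before it is stopped.
  sleepAfter = "5m";
}
```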

Each Durable Object is a globally routable isolate that can execute code and store state. This allows
developers to easily address and route to specific container instances (no matter where they are placed),
define and run hooks on container status changes, execute recurring checks on the instance, and store persistent
state associated with each instance.
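
For example, the hooks and per-instance state mentioned above could look roughly like the following. The `onStart`/`onStop`/`onError` hook names come from the `Container` class documentation linked earlier, while `this.ctx.storage` is the standard Durable Object storage API.

```ts
// Sketch: lifecycle hooks plus per-instance state, building on the class above.
// Hook names are those documented for the Container class; adjust as needed.
import { Container } from "@cloudflare/containers";

export class MyContainer extends Container {
  defaultPort = 8080;

  // Runs when the underlying container instance starts.
  async onStart(): Promise<void> {
    await this.ctx.storage.put("lastStartedAt", Date.now());
  }

  // Runs when the instance shuts down (for example, after going idle).
  async onStop(): Promise<void> {
    await this.ctx.storage.put("lastStoppedAt", Date.now());
  }

  // Runs if the container exits with an error.
  onError(error: unknown): void {
    console.error("container error", error);
  }
}
```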

As mentioned above, when a container instance starts, it is launched in the nearest pre-warmed location. This means that
code in a container is usually executed in a different location than the one handling the Worker's request.

:::note
Currently, Durable Objects may be co-located with their associated Container instance, but often are not.

Cloudflare is working on expanding the number of locations in which a Durable Object can run,
which will allow container instances to always run in the same location as their Durable Object.
:::

Because all Container requests are passed through a Worker, end-users cannot make TCP or
UDP requests to a Container instance. Workers themselves can make TCP and UDP requests to
the Container using the [`connect()` API](/workers/runtime-apis/tcp-sockets/#connect). If you have a use
case that requires inbound TCP or UDP from an end-user, please [let us know](https://forms.gle/AGSq54VvUje6kmKu8).
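
As a rough sketch of the outbound path, the example below opens a TCP socket with `connect()` from `cloudflare:sockets`. The hostname and port are placeholders only, since how a Worker addresses a specific container's TCP port depends on your setup.

```ts
// Sketch: outbound TCP from a Worker using connect(). The address below is a
// placeholder, not a real container endpoint.
import { connect } from "cloudflare:sockets";

export default {
  async fetch(_request: Request): Promise<Response> {
    const socket = connect({ hostname: "container.example.internal", port: 5432 });

    // Write a request over the raw TCP stream.
    const writer = socket.writable.getWriter();
    await writer.write(new TextEncoder().encode("PING\n"));
    writer.releaseLock();

    // Read a single chunk of the reply, then close the socket.
    const reader = socket.readable.getReader();
    const { value } = await reader.read();
    await socket.close();

    return new Response(value ? new TextDecoder().decode(value) : "no reply");
  },
};
```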