Hi @thegagne, yes, we need some sort of deployment story. I agree that you should be able to `wrangler publish` to your own self-hosted cluster. That said, I'm not sure that recreating the whole Cloudflare service is the right approach. I think we want something that integrates well into the self-managed stacks people are already using today, so that workerd can easily live alongside other server technologies. For example, I think we should support deploying to a Kubernetes cluster. Basically, what I'm imagining is […] That said, I personally don't have much experience with k8s, and I could be way off-base on how this should work. What do you think?
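For the Kubernetes direction, a deployment might be as simple as running workerd replicas behind a Service. This is purely a sketch: the image name, config path, port, and replica count are assumptions, not an official recipe (only the `workerd serve <config>` invocation itself is real).

```yaml
# Hypothetical: run workerd as a plain Kubernetes Deployment.
# Image name and config path are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: workerd
spec:
  replicas: 3
  selector:
    matchLabels:
      app: workerd
  template:
    metadata:
      labels:
        app: workerd
    spec:
      containers:
        - name: workerd
          image: example.com/workerd:latest   # hypothetical image
          # workerd serves the workers defined in a capnp config file
          args: ["serve", "/etc/workerd/config.capnp"]
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: workerd
spec:
  selector:
    app: workerd
  ports:
    - port: 80
      targetPort: 8080
```

Scripts and config would ship as part of the image (or a ConfigMap), and updates would be ordinary rolling deployments, which fits the "live alongside other server technologies" goal.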
It would be fantastic if there was a way to deploy scripts to some sort of self-run orchestrator that was capable of deploying and running multiple scripts, updating existing scripts, and configuring a lightweight routing layer, and targeting these deployments via `wrangler`.

**Dev XP:**
In the `wrangler.toml`, allow definition of `deployment_target = "http://workers-api.domain.com"`. When this is defined, `wrangler publish` contacts this endpoint instead of Cloudflare.

**API:**
This API would have an identical surface to the standard Workers APIs (or maybe a subset). Under the hood, operation would obviously be a little different.
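Concretely, the Dev XP idea above might look like this. Note that `deployment_target` is the key proposed in this thread, not an existing wrangler option:

```toml
# Hypothetical wrangler.toml — `deployment_target` does not exist today;
# it is the new key this proposal adds.
name = "internal-tool"
main = "src/index.js"

# When set, `wrangler publish` would upload the script to this
# self-hosted API instead of the Cloudflare API.
deployment_target = "http://workers-api.domain.com"
```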
**Routing:**
Configurable from the API, this would set up a lightweight routing engine capable of creating routes with deployed scripts as backends.
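The routing layer's core job is small: map an incoming URL to a deployed script. A minimal sketch of that lookup, assuming Cloudflare-style route patterns (hostname + path, optionally ending in `*`) and preferring the most specific match; the `RouteTable` shape and `resolveScript` name are inventions for illustration:

```typescript
// Hypothetical routing table: route patterns mapped to deployed script names.
// Patterns may end in "*" to match any suffix, e.g. "example.com/api/*".
type RouteTable = { pattern: string; script: string }[];

// Pick the script for a request URL. Among matching patterns, prefer the
// longest (most specific) one, mirroring how Workers routes behave.
function resolveScript(routes: RouteTable, url: string): string | null {
  const u = new URL(url);
  const target = u.hostname + u.pathname;
  let best: { script: string; len: number } | null = null;
  for (const { pattern, script } of routes) {
    const matches = pattern.endsWith("*")
      ? target.startsWith(pattern.slice(0, -1))
      : target === pattern;
    if (matches && (best === null || pattern.length > best.len)) {
      best = { script, len: pattern.length };
    }
  }
  return best ? best.script : null;
}
```

The orchestrator would keep this table in sync with the API's route configuration and proxy each request to the winning script's workerd backend.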
**Script deployment:**
This gets tricky, and I'm not going to propose a specific method for actually running the various scripts; it's not my area of expertise. But it should be something with a minimal footprint that could scale horizontally if necessary.
The end result should be a fully self-run Workers environment, capable of running multiple scripts, exposing routes, etc. Bonus if it eventually supports some of the other Workers features such as shared KV storage, crons, Cache, and so on.
This raises a real question: how much of Cloudflare's full toolset could be run locally, and why would they give away this capability?
I think there are real benefits to being able to run this stuff in your own private network. Building things like internal tools without having to switch to a different toolset would be amazing. Running this in your own private network does not remove the need for Cloudflare's other offerings, especially the fantastic network and DDoS protection. Rather, it complements them and provides some peace of mind: if CF had a major outage for a couple of days, you could potentially have a DR plan and limp along on something you run yourself.