add vpc support for capl clusters #159
Conversation
What happens to NodePort services in this model? Are they still bound on eth0?

Requests for services exposed as NodePort can come in on any IP address (public, private, or VPC address), so they are bound to all interfaces on the host. However, we are going to configure cilium-node to have …
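Since the rest of that comment is cut off here, the following is only a guess at the shape of such a restriction: a minimal sketch assuming the Cilium Helm chart's standard `devices` value, which pins Cilium's datapath (including NodePort handling) to specific interfaces. Nothing in this thread confirms this is the value the author meant.

```sh
# Hypothetical sketch: restrict Cilium's datapath (including NodePort handling)
# to eth1, the VPC interface. `devices` is a standard Cilium Helm value; its use
# here is an assumption, not something confirmed in this PR.
helm upgrade cilium cilium/cilium -n kube-system \
  --reuse-values \
  --set devices=eth1
```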
Force-pushed from a359922 to c8c90be
# ipMasqAgent:
#   enabled: true
# bpf:
#   masquerade: true
Not using bpf by default, as I want to get VPC working first. Once this PR is merged, we can have a minor follow-up PR to switch from iptables-based masquerading to bpf-based masquerading.
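For reference, a minimal sketch of what that follow-up switch might look like with the Cilium Helm chart. These are standard Cilium Helm values, but their use here is an assumption about the follow-up, not part of this PR:

```sh
# Hypothetical follow-up: switch from iptables-based to eBPF-based masquerading.
# bpf.masquerade and ipMasqAgent.enabled are standard Cilium Helm values;
# ipMasqAgent requires BPF masquerading to be enabled.
helm upgrade cilium cilium/cilium -n kube-system \
  --reuse-values \
  --set bpf.masquerade=true \
  --set ipMasqAgent.enabled=true
```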
Force-pushed from 919f64d to 686044d
Codecov Report
Attention: Patch coverage is …
Additional details and impacted files:

@@            Coverage Diff             @@
##             main     #159      +/-   ##
==========================================
- Coverage   54.42%   54.21%   -0.21%
==========================================
  Files          27       27
  Lines        1560     1577      +17
==========================================
+ Hits          849      855       +6
- Misses        663      671       +8
- Partials       48       51       +3

☔ View full report in Codecov by Sentry.
Force-pushed from 03c6152 to ae1bf19
@rahulait given our default will now leverage VPC + direct routing, can we create a new flavor that includes VPCless deployments, so we still have documented support for DCs without VPC support yet?

Sure, though that might need a bit more change. Based on our discussions, my understanding was that everything would live within VPCs and we wouldn't support VPCless. We need the metadata service as well, and it's not available in all datacenters. Let me see how much change it would be and whether we can also add support for VPCless deployments.

CCM e2e tests were failing when run against VPC. We need PR linode/linode-cloud-controller-manager#182 to be merged as well. Without this fix, cluster installs don't break, but exposing services of type LoadBalancer does.
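To make that symptom concrete, a hypothetical smoke test (the deployment name and image are placeholders, not taken from this PR):

```sh
# Hypothetical smoke test: expose a deployment via a LoadBalancer service and
# watch for an external IP, which the Linode CCM provisions via a NodeBalancer.
kubectl create deployment echo --image=nginx
kubectl expose deployment echo --port=80 --type=LoadBalancer
kubectl get svc echo -w   # EXTERNAL-IP stays <pending> if the CCM fix is missing
```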
Force-pushed from 80f49b1 to ff1b7c6
Linode CCM tests pass when running within VPC with route-controller enabled.

I was able to test that this successfully creates a cluster, and all pods come up within the VPC, with nodes showing the correct internal IP. From my perspective this PR looks good. I don't necessarily want to hold this PR up on a doc update, so can we get a follow-up PR to add a VPC topic describing how we are using VPCs by default and what the different configurations on the …

Going to merge it in and open the doc and VPCLess PRs soon.
* add vpc support for capl clusters
* disable kube-proxy, address review comments
* use updated version of linode-ccm
* address review comments
* don't use /etc/hosts for node-ip
* add additional interface using machinetemplate
* rebase fix and address comments
* reduce cognitive complexity, update tests and use ccm v0.4.3 for updated helm chart
* update linode-ccm to v0.4.4
* address review comments, add vpc note

Co-authored-by: Rahul Sharma <[email protected]>
What type of PR is this?
/kind feature
What this PR does / why we need it:
With this change, k8s clusters are provisioned by default within a VPC.
Since nodebalancers don't yet work with VPC subnets, we attach two interfaces to a linode: eth0 keeps the public + private addresses, and eth1 is the VPC interface (see the sketch below).
We have to keep pub+priv on eth0 because we want cloud-init to work within VPC. If eth0 is on the VPC, cloud-init fails unless we explicitly add a route to send its traffic over eth1. Once nodebalancers and cloud-init work natively with VPC, we'll use just one interface.
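As a quick illustration of the resulting layout on a provisioned node (a sketch; the interface roles follow the description above):

```sh
# eth0 carries the public and private addresses;
# eth1 carries the VPC subnet address used for pod-to-pod traffic.
ip -brief addr show eth0
ip -brief addr show eth1
```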
We have cilium installed and configured with native routing (no vxlan). All pod-to-pod traffic goes over eth1 within the VPC.
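For illustration, a minimal sketch of a native-routing Cilium install. These are standard Cilium Helm values, but the exact values the CAPL templates use aren't shown in this excerpt, so treat the specifics (and the CIDR) as assumptions:

```sh
# Hypothetical native-routing setup: no VXLAN overlay, pod traffic routed
# directly over the VPC. routingMode and ipv4NativeRoutingCIDR are standard
# Cilium Helm values (names vary by Cilium version); the CIDR is a placeholder.
# kubeProxyReplacement matches the "disable kube-proxy" commit above.
helm install cilium cilium/cilium -n kube-system \
  --set routingMode=native \
  --set ipv4NativeRoutingCIDR=10.0.0.0/8 \
  --set kubeProxyReplacement=true
```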
One should be able to ping pod IPs from any linode within the VPC.
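For example (the pod IP below is a placeholder):

```sh
# From any linode in the same VPC: pick a pod IP and ping it directly.
kubectl get pods -A -o wide   # note a pod IP, e.g. 10.192.0.5 (placeholder)
ping -c 3 10.192.0.5
```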
To check whether routes were added to the interface, one can use curl as well:
The range returned in the output should match the range present in the node's spec:
k get node <nodename> -o yaml | yq .spec.podCIDRs
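The original curl command isn't shown in this excerpt. As one possibility, a hedged sketch using the Linode API's instance-config endpoint, where the VPC interface's ip_ranges should line up with the node's podCIDRs (IDs and token are placeholders; the actual command used in the PR may differ):

```sh
# Hypothetical check: inspect the instance's config interfaces via the Linode
# API; the VPC interface's ip_ranges should match the node's podCIDRs.
# LINODE_ID and LINODE_TOKEN are placeholders.
curl -s -H "Authorization: Bearer $LINODE_TOKEN" \
  "https://api.linode.com/v4/linode/instances/$LINODE_ID/configs" \
  | jq '.data[].interfaces'
k get node <nodename> -o yaml | yq .spec.podCIDRs
```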
Which issue(s) this PR fixes (optional, in fixes #<issue_number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #
Special notes for your reviewer:
TODOs: