✨ Adding CAPI Provisioner to vc-manager #136
Conversation
}
-	return ctrl.Result{}, nil
+	return ctrl.Result{RequeueAfter: 5 * time.Second}, nil
For now this will just constantly poll for changes until the status is provisioned; in the long run this should be triggered by the NCP status updating instead.
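For readers skimming the diff, a minimal sketch of the polling pattern being discussed; the type and helper names below are hypothetical stand-ins for the PR's `ReconcileCAPIVirtualCluster`, not its actual code:

```go
package controllers

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// reconcilerSketch stands in for the PR's ReconcileCAPIVirtualCluster.
type reconcilerSketch struct {
	client.Client
}

// provisioned is a hypothetical helper; the real controller inspects the
// CAPI Cluster / VirtualCluster status instead.
func (r *reconcilerSketch) provisioned(ctx context.Context, key types.NamespacedName) bool {
	return false // placeholder
}

func (r *reconcilerSketch) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	if !r.provisioned(ctx, req.NamespacedName) {
		// Nothing currently emits an event when the NCP status changes, so
		// requeue and keep polling until the status flips to provisioned.
		return ctrl.Result{RequeueAfter: 5 * time.Second}, nil
	}
	// Provisioned: return without a requeue and rely on normal watch events.
	return ctrl.Result{}, nil
}
```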
@@ -0,0 +1,128 @@
Apparently the role.yaml wasn't working because the comments were nested under a function. Moving them allows controller-gen to regenerate these; I'm not sure if they are 100% correct, but this work can be punted to #130.
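For illustration, a hedged sketch of the marker placement difference; the specific RBAC rule and role name shown are examples, not the PR's generated set:

```go
package controllers

// controller-gen only picks up kubebuilder markers attached to top-level
// declarations; markers buried inside a function body are ignored, which is
// why role.yaml was not being regenerated.

// Ignored: the marker lives inside the function body.
func setupIgnored() {
	// +kubebuilder:rbac:groups=cluster.x-k8s.io,resources=clusters,verbs=get;list;watch
}

// Picked up: the marker is part of the declaration's comment block, so a run
// of `controller-gen rbac:roleName=manager-role paths=./...` regenerates the
// corresponding rules in role.yaml.
// +kubebuilder:rbac:groups=cluster.x-k8s.io,resources=clusters,verbs=get;list;watch
func setupPickedUp() {}
```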
image: virtualcluster/manager-amd64
-  imagePullPolicy: Always
+  imagePullPolicy: Never
This will be handled properly with #130 using kustomize to set them differently based on the env/stage; for now `Never` allows us to build images locally and load them into kind for testing.
github.com/moby/spdystream => github.com/moby/spdystream v0.2.0
k8s.io/api => k8s.io/api v0.21.1
k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.21.1
k8s.io/apimachinery => k8s.io/apimachinery v0.21.1
You'll notice this also updates us to v0.21 for the latest Controller Runtime version that CAPI v1alpha4 shipped with.
Force-pushed from 25ff104 to b25255e
/assign @Fei-Guo @charleszheng44 @weiling61
Awesome work! Nits.
virtualcluster/pkg/controller/controllers/capi_virtualcluster_controller.go (outdated review thread, resolved)
Signed-off-by: Chris Hein <[email protected]>
Force-pushed from b25255e to 3799f8b
Thanks @Fei-Guo, updated based on your feedback.
return ctrl.NewControllerManagedBy(mgr).
	WithOptions(opts).
	For(&tenancyv1alpha1.VirtualCluster{}).
	Owns(&clusterv1.Cluster{}).
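For readers following the thread below, a minimal sketch of the mechanics being debated: `Owns(&clusterv1.Cluster{})` can only enqueue the VirtualCluster if the Cluster carries a controller reference pointing back at it. The helper below is hypothetical (it takes a generic `client.Object` to avoid assuming the exact VirtualCluster import path):

```go
package controllers

import (
	"k8s.io/apimachinery/pkg/runtime"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha4"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// linkClusterToVC marks the VirtualCluster as the controller owner of the
// CAPI Cluster. With this owner reference in place, the Owns() watch above
// maps Cluster status updates back to the owning VirtualCluster and enqueues
// it for reconcile; without it, those events have no owner to map to. The
// trade-off raised in the thread is that an owner reference also wires up
// garbage collection, so deleting the VC would cascade-delete the Cluster.
func linkClusterToVC(vc client.Object, cluster *clusterv1.Cluster, scheme *runtime.Scheme) error {
	return controllerutil.SetControllerReference(vc, cluster, scheme)
}
```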
I think setting an owner reference for the Cluster object is a must now; otherwise the vc-manager will miss the event of a Cluster state update?
Sorta. Right now, if the cluster isn't up we blindly retry until successful, which accounts for this case; the code I removed wasn't being used at all since I didn't set an ownerref.
My concern is I'm not sure I fully agree that a VC should be the owner of the Cluster. There could be a case where you want to move it and not delete all the resources, and having an ownerref would set up GC to delete it.
This all happens right here: https://github.com/kubernetes-sigs/cluster-api-provider-nested/pull/136/files#diff-607056cc549543fc88ebc9695705629f15915ccaac3161e854845ca5a2f0b548R149-R159
We can definitely improve that too as we try to figure out the right model for how this wires together.
My concern is that if the VC is not the Cluster owner, then once the Cluster becomes ready the event will not trigger the VC reconciler, because the Cluster does not have an owner. Then the VC status cannot be updated to ready unless we add a periodic check. Am I missing anything?
Yeah, check the link in that last message: if the cluster is in `Provisioning` (which is auto-set if blank), then I return `ctrl.Result{RequeueAfter: 1*time.Second}, nil`.
I see. Since the VC lifecycle and Cluster lifecycle are completely decoupled now, polling in the VC controller may not be ideal since it may take a long time until the cluster is ready. It is OK for now; moving forward we can set up a Cluster `Watch` manually and implement a custom enqueue struct so that we can add the matching VC to the queue by checking the annotation.
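A hedged sketch of what that manual watch could look like with the controller-runtime release this PR pins (v0.9-era builder API; newer releases changed these signatures). The annotation key and function names are assumptions, not anything defined in the PR:

```go
package controllers

import (
	"k8s.io/apimachinery/pkg/types"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha4"
	"sigs.k8s.io/controller-runtime/pkg/builder"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
	"sigs.k8s.io/controller-runtime/pkg/source"
)

// vcAnnotationKey is hypothetical; it stands for whatever annotation the
// Cluster carries that names its matching VirtualCluster.
const vcAnnotationKey = "tenancy.x-k8s.io/virtualcluster"

// clusterToVirtualCluster maps a Cluster event to the VirtualCluster that
// should be reconciled by reading the annotation; returning nil drops events
// for Clusters that were not created from a VirtualCluster.
func clusterToVirtualCluster(obj client.Object) []reconcile.Request {
	name, ok := obj.GetAnnotations()[vcAnnotationKey]
	if !ok {
		return nil
	}
	return []reconcile.Request{{
		NamespacedName: types.NamespacedName{Namespace: obj.GetNamespace(), Name: name},
	}}
}

// withClusterWatch adds the manual Watch to the builder chain shown in the
// diff above (after For/Owns, before Complete), replacing requeue-based
// polling with event-driven enqueues.
func withClusterWatch(b *builder.Builder) *builder.Builder {
	return b.Watches(
		&source.Kind{Type: &clusterv1.Cluster{}},
		handler.EnqueueRequestsFromMapFunc(clusterToVirtualCluster),
	)
}
```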
Yeah, let me file an issue to track improving this, because I 100% agree.
LGTM. I will let Chao give the final approval.
@christopherhein Everything works as expected until I created a pod on the tenant cluster. The pod hangs, and the syncer logs show: `I0617 16:08:03.453969 1 syncer.go:383] cluster default shutdown: Get "https://cluster-sample-apiserver:6443/api?timeout=30s": dial tcp 127.0.0.1:6443: connect: connection refused` That's weird, because I was able to do port forwarding on
@charleszheng44 It looks like you are using the wrong specs. Can you delete that cluster and then redo it, making sure that the Cluster object includes the namespace in the
if c.ProvisionerName == "capi" {
	if err := (&controllers.ReconcileCAPIVirtualCluster{
		Client: mgr.GetClient(),
		Log:    c.Log.WithName("virtualcluster"),
Should we change the log name here to distinguish between the different controllers?
We're still reconciling `virtualcluster` resources, so I don't think so, and they can't be turned on alongside other reconcilers since the selection is done via the `--master-prov` flag. Also, we don't do this with `native` vs `aliyun`. So I would say no, but I'm open to it.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: charleszheng44, christopherhein
/lgtm
What this PR does / why we need it:
This adds integration points between VC and CAPN, but more so CAPI, as it doesn't use any CAPN components and relies solely on CAPI's `v1alpha4.Cluster{}` resource. The `templates/cluster-template-virtualcluster.yaml` is a `clusterctl` `--flavor` for auto-configuring the VirtualCluster CR with the cluster.

Testing:
- cluster-api checked out at `master`, with `clusterctl` built from `master`
- `cd virtualcluster/`
- `make build-images`
- `kind load docker-image virtualcluster/vn-agent-amd64 && kind load docker-image virtualcluster/syncer-amd64 && kind load docker-image virtualcluster/manager-amd64`
- `kubectl apply -f config/crd/`
- `kubectl apply -f config/setup/all_in_one_capi.yaml`
- `../cluster-api/bin/clusterctl generate cluster ${CLUSTER_NAME} --from templates/cluster-template-virtualcluster.yaml | k apply -f -`
Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged):

Fixes #81
Fixes #135
/milestone v0.1.x