baremetal: Add cluster admin kubeconfig into a secret #4456
Conversation
/label platform/baremetal
Some additional context: it seems the kubeconfig file is appended to the MCS config on the fly, ref https://github.com/openshift/machine-config-operator/blob/master/pkg/server/cluster_server.go#L116. This means it's missing when we consume the rendered config via the merged MachineConfig object in openshift/cluster-api-provider-baremetal#127, because that path doesn't go via the MCS (due to the network rules discussed previously, ref openshift/machine-config-operator#1690).
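To illustrate the "appended at serve time" behaviour described above, here is a minimal sketch of adding a kubeconfig as a file to an Ignition v2.2 config, the way an MCS-style server might before serving it. The helper name, file path, and mode are assumptions for illustration, not the MCO's actual code.

```go
package mcs

import (
	"encoding/base64"

	igntypes "github.com/coreos/ignition/config/v2_2/types"
)

// appendKubeconfig adds the kubeconfig bytes as a root-owned file to the
// Ignition config that will be served. Path and mode are illustrative
// assumptions; the point is that the file only exists in the served config,
// not in the rendered MachineConfig object.
func appendKubeconfig(conf *igntypes.Config, kubeconfigData []byte) {
	mode := 0600
	conf.Storage.Files = append(conf.Storage.Files, igntypes.File{
		Node: igntypes.Node{
			Filesystem: "root",
			Path:       "/etc/kubernetes/kubeconfig",
		},
		FileEmbedded1: igntypes.FileEmbedded1{
			Mode: &mode,
			Contents: igntypes.FileContents{
				Source: "data:;base64," + base64.StdEncoding.EncodeToString(kubeconfigData),
			},
		},
	})
}
```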
/retest
Are we OK that anyone who has RBAC permissions to get MachineConfigs can get the admin kubeconfig?
This seems reasonable, but I'd like feedback from the MCO team to understand why the MCO generates the kubeconfig on the fly and loads the cert/secret from disk, ref https://github.com/openshift/machine-config-operator/blob/master/pkg/server/cluster_server.go#L116. I'm not clear whether that's a historical artifact or a deliberate choice, e.g. to ensure the admin kubeconfig and related secret aren't accessible directly via the k8s API. The other alternative would be to generate the kubeconfig in cluster-api-provider-baremetal, similar to the MCS, but I'm not clear whether the following cert/secret is sufficient for that?
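For reference, the alternative mentioned here (generating a kubeconfig in the provider from a CA bundle plus client credentials) could look roughly like the sketch below, using client-go's clientcmd API. The server URL, names, and the assumption that suitable client cert/key material is even available are exactly the open question in this comment.

```go
package provider

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// buildKubeconfig assembles a kubeconfig from a CA bundle and client
// certificate/key pair and serializes it to YAML. Purely illustrative;
// whether these inputs are sufficient here is undecided in this thread.
func buildKubeconfig(apiURL string, caCert, clientCert, clientKey []byte) ([]byte, error) {
	cfg := clientcmdapi.Config{
		Clusters: map[string]*clientcmdapi.Cluster{
			"cluster": {
				Server:                   apiURL,
				CertificateAuthorityData: caCert,
			},
		},
		AuthInfos: map[string]*clientcmdapi.AuthInfo{
			"user": {
				ClientCertificateData: clientCert,
				ClientKeyData:         clientKey,
			},
		},
		Contexts: map[string]*clientcmdapi.Context{
			"default": {Cluster: "cluster", AuthInfo: "user"},
		},
		CurrentContext: "default",
	}
	return clientcmd.Write(cfg)
}
```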
I don't see the [...]. All that said, I'm fine with solving this in the installer, provided we can get sufficient feedback on the security aspects.
/retest
I did a little bit of digging and noticed that the content the MCO serves comes from a secret in the [...] namespace. I surmise that the reasoning behind the MCO's decision (to defer adding the contents of the secret until the MCS serves the config) is that this kubeconfig data needs to stay secure and only exist on the master nodes, where the MCS daemonset runs. This also explains why there was an explicit firewall rule on the worker nodes to block access from pods to the MCS. In any case, this PR now creates a similar secret asset in the openshift-machine-api namespace, which the baremetal controller [1] can read and append to the rendered machineconfig, much like the MCS does...
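A minimal sketch of the consuming side described here: the baremetal machine controller fetching the kubeconfig from the Secret in openshift-machine-api before merging it into the rendered userdata. The Secret name (`worker-kubeconfig`) and data key (`kubeconfig`) are assumptions based on this discussion, not necessarily what the provider ends up using.

```go
package provider

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// fetchKubeconfigSecret reads the kubeconfig bytes from an assumed Secret
// in openshift-machine-api so they can be appended to the rendered
// MachineConfig's Ignition userdata, mirroring what the MCS does at serve time.
func fetchKubeconfigSecret(ctx context.Context, client kubernetes.Interface) ([]byte, error) {
	secret, err := client.CoreV1().Secrets("openshift-machine-api").
		Get(ctx, "worker-kubeconfig", metav1.GetOptions{})
	if err != nil {
		return nil, fmt.Errorf("reading kubeconfig secret: %w", err)
	}
	data, ok := secret.Data["kubeconfig"]
	if !ok {
		return nil, fmt.Errorf("secret has no kubeconfig key")
	}
	return data, nil
}
```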
/retest
@staebler hello! Can you PTAL at this now? I've made the kubeconfig a secret in the openshift-machine-api namespace, which we can access from the baremetal machine-controller. Thanks.
/retest
This secret is managed by kubernetes and is associated with the [...]
Looking at openshift/cluster-api-provider-baremetal@5391f7e#diff-2635a4094a1b9403011c9044be0b7adff3ed8d6bcec5cb76f215e95f33e04699R605, it seems to me that the kubeconfig is going to be used by the kubelet on the machines. This should almost certainly not be the admin kubeconfig.
Hi @staebler. Yes, this secret is the kubelet kubeconfig, NOT the admin kubeconfig as originally planned. Sorry if I wasn't clear. I believe the node-bootstrapper-token one lives in the openshift-machine-config-operator namespace and is the exact same thing as the kubelet kubeconfig this PR creates in the openshift-machine-api namespace. Thanks. Here is the proof: 😄 [...]
Oh yeah, I missed that it was using the kubelet kubeconfig. It is still not clear to me why this needs to be laid down by the installer instead of being handled with a service account token by the machine-api-operator.
Closing this PR, as we need to explore ISO-based userdata installs.
This doesn't work for IPI baremetal deployments driven via hive, because there are firewall rules that prevent access to the bootstrap MCS from the pod running the installer. This was implemented in openshift#4427, but we ran into problems making the same approach work for worker machines, ref openshift#4456. We're now looking at other approaches to resolve the network-config requirements driving that work, so switching back to the pointer config for masters seems reasonable, particularly given this issue discovered for hive deployments. Conflicts: pkg/tfvars/baremetal/baremetal.go. This reverts commit 98dc381.
The baremetal platform is pivoting away from using pointer ignition config. It will instead use a combination of the fully rendered ignition config fetched from the MCS to deploy masters [1][2] and the rendered MachineConfig objects for adding workers [3].
This PR saves the admin kubeconfig into a machineconfig asset. This is required so that the baremetal machine controller can write the contents of the kubeconfig onto the workers (see the sketch after the references below for the shape the later revisions of this PR converged on).
[1] #4427
[2] #4413
[3] openshift/cluster-api-provider-baremetal#127
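As a rough illustration of where the discussion above ended up (a Secret in openshift-machine-api carrying the kubelet kubeconfig rather than the admin one), here is a minimal sketch of building such a manifest with client-go types. The Secret name and data key are assumptions for illustration, not necessarily what the PR emits.

```go
package manifests

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

// kubeconfigSecretManifest renders a Secret manifest holding the kubeconfig
// so the baremetal machine controller can read it and lay the file down on
// worker nodes. Name and key are illustrative assumptions.
func kubeconfigSecretManifest(kubeconfig []byte) ([]byte, error) {
	secret := corev1.Secret{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Secret"},
		ObjectMeta: metav1.ObjectMeta{
			Name:      "worker-kubeconfig",
			Namespace: "openshift-machine-api",
		},
		Data: map[string][]byte{"kubeconfig": kubeconfig},
	}
	return yaml.Marshal(secret)
}
```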