---
title: "EKS Access"
linkTitle: "EKS Access"
date: 2022-01-03
draft: false
weight: 1
description: How to access your Opta EKS Cluster
---

## EKS Access
Because each Kubernetes cluster maintains its own cloud-agnostic RBAC system to govern its usage, extra steps
must be taken on each cloud provider to reconcile the cloud's IAM with the cluster's RBAC. For EKS, this is done
via the `aws-auth` [configmap](https://kubernetes.io/docs/concepts/configuration/configmap/) stored in the `kube-system`
namespace (see [here](https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html) for the official documentation).
This configmap is essentially a mapping stating "AWS IAM user/role X belongs to groups/has permissions A, B, C" in this cluster.
An admin can view it with `kubectl get cm -n kube-system aws-auth -o yaml`; it typically looks like this:
```yaml
apiVersion: v1
data: # NOTE there are separate sections for AWS IAM Users and AWS IAM roles.
  mapRoles: |
    - groups: ['system:bootstrappers', 'system:nodes']
      rolearn: arn:aws:iam::ACCOUNT_ID:role/opta-live-example-dev-eks-default-node-group
      username: system:node:{{EC2PrivateDNSName}}
    - groups: ['system:bootstrappers', 'system:nodes']
      rolearn: arn:aws:iam::ACCOUNT_ID:role/opta-live-example-dev-eks-nodegroup1-node-group
      username: system:node:{{EC2PrivateDNSName}}
    - groups: ['system:masters']
      rolearn: arn:aws:iam::ACCOUNT_ID:role/live-example-dev-live-example-dev-deployerrole
      username: opta-managed
  mapUsers: |
    - groups: ['system:masters']
      userarn: arn:aws:iam::ACCOUNT_ID:user/live-example-dev-live-example-dev-deployeruser
      username: opta-managed
```

> Note: the IAM user/role that created the cluster is always considered root/admin and does not appear in this configmap.

As you can see, each entry has the following fields:
* rolearn/userarn: the ARN of the AWS IAM role/user to link.
* username: the human-friendly Kubernetes username/alias that requests from this IAM identity will appear as.
* groups: the list of Kubernetes RBAC groups to place the role/user in.

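For example, to place an IAM role into an RBAC group, an admin could add an entry like the following to the
`mapRoles` section (e.g. via `kubectl edit cm -n kube-system aws-auth`). This is a hypothetical sketch: the
`developer` role ARN and `developers` group name are illustrative, not something Opta creates for you:

```yaml
# Hypothetical mapRoles entry: maps an illustrative IAM role to a "developers" RBAC group
- groups: ['developers']
  rolearn: arn:aws:iam::ACCOUNT_ID:role/developer
  username: developer
```
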
Please refer to the [official docs](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) for full details, but
note that if you want admin privileges, you simply need the `system:masters` group. For convenience, Opta exposes a
field in the AWS `k8s-base` module called `admin_arns`, where users can quickly add IAM users/roles as admins
without dealing with Kubernetes directly:

```yaml
name: staging
org_name: my-org
providers:
  aws:
    region: us-east-1
    account_id: XXXX # Your 12 digit AWS account id
modules:
  - type: base
  - type: dns
    domain: staging.startup.com
    subdomains:
      - hello
  - type: k8s-cluster
  - type: k8s-base
    admin_arns:
      - "arn:aws:iam::XXXX:user/my-user"
      - "arn:aws:iam::XXXX:role/my-role"
```

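After updating the environment file, re-applying it with the Opta CLI should propagate the new admins to the
`aws-auth` configmap. A minimal sketch, assuming the file above is saved as `staging.yaml`:

```bash
# Re-apply the environment so the new admin ARNs are written to the aws-auth configmap
opta apply -c staging.yaml
```
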
## K8s RBAC Groups
Admittedly, Kubernetes RBAC groups are
[currently difficult to view](https://stackoverflow.com/questions/51612976/how-to-view-members-of-subject-with-group-kind),
but you should be able to see the current ones with the following commands (you will need `jq` installed):
`kubectl get clusterrolebindings -o json | jq -r '.items[] | select(.subjects[0].kind=="Group")'` and
`kubectl get rolebindings -A -o json | jq -r '.items[] | select(.subjects[0].kind=="Group")'` (the latter returns nothing by default).

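Building on those commands, here is a small sketch (again assuming `jq` is installed) that prints only the distinct
group names referenced by ClusterRoleBindings:

```bash
# Print the distinct RBAC groups referenced by ClusterRoleBindings (requires jq)
kubectl get clusterrolebindings -o json \
  | jq -r '.items[] | select(.subjects != null) | .subjects[] | select(.kind == "Group") | .name' \
  | sort -u
```
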
Essentially, an RBAC group is created by creating a ClusterRoleBinding (or RoleBinding for namespace-limited permissions)
between the ClusterRole/Role whose permissions you want to grant and a new or pre-existing Group to grant them to. Take the
following yaml for instance:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-cluster-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:discovery
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: my-group
```

In this case, the ClusterRoleBinding says "give all members of the Group named my-group all the permissions of the
ClusterRole named system:discovery across all namespaces" (you can bind to ServiceAccounts as well; please see the docs
for more details).

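To tie the two mechanisms together, a hypothetical `aws-auth` `mapRoles` entry like the one below (the role ARN is
illustrative) would place an IAM role into `my-group`, so anyone assuming that role receives the `system:discovery`
permissions granted by the binding above:

```yaml
# Hypothetical entry: members of this IAM role join the "my-group" RBAC group bound above
- groups: ['my-group']
  rolearn: arn:aws:iam::ACCOUNT_ID:role/my-discovery-role
  username: my-discovery-role
```
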
## Conclusion
So, to summarize:

* If you wish to add an IAM role/user as an admin in the K8s cluster, use the `admin_arns` field of the
  AWS `k8s-base` module.
* If you wish to give an IAM role/user a set of K8s permissions already captured by a pre-existing group,
  manually add them to the `aws-auth` configmap in the `kube-system` namespace.
* If you wish to create a new K8s group to capture a new set of permissions, do so with RoleBindings/ClusterRoleBindings.