Commit 4e2f0bc

all (#178)
1 parent f668ad1 commit 4e2f0bc

12 files changed: +254 -54 lines changed
Lines changed: 106 additions & 0 deletions
@@ -0,0 +1,106 @@
---
title: "EKS Access"
linkTitle: "EKS Access"
date: 2022-01-03
draft: false
weight: 1
description: How to access your Opta EKS Cluster
---

## EKS Access
As each Kubernetes cluster maintains its own cloud-agnostic RBAC system to govern its usage, extra steps
must be taken on each cloud provider to reconcile the given cloud's IAM with the cluster's. For EKS, this is done
via the `aws-auth` [configmap](https://kubernetes.io/docs/concepts/configuration/configmap/) stored in the `kube-system`
namespace (see [here](https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html) for the official documentation).
This configmap is essentially a mapping stating "AWS IAM user/role X is in group / has permissions A, B, C" in this cluster.
An admin can view this configmap with the command `kubectl get cm -n kube-system aws-auth -o yaml`; these configmaps
typically look like this:
```yaml
apiVersion: v1
data: # NOTE there are separate sections for AWS IAM users and AWS IAM roles.
  mapRoles: |
    - groups: ['system:bootstrappers', 'system:nodes']
      rolearn: arn:aws:iam::ACCOUNT_ID:role/opta-live-example-dev-eks-default-node-group
      username: system:node:{{EC2PrivateDNSName}}
    - groups: ['system:bootstrappers', 'system:nodes']
      rolearn: arn:aws:iam::ACCOUNT_ID:role/opta-live-example-dev-eks-nodegroup1-node-group
      username: system:node:{{EC2PrivateDNSName}}
    - groups: ['system:masters']
      rolearn: arn:aws:iam::ACCOUNT_ID:role/live-example-dev-live-example-dev-deployerrole
      username: opta-managed
  mapUsers: |
    - groups: ['system:masters']
      userarn: arn:aws:iam::ACCOUNT_ID:user/live-example-dev-live-example-dev-deployeruser
      username: opta-managed
```

> Note: the IAM user/role who created the cluster is always considered root/admin and does not appear in this configmap.

As you can see, each entry has the following fields:
* rolearn/userarn: the arn of the AWS IAM user/role to link.
* username: the human-friendly name/alias by which to recognize RBAC requests from that user/role.
* groups: the list of Kubernetes RBAC groups to give the role/user access to.

Please refer to the [official docs](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) for full details, but
note that if you want admin privileges, you simply need the `system:masters` group. For convenience, Opta exposes a
field called `admin_arns` in the AWS `k8s-base` module, where users can quickly add IAM users/roles
as admins without dealing with Kubernetes directly.

```yaml
name: staging
org_name: my-org
providers:
  aws:
    region: us-east-1
    account_id: XXXX # Your 12 digit AWS account id
modules:
  - type: base
  - type: dns
    domain: staging.startup.com
    subdomains:
      - hello
  - type: k8s-cluster
  - type: k8s-base
    admin_arns:
      - "arn:aws:iam::XXXX:user/my-user"
      - "arn:aws:iam::XXXX:role/my-role"
```
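
Applying this adds the listed identities to the `system:masters` group in the `aws-auth` configmap. A rough sketch of the resulting entries (the exact generated usernames may differ):

```yaml
mapRoles: |
  - groups: ['system:masters']
    rolearn: arn:aws:iam::XXXX:role/my-role # the role listed in admin_arns
    username: my-role
mapUsers: |
  - groups: ['system:masters']
    userarn: arn:aws:iam::XXXX:user/my-user # the user listed in admin_arns
    username: my-user
```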

## K8s RBAC Groups
Admittedly, Kubernetes RBAC groups are
[currently difficult to view](https://stackoverflow.com/questions/51612976/how-to-view-members-of-subject-with-group-kind),
but you should be able to see details of the current ones with the following commands (you will need `jq` installed):
`kubectl get clusterrolebindings -o json | jq -r '.items[] | select(.subjects[0].kind=="Group")'` and
`kubectl get rolebindings -A -o json | jq -r '.items[] | select(.subjects[0].kind=="Group")'` (the latter returns none by default).

Essentially, an RBAC group is created by creating a ClusterRoleBinding (or a RoleBinding for namespace-limited permissions)
between the ClusterRole/Role whose permissions you want to give and a new or pre-existing Group to grant them to. Take the
following yaml for instance:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-cluster-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:discovery
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: my-group
```

This ClusterRoleBinding says "give all members of the Group named my-group all the permissions of the
ClusterRole named system:discovery across all namespaces" (you can bind to ServiceAccounts as well; please see the docs for
more details).

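To then grant these permissions to an AWS IAM identity, map it into `my-group` via the `aws-auth` configmap described above. A rough sketch (the user arn and username here are hypothetical):

```yaml
mapUsers: |
  - groups: ['my-group'] # the Group bound to system:discovery above
    userarn: arn:aws:iam::ACCOUNT_ID:user/some-developer
    username: some-developer
```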

## Conclusion
So, to summarize:

* If you wish to add an IAM role/user as an admin in the K8s cluster, go ahead and use the `admin_arns` field of the
AWS `k8s-base` module.
* If you wish to give an IAM role/user a different set of K8s permissions already captured by a pre-existing group, go
ahead and manually add them to the `aws-auth` configmap in the `kube-system` namespace.
* If you wish to create a new K8s group capturing a new set of permissions, go ahead and do so with RoleBindings/ClusterRoleBindings.

content/en/Reference/aws/modules/aws-dns.md

Lines changed: 1 addition & 0 deletions
@@ -91,6 +91,7 @@ new apply.
| `domain` | The domain you want (you will also get the subdomains for your use) | `None` | True |
| `delegated` | Set to true once the extra [dns setup is complete](/features/dns-and-cert/dns/) and it will add the ssl certs. | `False` | False |
| `upload_cert` | Deprecated | `False` | False |
+| `linked_module` | The module type (or name if given) to automatically add root dns records for. | `` | False |
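
A minimal sketch of the new `linked_module` input (assuming you want the root dns records to point at the ingress of the `k8s-base` module):

```yaml
- type: dns
  domain: staging.startup.com
  linked_module: k8s-base # the module type, or its name if one was given
```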

## Outputs

content/en/Reference/aws/modules/aws-eks.md

Lines changed: 1 addition & 0 deletions
@@ -29,6 +29,7 @@ For information about the default IAM permissions given to the node group please
| `spot_instances` | A boolean specifying whether to use [spot instances](https://aws.amazon.com/ec2/spot/) for the default nodegroup or not. The spot instances will be configured to have the max price equal to the on-demand price (so no danger of overcharging). _WARNING_: By using spot instances you must accept the real risk of frequent abrupt node terminations and possibly (although extremely rarely) even full blackouts (all nodes die). The former is a small risk, as containers of Opta services will be automatically restarted on surviving nodes; just make sure to specify a minimum of more than 1 container -- Opta by default attempts to spread them out amongst many nodes. The latter is a graver concern, which can be addressed by having multiple node groups of different instance types (see the aws nodegroup module), ideally with at least one non-spot. | `False` | False |
| `enable_metrics` | Enable autoscaling group cloudwatch metrics collection for the default nodegroup. | `False` | False |
| `node_launch_template` | Custom launch template for the underlying ec2s. | `{}` | False |
+| `ami_type` | The AMI type to use for the nodes. For more information, please visit [here](https://docs.aws.amazon.com/eks/latest/APIReference/API_Nodegroup.html#AmazonEKS-Type-Nodegroup-amiType). Note: currently, the "CUSTOM" ami type is not supported. | `AL2_x86_64` | False |
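
A minimal sketch of the new `ami_type` input (assuming the cluster module is declared as `k8s-cluster`, as in Opta AWS environment files; `AL2_x86_64_GPU` is one of the AMI types listed in the AWS reference above):

```yaml
- type: k8s-cluster
  ami_type: AL2_x86_64_GPU # e.g. when the default nodegroup uses gpu-capable instance types
```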

## Outputs

content/en/Reference/aws/modules/aws-k8s-base.md

Lines changed: 5 additions & 3 deletions
@@ -12,7 +12,6 @@ description: Creates base infrastructure for k8s environments
This module is responsible for all the base infrastructure we package into the Opta K8s environments. This includes:

- [Autoscaler](https://github.com/kubernetes/autoscaler) for scaling up and down the ec2s as needed
-- [External DNS](https://github.com/kubernetes-sigs/external-dns) to automatically hook up the ingress to the hosted zone and its domain
- [Ingress Nginx](https://github.com/kubernetes/ingress-nginx) to expose services to the public
- [Metrics server](https://github.com/kubernetes-sigs/metrics-server) for scaling different deployments based on cpu/memory usage
- [Linkerd](https://linkerd.io/) as our service mesh.
@@ -24,7 +23,6 @@ This module is responsible for all the base infrastructure we package into the O

| Name | Description | Default | Required |
| ----------- | ----------- | ------- | -------- |
-| `cert_arn` | The arn of the ACM certificate to use for SSL. By default uses the one created by the DNS module if the module is found and delegation enabled. | `` | False |
| `nginx_high_availability` | Deploy the nginx ingress in a high-availability configuration. | `False` | False |
| `linkerd_high_availability` | Deploy the linkerd service mesh in a high-availability configuration for its control plane. | `False` | False |
| `linkerd_enabled` | Enable the linkerd service mesh installation. | `True` | False |
@@ -36,10 +34,14 @@ This module is responsible for all the base infrastructure we package into the O
| `cert_manager_values` | Certificate Manager helm chart additional values. [Available options](https://artifacthub.io/packages/helm/cert-manager/cert-manager?modal=values) | `{}` | False |
| `linkerd_values` | Linkerd helm chart additional values. [Available options](https://artifacthub.io/packages/helm/linkerd2/linkerd2/2.10.2?modal=values) | `{}` | False |
| `ingress_nginx_values` | Ingress Nginx helm chart additional values. [Available options](https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx/4.0.17?modal=values) | `{}` | False |
+| `domain` | Domain to setup the ingress with. By default uses the one specified in the DNS module if the module is found. | `` | False |
+| `zone_id` | ID of Route53 hosted zone to add a record for. By default uses the one created by the DNS module if the module is found. | `` | False |
+| `cert_arn` | The arn of the ACM certificate to use for SSL. By default uses the one created by the DNS module if the module is found and delegation enabled. | `` | False |
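
A minimal sketch of the three new inputs, for when the DNS module is not in use (all values here are hypothetical placeholders):

```yaml
- type: k8s-base
  domain: staging.startup.com
  zone_id: Z0123456789ABCDEFGHIJ # Route53 hosted zone id
  cert_arn: arn:aws:acm:us-east-1:XXXX:certificate/12345678-abcd-ef01-2345-67890abcdef0
```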

## Outputs


| Name | Description |
| ----------- | ----------- |
-| `load_balancer_raw_dns` | The dns of the network load balancer provisioned to handle ingress to your environment |
+| `load_balancer_raw_dns` | The dns of the network load balancer provisioned to handle ingress to your environment |
+| `load_balancer_arn` | The arn of the network load balancer provisioned to handle ingress to your environment |

content/en/Reference/aws/modules/aws-k8s-service.md

Lines changed: 2 additions & 0 deletions
@@ -182,6 +182,8 @@ Cron Jobs are currently created outside the default linkerd service mesh.
| `ingress_extra_annotations` | These are extra annotations to add to ingress objects | `{}` | False |
| `tolerations` | Taint tolerations to add to the pods. | `[]` | False |
| `cron_jobs` | A list of cronjobs to execute as part of this service | `[]` | False |
+| `pod_annotations` | These are extra annotations to add to k8s-service pod objects | `{}` | False |
+| `timeout` | Time in seconds to wait for deployment. | `300` | False |
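
A rough sketch of the two new inputs on a service (the annotation key and values are hypothetical; other required service fields are omitted):

```yaml
- type: k8s-service
  name: hello
  pod_annotations:
    prometheus.io/scrape: "true" # applied to every pod of this service
  timeout: 600 # wait up to 10 minutes for the deployment to roll out
```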

## Outputs

content/en/Reference/aws/modules/aws-mysql.md

Lines changed: 10 additions & 1 deletion
@@ -17,6 +17,14 @@ Opta will provision your database with 7 days of automatic daily backups in the
You can find them either programmatically via the aws cli, or through the AWS web console (they will be called
system snapshots, and they have a different tab than the manual ones).

+### Performance and Scaling
+
+You can modify the DB instance class with the field `instance_class` in the module configuration.
+
+Storage scaling is automatically managed by AWS Aurora; see the [official documentation](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Performance.html).
+
+To add replicas to an existing cluster, follow the [official guide](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-replicas-adding.html).
+
### Linking

When linked to a k8s-service, it adds connection credentials to your container's environment variables as:
@@ -63,4 +71,5 @@ To those with the permissions, you can view it via the following command (MANIFE
| `engine_version` | The version of the database to use. | `5.7.mysql_aurora.2.04.2` | False |
| `multi_az` | Enable read-write replication across different availability zones in the same region (doubles the cost, but needed for compliance). Can be added and updated at a later date without need to recreate. | `False` | False |
| `backup_retention_days` | How many days to retain backups | `7` | False |
-| `safety` | Add deletion protection to stop accidental db deletions | `False` | False |
+| `safety` | Add deletion protection to stop accidental db deletions | `False` | False |
+| `db_name` | The name of the database to create. Follow naming conventions [here](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Limits.html#RDS_Limits.Constraints) | `app` | False |
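
A minimal sketch combining the new `db_name` field with the `instance_class` field from the Performance and Scaling section (the module type shorthand and values are illustrative; check your Opta version's module reference for exact names):

```yaml
- type: mysql
  instance_class: db.r5.large # larger Aurora instance class than the default
  db_name: mydb
```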

content/en/Reference/aws/modules/aws-nodegroup.md

Lines changed: 3 additions & 2 deletions
@@ -70,6 +70,7 @@ daemonsets to run their agents in each node, so please be careful and read their
| `min_nodes` | Min number of nodes to allow via autoscaling | `3` | False |
| `node_disk_size` | The size of disk to give the nodes' ec2s in GB. | `20` | False |
| `node_instance_type` | The [ec2 instance type](https://aws.amazon.com/ec2/instance-types/) for the nodes. | `t3.medium` | False |
-| `use_gpu` | Should we expect and use the gpus present in the ec2? | `False` | False |
| `spot_instances` | A boolean specifying whether to use [spot instances](https://aws.amazon.com/ec2/spot/) for this nodegroup or not. The spot instances will be configured to have the max price equal to the on-demand price (so no danger of overcharging). _WARNING_: By using spot instances you must accept the real risk of frequent abrupt node terminations and possibly (although extremely rarely) even full blackouts (all nodes die). The former is a small risk, as containers of Opta services will be automatically restarted on surviving nodes; just make sure to specify a minimum of more than 1 container -- Opta by default attempts to spread them out amongst many nodes. The latter is a graver concern, which can be addressed by having multiple node groups of different instance types (see the aws nodegroup module), ideally with at least one non-spot. | `False` | False |
-| `taints` | Taints to add to the nodes in this nodegroup. | `[]` | False |
+| `taints` | Taints to add to the nodes in this nodegroup. | `[]` | False |
+| `use_gpu` | Should we expect and use the gpus present in the ec2? Note: this input will be deprecated in the coming releases. Please switch to using `ami_type`. Usage: if using `use_gpu: false`, just remove it; if using `use_gpu: true`, replace it with `ami_type: AL2_x86_64_GPU`. | `False` | False |
+| `ami_type` | The AMI type to use for the nodes. For more information, please visit [here](https://docs.aws.amazon.com/eks/latest/APIReference/API_Nodegroup.html#AmazonEKS-Type-Nodegroup-amiType). Note: currently, the "CUSTOM" ami type is not supported. | `AL2_x86_64` | False |
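
A minimal sketch of the `use_gpu` to `ami_type` migration described above (the `nodegroup` module type shorthand is illustrative):

```yaml
# Before (soon to be deprecated):
- type: nodegroup
  use_gpu: true

# After:
- type: nodegroup
  ami_type: AL2_x86_64_GPU
```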

content/en/Reference/aws/modules/aws-postgres.md

Lines changed: 21 additions & 1 deletion
@@ -17,6 +17,15 @@ Opta will provision your database with 7 days of automatic daily backups in the
You can find them either programmatically via the aws cli, or through the AWS web console (they will be called
system snapshots, and they have a different tab than the manual ones).

+### Performance and Scaling
+
+You can modify the DB instance class with the field `instance_class` in the module configuration.
+
+Storage scaling is automatically managed by AWS Aurora; see the [official documentation](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Performance.html).
+
+To add replicas to an existing cluster, follow the [official guide](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-replicas-adding.html).
+
### Linking

When linked to a k8s-service, it adds connection credentials to your container's environment variables as:
@@ -63,4 +72,15 @@ To those with the permissions, you can view it via the following command (MANIFE
| `engine_version` | The version of the database to use. | `11.9` | False |
| `multi_az` | Enable read-write replication across different availability zones in the same region (doubles the cost, but needed for compliance). Can be added and updated at a later date without need to recreate. | `False` | False |
| `safety` | Add deletion protection to stop accidental db deletions | `False` | False |
-| `backup_retention_days` | How many days to retain backups | `7` | False |
+| `backup_retention_days` | How many days to retain backups | `7` | False |
+| `extra_security_groups_ids` | Ids of extra AWS security groups to add to the database | `[]` | False |
+| `create_global_database` | Create an Aurora Global database with this db as the master/writer | `False` | False |
+| `existing_global_database_id` | ID of the Aurora global database to attach | `None` | False |
+| `database_name` | The name of the database to create. Follow naming conventions [here](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Limits.html#RDS_Limits.Constraints) | `app` | False |
+
+## Outputs
+
+
+| Name | Description |
+| ----------- | ----------- |
+| `global_database_id` | The id of the global database, if created |
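
A minimal sketch of wiring the new global-database fields across two environments (the module type shorthand and the id wiring are illustrative):

```yaml
# Environment in the primary region: creates the global database
- type: postgres
  create_global_database: true

# Environment in a secondary region: attaches to it using the
# primary's `global_database_id` output
- type: postgres
  existing_global_database_id: "<global_database_id from the primary>"
```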
