Commit

all (#178)

NitinAgg authored Apr 13, 2022
1 parent f668ad1 commit 4e2f0bc

Showing 12 changed files with 254 additions and 54 deletions.
106 changes: 106 additions & 0 deletions content/en/Reference/aws/eks_access.md
@@ -0,0 +1,106 @@
---
title: "EKS Access"
linkTitle: "EKS Access"
date: 2022-01-03
draft: false
weight: 1
description: How to access your Opta EKS Cluster
---

## EKS Access
As each Kubernetes cluster maintains its own cloud-agnostic RBAC system to govern its usage, extra steps
must be taken on each cloud provider to reconcile the given cloud's IAM with the cluster's. For EKS, this is done
via the `aws-auth` [configmap](https://kubernetes.io/docs/concepts/configuration/configmap/) stored in the `kube-system`
namespace (see [here](https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html) for the official documentation).
This configmap is essentially a mapping stating "AWS IAM user/role X is in groups/has permissions A, B, C" in this cluster.
An admin can view this configmap via the command `kubectl get cm -n kube-system aws-auth -o yaml`, and it
typically looks like so:
```yaml
apiVersion: v1
data: # NOTE there are separate sections for AWS IAM Users and AWS IAM roles.
mapRoles: |
- groups: ['system:bootstrappers', 'system:nodes']
rolearn: arn:aws:iam::ACCOUNT_ID:role/opta-live-example-dev-eks-default-node-group
username: system:node:{{EC2PrivateDNSName}}
- groups: ['system:bootstrappers', 'system:nodes']
rolearn: arn:aws:iam::ACCOUNT_ID:role/opta-live-example-dev-eks-nodegroup1-node-group
username: system:node:{{EC2PrivateDNSName}}
- groups: ['system:masters']
rolearn: arn:aws:iam::ACCOUNT_ID:role/live-example-dev-live-example-dev-deployerrole
username: opta-managed
mapUsers: |
- groups: ['system:masters']
userarn: arn:aws:iam::ACCOUNT_ID:user/live-example-dev-live-example-dev-deployeruser
username: opta-managed
```
> Note: the IAM user/role which created the cluster is always considered root/admin and does not appear in this configmap.

As you can see, each entry has the following fields:
* rolearn/userarn: the ARN of the AWS IAM role/user to link.
* username: the human-friendly, distinct name/alias under which to recognize the RBAC requests of this role/user.
* groups: the list of Kubernetes RBAC groups to give the role/user access to.

Please refer to the [official docs](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) for full details, but
note that if you want admin privileges, you simply need the `system:masters` group. For convenience, Opta exposes a
field in the AWS `k8s-base` module known as `admin_arns`, where users can quickly list IAM users/roles to
add as admins without dealing with Kubernetes directly.

```yaml
name: staging
org_name: my-org
providers:
aws:
region: us-east-1
account_id: XXXX # Your 12 digit AWS account id
modules:
- type: base
- type: dns
domain: staging.startup.com
subdomains:
- hello
- type: k8s-cluster
- type: k8s-base
admin_arns:
- "arn:aws:iam::XXXX:user/my-user"
- "arn:aws:iam::XXXX:role/my-role"
```

## K8s RBAC Groups
Admittedly, Kubernetes RBAC groups are
[currently difficult to view](https://stackoverflow.com/questions/51612976/how-to-view-members-of-subject-with-group-kind),
but you should be able to see details of the current ones with the following commands (you will need `jq` installed):
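```bash
# ClusterRoleBindings whose first subject is a Group:
kubectl get clusterrolebindings -o json | jq -r '.items[] | select(.subjects[0].kind=="Group")'

# Namespaced RoleBindings whose first subject is a Group (none exist by default):
kubectl get rolebindings -A -o json | jq -r '.items[] | select(.subjects[0].kind=="Group")'
```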

Essentially, an RBAC group is created by creating a ClusterRoleBinding (or RoleBinding for namespace-limited permissions)
between the ClusterRole/Role whose permissions you want to grant and a new or pre-existing Group to grant them to. Take the
following yaml for instance:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: my-cluster-role-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:discovery
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: my-group
```

In this case, the ClusterRoleBinding says "give all members of the Group named my-group all the permissions of the
ClusterRole named system:discovery in all namespaces" (you can bind to ServiceAccounts as well; please see the docs for
more details).
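To then place an AWS identity into that group, you would reference `my-group` from the `aws-auth` configmap described
above. A minimal sketch, assuming a hypothetical IAM role (replace the ARN and username with your own):

```yaml
mapRoles: |
  - groups: ['my-group']
    rolearn: arn:aws:iam::ACCOUNT_ID:role/my-readonly-role # hypothetical role; receives only my-group's permissions
    username: my-readonly-role
```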

## Conclusion
So, to summarize:

* If you wish to add an IAM role/user to be an admin in the K8s cluster, go ahead and use the `admin_arns` field for the
AWS `k8s-base` module
* If you wish to add an IAM role/user to a different set of K8s permissions already found in a pre-existing group, go
  ahead and manually add them in the `aws-auth` configmap in the `kube-system` namespace (one way to do this is shown below)
* If you wish to create a new K8s group to capture a new set of permissions, go ahead and do so with RoleBindings/ClusterRoleBindings.
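For the manual edit mentioned above, a minimal sketch (this opens the live configmap in your default editor; which
entries you add under `mapRoles`/`mapUsers` is up to you):

```bash
# Requires admin access to the cluster
kubectl edit configmap aws-auth -n kube-system
```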
1 change: 1 addition & 0 deletions content/en/Reference/aws/modules/aws-dns.md
@@ -91,6 +91,7 @@ new apply.
| `domain` | The domain you want (you will also get the subdomains for your use) | `None` | True |
| `delegated` | Set to true once the extra [dns setup is complete](/features/dns-and-cert/dns/) and it will add the ssl certs. | `False` | False |
| `upload_cert` | Deprecated | `False` | False |
| `linked_module` | The module type (or name if given) to automatically add root dns records for. | `` | False |
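A sketch of how this new input might be set in an environment yaml, assuming the root records should point at the
`k8s-base` module (the domain and other values are illustrative):

```yaml
  - type: dns
    domain: staging.startup.com
    delegated: true
    linked_module: k8s-base
```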

## Outputs

1 change: 1 addition & 0 deletions content/en/Reference/aws/modules/aws-eks.md
@@ -29,6 +29,7 @@ For information about the default IAM permissions given to the node group please
| `spot_instances` | A boolean specifying whether to use [spot instances](https://aws.amazon.com/ec2/spot/) for the default nodegroup or not. The spot instances will be configured to have the max price equal to the on-demand price (so no danger of overcharging). _WARNING_: By using spot instances you must accept the real risk of frequent abrupt node terminations and possibly (although extremely rarely) even full blackouts (all nodes die). The former is a small risk as containers of Opta services will be automatically restarted on surviving nodes. So just make sure to specify a minimum of more than 1 container -- Opta by default attempts to spread them out amongst many nodes. The latter is a graver concern which can be addressed by having multiple node groups of different instance types (see aws nodegroup module) and ideally at least one non-spot. | `False` | False |
| `enable_metrics` | Enable autoscaling group cloudwatch metrics collection for the default nodegroup. | `False` | False |
| `node_launch_template` | Custom launch template for the underlying ec2s. | `{}` | False |
| `ami_type` | The AMI type to use for the nodes. For more information about this, please visit [here](https://docs.aws.amazon.com/eks/latest/APIReference/API_Nodegroup.html#AmazonEKS-Type-Nodegroup-amiType). Note: Currently, the "CUSTOM" ami type is not supported. | `AL2_x86_64` | False |

## Outputs

8 changes: 5 additions & 3 deletions content/en/Reference/aws/modules/aws-k8s-base.md
@@ -12,7 +12,6 @@ description: Creates base infrastructure for k8s environments
This module is responsible for all the base infrastructure we package into the Opta K8s environments. This includes:

- [Autoscaler](https://github.com/kubernetes/autoscaler) for scaling up and down the ec2s as needed
- [External DNS](https://github.com/kubernetes-sigs/external-dns) to automatically hook up the ingress to the hosted zone and its domain
- [Ingress Nginx](https://github.com/kubernetes/ingress-nginx) to expose services to the public
- [Metrics server](https://github.com/kubernetes-sigs/metrics-server) for scaling different deployments based on cpu/memory usage
- [Linkerd](https://linkerd.io/) as our service mesh.
@@ -24,7 +23,6 @@ This module is responsible for all the base infrastructure we package into the O

| Name | Description | Default | Required |
| ----------- | ----------- | ------- | -------- |
| `cert_arn` | The arn of the ACM certificate to use for SSL. By default uses the one created by the DNS module if the module is found and delegation enabled. | `` | False |
| `nginx_high_availability` | Deploy the nginx ingress in a high-availability configuration. | `False` | False |
| `linkerd_high_availability` | Deploy the linkerd service mesh in a high-availability configuration for its control plane. | `False` | False |
| `linkerd_enabled` | Enable the linkerd service mesh installation. | `True` | False |
Expand All @@ -36,10 +34,14 @@ This module is responsible for all the base infrastructure we package into the O
| `cert_manager_values` | Certificate Manager helm chart additional values. [Available options](https://artifacthub.io/packages/helm/cert-manager/cert-manager?modal=values) | `{}` | False |
| `linkerd_values` | Linkerd helm chart additional values. [Available options](https://artifacthub.io/packages/helm/linkerd2/linkerd2/2.10.2?modal=values) | `{}` | False |
| `ingress_nginx_values` | Ingress Nginx helm chart additional values. [Available options](https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx/4.0.17?modal=values) | `{}` | False |
| `domain` | Domain to setup the ingress with. By default uses the one specified in the DNS module if the module is found. | `` | False |
| `zone_id` | ID of Route53 hosted zone to add a record for. By default uses the one created by the DNS module if the module is found. | `` | False |
| `cert_arn` | The arn of the ACM certificate to use for SSL. By default uses the one created by the DNS module if the module is found and delegation enabled. | `` | False |
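
A sketch of how the `domain`, `zone_id`, and `cert_arn` inputs could be set when DNS is managed outside of Opta's dns
module; the zone ID and certificate ARN below are placeholders to replace with your own:

```yaml
  - type: k8s-base
    domain: staging.startup.com
    zone_id: ZXXXXXXXXXXXXX
    cert_arn: "arn:aws:acm:us-east-1:XXXX:certificate/XXXX"
```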

## Outputs


| Name | Description |
| ----------- | ----------- |
| `load_balancer_raw_dns` | The dns of the network load balancer provisioned to handle ingress to your environment |
| `load_balancer_arn` | The arn of the network load balancer provisioned to handle ingress to your environment |
2 changes: 2 additions & 0 deletions content/en/Reference/aws/modules/aws-k8s-service.md
@@ -182,6 +182,8 @@ Cron Jobs are currently created outside the default linkerd service mesh.
| `ingress_extra_annotations` | These are extra annotations to add to ingress objects | `{}` | False |
| `tolerations` | Taint tolerations to add to the pods. | `[]` | False |
| `cron_jobs` | A list of cronjobs to execute as part of this service | `[]` | False |
| `pod_annotations` | These are extra annotations to add to k8s-service pod objects | `{}` | False |
| `timeout` | Time in seconds to wait for deployment. | `300` | False |
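
A rough sketch of how the new `pod_annotations` and `timeout` inputs might appear in a service yaml; all other fields
are illustrative and should match your existing service definition:

```yaml
  - type: k8s-service
    name: hello
    image: AUTO
    port:
      http: 80
    pod_annotations:
      prometheus.io/scrape: "true" # example annotation; use whatever your tooling expects
    timeout: 600 # wait up to 10 minutes for the deployment to finish
```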

## Outputs

11 changes: 10 additions & 1 deletion content/en/Reference/aws/modules/aws-mysql.md
@@ -17,6 +17,14 @@ Opta will provision your database with 7 days of automatic daily backups in the
You can find them either programmatically via the aws cli, or through the AWS web console (they will be called
system snapshots, and they have a different tab than the manual ones).

### Performance and Scaling

You can modify the DB instance class with the field `instance_class` in the module configuration.

Storage scaling is automatically managed by AWS Aurora, see the [official documentation](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Performance.html).

To add replicas to an existing cluster, follow the [official guide](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-replicas-adding.html).
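
A sketch of overriding the instance class, assuming the database module is declared with the `aws-mysql` type in your
yaml (adjust the type/name to match your setup); the class shown is just an example:

```yaml
  - type: aws-mysql
    instance_class: db.r5.large # any Aurora MySQL-compatible instance class
```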

### Linking

When linked to a k8s-service, it adds connection credentials to your container's environment variables as:
@@ -63,4 +71,5 @@ To those with the permissions, you can view it via the following command (MANIFE
| `engine_version` | The version of the database to use. | `5.7.mysql_aurora.2.04.2` | False |
| `multi_az` | Enable read-write replication across different availability zones in the same region (doubles the cost, but needed for compliance). Can be added and updated at a later date without needing to recreate. | `False` | False |
| `backup_retention_days` | How many days to retain the backups | `7` | False |
| `safety` | Add deletion protection to stop accidental db deletions | `False` | False |
| `db_name` | The name of the database to create. Follow naming conventions [here](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Limits.html#RDS_Limits.Constraints) | `app` | False |
5 changes: 3 additions & 2 deletions content/en/Reference/aws/modules/aws-nodegroup.md
@@ -70,6 +70,7 @@ daemonsets to run their agents in each node, so please be careful and read their
| `min_nodes` | Min number of nodes to allow via autoscaling | `3` | False |
| `node_disk_size` | The size of disk to give the nodes' ec2s in GB. | `20` | False |
| `node_instance_type` | The [ec2 instance type](https://aws.amazon.com/ec2/instance-types/) for the nodes. | `t3.medium` | False |
| `spot_instances` | A boolean specifying whether to use [spot instances](https://aws.amazon.com/ec2/spot/) for the default nodegroup or not. The spot instances will be configured to have the max price equal to the on-demand price (so no danger of overcharging). _WARNING_: By using spot instances you must accept the real risk of frequent abrupt node terminations and possibly (although extremely rarely) even full blackouts (all nodes die). The former is a small risk as containers of Opta services will be automatically restarted on surviving nodes. So just make sure to specify a minimum of more than 1 container -- Opta by default attempts to spread them out amongst many nodes. The latter is a graver concern which can be addressed by having multiple node groups of different instance types (see aws nodegroup module) and ideally at least one non-spot. | `False` | False |
| `taints` | Taints to add to the nodes in this nodegroup. | `[]` | False |
| `use_gpu` | Should we expect and use the gpus present in the ec2? Note: This input will be deprecated in the coming releases. Please switch to using `ami_type`. Usage: If using `use_gpu: false`, just remove it. If using `use_gpu: true`, replace it with `ami_type: AL2_x86_64_GPU` | `False` | False |
| `ami_type` | The AMI type to use for the nodes. For more information about this, please visit [here](https://docs.aws.amazon.com/eks/latest/APIReference/API_Nodegroup.html#AmazonEKS-Type-Nodegroup-amiType). Note: Currently, the "CUSTOM" ami type is not supported. | `AL2_x86_64` | False |
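
A sketch of that migration, assuming the nodegroup is declared with the `aws-nodegroup` type in your environment yaml
(adjust the type/name to match your setup):

```yaml
  # Before (deprecated):
  - type: aws-nodegroup
    name: nodegroup1
    use_gpu: true

  # After:
  - type: aws-nodegroup
    name: nodegroup1
    ami_type: AL2_x86_64_GPU
```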
22 changes: 21 additions & 1 deletion content/en/Reference/aws/modules/aws-postgres.md
@@ -17,6 +17,15 @@ Opta will provision your database with 7 days of automatic daily backups in the
You can find them either programmatically via the aws cli, or through the AWS web console (they will be called
system snapshots, and they have a different tab than the manual ones).

### Performance and Scaling

You can modify the DB instance class with the field `instance_class` in the module configuration.

Storage scaling is automatically managed by AWS Aurora, see the [official documentation](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Performance.html).

To add replicas to an existing cluster, follow the [official guide](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-replicas-adding.html).


### Linking

When linked to a k8s-service, it adds connection credentials to your container's environment variables as:
@@ -63,4 +72,15 @@ To those with the permissions, you can view it via the following command (MANIFE
| `engine_version` | The version of the database to use. | `11.9` | False |
| `multi_az` | Enable read-write replication across different availability zones in the same region (doubles the cost, but needed for compliance). Can be added and updated at a later date without needing to recreate. | `False` | False |
| `safety` | Add deletion protection to stop accidental db deletions | `False` | False |
| `backup_retention_days` | How many days to retain the backups | `7` | False |
| `extra_security_groups_ids` | Ids of extra AWS security groups to add to the database | `[]` | False |
| `create_global_database` | Create an Aurora Global database with this db as the master/writer | `False` | False |
| `existing_global_database_id` | ID of the Aurora global database to attach | `None` | False |
| `database_name` | The name of the database to create. Follow naming conventions [here](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Limits.html#RDS_Limits.Constraints) | `app` | False |

## Outputs


| Name | Description |
| ----------- | ----------- |
| `global_database_id` | The id of the global database, if created |
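
A sketch of how the new global-database inputs might be combined across two environments; the module type, names, and
the way the `global_database_id` output is passed are assumptions to adapt to your own setup:

```yaml
# In the primary environment (writer):
  - type: aws-postgres
    name: primarydb
    create_global_database: true

# In a secondary environment/region (reader), attached via the id exposed above:
  - type: aws-postgres
    name: replicadb
    existing_global_database_id: "<global_database_id output of the primary>"
```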