
Deployment on Google GKE


IAM Roles

You need a GCP user account or a service account with the following roles:

  • container.admin
  • compute.admin
  • iam.serviceAccountUser

It may be possible to tighten this further, but that would require creating a custom role with exactly the permissions needed, drawn from the union of the permissions of these three roles.
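If you are going the service account route, a sketch of granting these roles (PROJECT_ID and SA_EMAIL below are placeholders for your project and the service account's email):

# Grant the three roles to an existing service account
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:SA_EMAIL" --role="roles/container.admin"
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:SA_EMAIL" --role="roles/compute.admin"
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:SA_EMAIL" --role="roles/iam.serviceAccountUser"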

Cluster creation

First, you need to create a cluster that meets the following requirements (an example gcloud command is sketched after the list):

  • REQUIRED: does not contain "Alpha" features
  • REQUIRED: uses Ubuntu as the host OS (--image-type UBUNTU)
  • REQUIRED: allows access to all Cloud APIs (for storage to work correctly)
  • REQUIRED: has at least 30 GB local storage / node
  • REQUIRED: has at least 3 nodes with instance type n1-standard-4 (--machine-type=n1-standard-4)
  • OPTIONAL: has preemptible nodes (useful to keep costs low)
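A command along these lines should create a matching cluster; the cluster name, zone, and disk size are placeholders, and --preemptible can be dropped if you don't want preemptible nodes:

# Example cluster creation; replace YOUR_CLUSTER_NAME and YOUR_CLUSTER_ZONE
gcloud container clusters create YOUR_CLUSTER_NAME \
    --zone YOUR_CLUSTER_ZONE \
    --image-type UBUNTU \
    --machine-type n1-standard-4 \
    --num-nodes 3 \
    --disk-size 100 \
    --scopes "https://www.googleapis.com/auth/cloud-platform" \
    --preemptible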

Update to support cgroup swap accounting


Note: These steps are only required if you are using Diego. For Eirini-based deployments you can skip this section.


First, make sure you've set up the cluster and that your gcloud CLI is configured correctly.

In the commands below, make sure to replace YOUR_CLUSTER_NAME and YOUR_CLUSTER_ZONE with the appropriate values.

Note: on macOS, use xargs -I {} in place of xargs -i{} in the commands below.

export CLUSTER_NAME="YOUR_CLUSTER_NAME"

export CLUSTER_ZONE="YOUR_CLUSTER_ZONE"

instance_names=$(gcloud compute instances list --filter="metadata.items.cluster-name=${CLUSTER_NAME:?required}" --format='get(name)')

# Set correct zone
gcloud config set compute/zone ${CLUSTER_ZONE:?required}

# Update kernel command line
echo "$instance_names" | xargs -i{} gcloud compute ssh {} -- "sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT=\"console=ttyS0 net.ifnames=0\"/GRUB_CMDLINE_LINUX_DEFAULT=\"console=ttyS0 net.ifnames=0 swapaccount=1\"/g' /etc/default/grub.d/50-cloudimg-settings.cfg"

# Update grub
echo "$instance_names" | xargs -i{} gcloud compute ssh {} -- "sudo update-grub"

# Restart VMs
echo "$instance_names" | xargs gcloud compute instances reset

Get your kube config

Before doing this, you may want to back up your current ~/.kube/config. Alternatively, if you want to write this kube config to a separate location, set the KUBECONFIG environment variable.

gcloud container clusters get-credentials --zone ${CLUSTER_ZONE:?required} ${CLUSTER_NAME:?required}
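To confirm that the credentials were written and the context is usable, an optional check:

kubectl config current-context
kubectl get nodes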

Install helm

Save the following to a file named gke-helm-sa.yaml.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: helm
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: helm
    namespace: kube-system

Then, create the service account and install helm:

kubectl create -f gke-helm-sa.yaml

# Install the Helm CLI: https://docs.helm.sh/using_helm/#installing-helm

helm init --service-account helm
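If Tiller comes up correctly, helm version should report a server version alongside the client version; an optional check:

kubectl -n kube-system rollout status deployment/tiller-deploy
helm version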

Install UAA and SCF

You'll deploy CAP using the usual procedure described here. Note that you'll need to set the value of the DOMAIN key to an FQDN when you are deploying UAA and SCF services with services.loadbalanced set to true.

Make the following changes in your values.yaml:

  • use overlay-xfs for env.GARDEN_ROOTFS_DRIVER
  • set kube.storage_class.persistent to standard

Example values.yaml:

env:
    # Domain for SCF. DNS for *.DOMAIN must point to a kube node's (not master)
    # external ip address.
    DOMAIN: yourdomain.com
    #### The UAA hostname is hardcoded to uaa.$DOMAIN, so shouldn't be
    #### specified when deploying
    # UAA host/port that SCF will talk to. If you have a custom UAA
    # provide its host and port here. If you are using the UAA that comes
    # with the SCF distribution, simply use the two values below and
    # substitute the cf-dev.io for your DOMAIN used above.
    UAA_HOST: uaa.yourdomain.com
    UAA_PORT: 2793
    GARDEN_ROOTFS_DRIVER: overlay-xfs
kube:
    # The IP address assigned to the kube node pointed to by the domain.
    #### the external_ip setting changed to accept a list of IPs, and was
    #### renamed to external_ips
    storage_class:
        # Make sure to change the value in here to whatever storage class you use
        persistent: "standard"
    # The registry the images will be fetched from. Leaving the registry values unset
    # works for a default installation of openSUSE-based scf containers from Docker Hub.
    # If you are going to deploy SLE-based CAP containers, uncomment the next five lines.
#    registry:
#      hostname: "registry.suse.com"
#      username: ""
#      password: ""
#    organization: "cap"
    auth: rbac
secrets:
    # Password for user 'admin' in the cluster
    CLUSTER_ADMIN_PASSWORD: changeme
    # Password for SCF to authenticate with UAA
    UAA_ADMIN_CLIENT_SECRET: uaa-admin-client-secret
services:
    loadbalanced: true
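With these values in place, deploy UAA through Helm as described in the public docs. A sketch, assuming the suse/uaa chart name and a values file named values.yaml (adjust both to your CAP distribution):

# Deploy UAA and watch the pods come up
helm install suse/uaa --name uaa --namespace uaa --values values.yaml
kubectl get pods --namespace uaa --watch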

Once the UAA deployment is complete, the UAA service will be exposed on a load balancer public IP. The names of these services end in -public. In the following example, the uaa-uaa-public service is exposed on 35.197.11.229, port 2793.

kubectl get svc -n uaa

NAME             TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)
uaa-uaa-public   LoadBalancer   10.23.254.105   35.197.11.229   2793:30206/TCP

Use the DNS service of your choice to set up DNS A records for the service from the previous step. Use the public load balancer IP associated with the service to create domain mappings:

For the uaa service, map the following domains:

uaa.DOMAIN Using the example values, an A record for uaa.yourdomain.com that points to 35.197.11.229 would be created.

*.uaa.DOMAIN Using the example values, an A record for *.uaa.yourdomain.com that points to 35.197.11.229 would be created.
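If you use Cloud DNS, a sketch of creating these records (MANAGED_ZONE is a placeholder; any DNS provider works):

# Create the uaa A records in a Cloud DNS managed zone
gcloud dns record-sets transaction start --zone=MANAGED_ZONE
gcloud dns record-sets transaction add 35.197.11.229 --name=uaa.yourdomain.com. --ttl=300 --type=A --zone=MANAGED_ZONE
gcloud dns record-sets transaction add 35.197.11.229 --name="*.uaa.yourdomain.com." --ttl=300 --type=A --zone=MANAGED_ZONE
gcloud dns record-sets transaction execute --zone=MANAGED_ZONE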

Before deploying scf, ensure the DNS records for the uaa domains have been set up as specified in the previous section. Next, pass your uaa secret and certificate to scf, then use Helm to deploy scf (see the public docs).
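A sketch of that step, following the pattern in the public docs (the suse/cf chart name, the scf release/namespace names, and the values file name are assumptions; adjust them to your CAP distribution):

# Look up the secret holding UAA's internal CA certificate and extract the cert
SECRET=$(kubectl get pods --namespace uaa -o jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
CA_CERT="$(kubectl get secret ${SECRET} --namespace uaa -o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

# Deploy scf, handing it the UAA CA certificate
helm install suse/cf --name scf --namespace scf --values values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}"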

Once the deployment completes, a number of public services will be set up using load balancers, configured with the corresponding load-balancing rules and probes, and with the correct ports opened in the firewall settings.

For example, the list of services that would be made available on public IPs would be similar to:

diego-ssh-ssh-proxy-public LoadBalancer 10.23.249.196 35.197.32.244 2222:31626/TCP

router-gorouter-public LoadBalancer 10.23.248.85 35.197.18.22 80:31213/TCP,443:30823/TCP,4443:32200/TCP

tcp-router-tcp-router-public LoadBalancer 10.23.241.17 35.197.53.74 20000:30307/TCP,20001:30630/TCP,20002:32524/TCP,20003:32344/TCP,20004:31514/TCP,20005:30917/TCP,20006:31568/TCP,20007:30701/TCP,20008:31810/TCP
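These entries come from listing the public services in the scf namespace, e.g. (the namespace name matches the scf release namespace assumed above):

kubectl get svc --namespace scf | grep public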

Use the DNS service of your choice to set up DNS A records for the services from the previous step. Use the public load balancer IP associated with the services to create domain mappings:

For the gorouter service, map the following domains:

DOMAIN Using the example values, an A record for yourdomain.com that points to 35.197.18.22 would be created.

*.DOMAIN Using the example values, an A record for *.yourdomain.com that points to 35.197.18.22 would be created.

For the diego-ssh service, map the following domain:

ssh.DOMAIN Using the example values, an A record for ssh.yourdomain.com that points to 35.197.32.244 would be created.

For the tcp-router service, map the following domain:

tcp.DOMAIN Using the example values, an A record for tcp.yourdomain.com that points to 35.197.53.74 would be created.

Your load balanced deployment of Cloud Application Platform on GKE is now complete. Verify you can access the API endpoint:

cf api --skip-ssl-validation https://api.yourdomain.com
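As a follow-up, you can log in with the cluster admin credentials from values.yaml (user admin, password CLUSTER_ADMIN_PASSWORD) and create an initial org and space, for example:

cf login --skip-ssl-validation -a https://api.yourdomain.com -u admin -p changeme
cf create-org demo
cf create-space demo -o demo
cf target -o demo -s demo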