# Deployment on Google GKE
First, you need to create a cluster that:

- REQUIRED: does not contain "Alpha" features
- REQUIRED: uses Ubuntu as the host OS (`--image-type UBUNTU`)
- REQUIRED: allows access to all Cloud APIs (for storage to work correctly)
- REQUIRED: has at least 30 GB of local storage per node
- REQUIRED: has at least 3 nodes, each with 2 vCPUs and 7 GB memory (`--machine-type=n1-standard-2`)
- OPTIONAL: has preemptible nodes (useful to keep costs low)
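The requirements above can be expressed as a single cluster-creation command. This is a sketch: the cluster name and zone are placeholders, and flag defaults may vary between `gcloud` versions.

```shell
# Sketch of a cluster-creation command matching the requirements above.
# "my-scf-cluster" and the zone are placeholders; adjust to your project.
gcloud container clusters create my-scf-cluster \
  --zone us-west1-a \
  --image-type UBUNTU \
  --machine-type n1-standard-2 \
  --num-nodes 3 \
  --disk-size 100 \
  --scopes cloud-platform \
  --preemptible
```

The `--scopes cloud-platform` flag grants the nodes access to all Cloud APIs, and `--preemptible` is the optional cost-saving setting; drop it if you need stable nodes.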
First, make sure you've set up the cluster and that your `gcloud` CLI is configured correctly. In the commands below, replace `YOUR_CLUSTER_NAME` and `YOUR_CLUSTER_ZONE` with the appropriate values. Note: on macOS, use `xargs -I {}` instead of `xargs -i{}`.
```shell
export CLUSTER_NAME="YOUR_CLUSTER_NAME"
export CLUSTER_ZONE="YOUR_CLUSTER_ZONE"
instance_names=$(gcloud compute instances list --filter=name~${CLUSTER_NAME:?required} --format json | jq --raw-output '.[].name')

# Set correct zone
gcloud config set compute/zone ${CLUSTER_ZONE:?required}

# Update the kernel command line to enable swap accounting
echo "$instance_names" | xargs -i{} gcloud compute ssh {} -- "sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT=\"console=ttyS0 net.ifnames=0\"/GRUB_CMDLINE_LINUX_DEFAULT=\"console=ttyS0 net.ifnames=0 swapaccount=1\"/g' /etc/default/grub.d/50-cloudimg-settings.cfg"

# Update grub
echo "$instance_names" | xargs -i{} gcloud compute ssh {} -- "sudo update-grub"

# Restart the VMs
echo "$instance_names" | xargs gcloud compute instances reset
```
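Once the VMs have come back up, you can check that the change took effect. This assumes `$instance_names` is still set from the commands above.

```shell
# Verify that swap accounting is now on each node's kernel command line
echo "$instance_names" | xargs -i{} gcloud compute ssh {} -- "grep -o swapaccount=1 /proc/cmdline"
```

Each node should print `swapaccount=1`; if any node prints nothing, re-run the grub update and reset for that instance.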
Before doing this, you may want to back up your current `~/.kube/config`.

```shell
gcloud container clusters get-credentials --zone ${CLUSTER_ZONE:?required} ${CLUSTER_NAME:?required}
```
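To confirm that `kubectl` is now pointing at the GKE cluster:

```shell
# The current context should name your GKE cluster, and all nodes should be Ready
kubectl config current-context
kubectl get nodes
```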
Save the following to a file named `gke-helm-sa.yaml`:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: helm
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: helm
  namespace: kube-system
```
Then, create the service account and install Helm:

```shell
kubectl create -f gke-helm-sa.yaml
helm init --service-account helm
```
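`helm init` deploys Tiller into the cluster asynchronously, so it's worth waiting for it before installing any charts:

```shell
# Wait for the Tiller deployment to become ready, then check that the
# Helm client and server versions match
kubectl -n kube-system rollout status deployment/tiller-deploy
helm version
```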
Create the following rules to allow ingress traffic to the cluster:

- Action on match: Allow
- IP ranges: `0.0.0.0/0`
- Protocols and ports: `tcp:80`, `tcp:443`, `tcp:4443`, `tcp:2222`, `tcp:2793`
For example:

```shell
gcloud compute firewall-rules create cfcontainers \
  --description "https://github.com/SUSE/scf/wiki/Deployment-on-Google-GKE#firewall-rules" \
  --direction INGRESS \
  --allow tcp:80,tcp:443,tcp:4443,tcp:2222,tcp:2793 \
  --source-ranges=0.0.0.0/0
```
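You can confirm the rule was created as intended:

```shell
# Show the allowed protocols/ports of the rule created above
gcloud compute firewall-rules describe cfcontainers --format="value(allowed)"
```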
In your Compute Engine VM Instances list, find one of the nodes you've deployed. Find and note its internal IP. Also note the external IP address; you'll need it for the `DOMAIN` of the cluster.
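Instead of browsing the console, you can also pull both addresses from the CLI. This assumes `CLUSTER_NAME` is still exported from the earlier commands.

```shell
# List each node's name, internal IP, and external (NAT) IP
gcloud compute instances list \
  --filter="name~${CLUSTER_NAME:?required}" \
  --format="table(name, networkInterfaces[0].networkIP, networkInterfaces[0].accessConfigs[0].natIP)"
```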
You'll deploy CAP using the usual procedure described here. Make the following changes in your `values.yaml`:

- use `overlay-xfs` for `env.GARDEN_ROOTFS_DRIVER`
- set `kube.storage_class.persistent` to `standard`

Example `values.yaml`:
```yaml
env:
  # Domain for SCF. DNS for *.DOMAIN must point to a kube node's (not master)
  # external ip address.
  DOMAIN: <EXTERNAL IP OF A NODE VM>.nip.io

  #### The UAA hostname is hardcoded to uaa.$DOMAIN, so it shouldn't be
  #### specified when deploying

  # UAA host/port that SCF will talk to. If you have a custom UAA,
  # provide its host and port here. If you are using the UAA that comes
  # with the SCF distribution, simply use the two values below and
  # replace cf-dev.io with the DOMAIN used above.
  UAA_HOST: uaa.<EXTERNAL IP OF A NODE VM>.nip.io
  UAA_PORT: 2793

  GARDEN_ROOTFS_DRIVER: overlay-xfs

kube:
  # The IP address assigned to the kube node pointed to by the domain.
  #### the external_ip setting changed to accept a list of IPs, and was
  #### renamed to external_ips
  external_ips:
  - <INTERNAL IP ADDRESS OF THE NODE VM>

  storage_class:
    # Make sure to change the value in here to whatever storage class you use
    persistent: "standard"

  # The registry the images will be fetched from. Leaving the values below
  # commented out works for a default installation of openSUSE-based SCF
  # containers from Docker Hub. If you are going to deploy SLE-based CAP
  # containers, uncomment the next five lines.
  # registry:
  #   hostname: "registry.suse.com"
  #   username: ""
  #   password: ""
  #   organization: "cap"

  auth: rbac

secrets:
  # Password for user 'admin' in the cluster
  CLUSTER_ADMIN_PASSWORD: changeme

  # Password for SCF to authenticate with UAA
  UAA_ADMIN_CLIENT_SECRET: uaa-admin-client-secret
```
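With the `values.yaml` in place, the "usual procedure" is roughly the following Helm 2 sequence. This is a sketch only: the chart paths (`helm/uaa`, `helm/cf`) and release names are assumptions, so use the charts from the SCF release you downloaded and follow its instructions, in particular for extracting the UAA CA certificate.

```shell
# Sketch of the usual CAP deployment with Helm 2; chart paths and release
# names are assumptions -- adjust to the SCF release you downloaded.
helm install helm/uaa --namespace uaa --values values.yaml --name uaa

# Once UAA is up, extract its internal CA certificate into CA_CERT
# (see the SCF release instructions for the exact command), then deploy SCF:
helm install helm/cf --namespace scf --values values.yaml --name scf \
  --set "secrets.UAA_CA_CERT=${CA_CERT:?see the SCF instructions}"
```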