This is a proof of concept (not supported for production deployments) that uses Terraform to launch a set of machines on GCE. It then uses kubeadm to automatically bootstrap a Kubernetes cluster. Simple networking is provided via a combination of routing configuration on GCE and CNI managing a bridge.
- Download and install Terraform.
- Sign up for an account (project) on Google Cloud Platform. There is a free trial.
- Install and initialize the `gcloud` CLI from the Cloud SDK.
- Configure a service account for Terraform to use:

  ```shell
  SA_EMAIL=$(gcloud iam service-accounts --format='value(email)' create k8s-terraform)
  gcloud iam service-accounts keys create account.json --iam-account=$SA_EMAIL
  PROJECT=$(gcloud config list core/project --format='value(core.project)')
  gcloud projects add-iam-policy-binding $PROJECT --member serviceAccount:$SA_EMAIL --role roles/editor
  ```
- Configure Terraform modules:

  ```shell
  terraform get
  ```
- Configure Terraform variables:
  - Start with the provided template:

    ```shell
    cp terraform.tfvars.sample terraform.tfvars
    ```

  - Generate a token:

    ```shell
    python -c 'import random; print("%06x.%016x" % (random.SystemRandom().getrandbits(3*8), random.SystemRandom().getrandbits(8*8)))'
    ```

  - Open `terraform.tfvars` in an editor and fill in the blanks.
- Run `terraform plan` to see what it is planning to do. By default it'll boot 4 n1-standard-1 machines: 1 master and 3 nodes.
- Run `terraform apply` to actually launch everything.
- Run `terraform destroy` to tear everything down.
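If you'd rather not depend on Python, the token can also be generated with `openssl` (an assumption: `openssl` is on your PATH; the target format is the hex `xxxxxx.xxxxxxxxxxxxxxxx` shape produced by the Python one-liner above):

```shell
# Generate a kubeadm-style token: 6 hex chars, a dot, then 16 hex chars.
# `openssl rand -hex N` prints 2*N hex characters.
TOKEN="$(openssl rand -hex 3).$(openssl rand -hex 8)"
echo "$TOKEN"
```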
The API server runs an unsecured endpoint on port 8080 on the master node, bound to localhost only. The easiest way to use the cluster is to SSH in to the master and run `kubectl` there:
```
workstation$ gcloud compute ssh --zone=us-west1-a kube-master
kube-master$ kubectl get nodes
NAME          STATUS    AGE
kube-master   Ready     1h
kube-node-0   Ready     1h
kube-node-1   Ready     1h
kube-node-2   Ready     1h
```
Alternatively, you can create an SSH tunnel to the master and use your local `kubectl`, which should have been installed for you by the Google Cloud SDK:
```shell
# Launch the SSH tunnel in the background
gcloud compute ssh --zone=us-west1-a kube-master -- -L 8080:127.0.0.1:8080 -N &

# Set up and activate a "localhost" context for kubectl
kubectl config set-cluster localhost --server=http://127.0.0.1:8080 --insecure-skip-tls-verify
kubectl config set-context localhost --cluster=localhost
kubectl config use-context localhost
kubectl get nodes
```
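With the tunnel up, any HTTP client can talk to the unsecured endpoint, not just `kubectl`. A minimal sketch using only the Python standard library (it assumes the tunnel above is running and forwarding to 127.0.0.1:8080):

```python
import json
import urllib.request

# The API server as exposed through the local SSH tunnel.
API = "http://127.0.0.1:8080"

def list_node_names(base_url=API):
    """Fetch /api/v1/nodes and return the node names."""
    with urllib.request.urlopen(base_url + "/api/v1/nodes") as resp:
        body = json.load(resp)
    return [item["metadata"]["name"] for item in body.get("items", [])]

if __name__ == "__main__":
    print(list_node_names())
```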
TODO: Document how to copy `/etc/kubernetes/admin.conf` down from `kube-master` and modify/merge it into the local kubectl config. This also requires opening a port in the GCE firewall.
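Until that's written up, here is an untested sketch of the copy/merge part. It assumes a Cloud SDK recent enough to have `gcloud compute scp`, that your SSH user can read `admin.conf` on the master, and it relies on `kubectl`'s standard `KUBECONFIG` path-merging behavior:

```shell
# Copy admin.conf down from the master
# (older Cloud SDKs used `gcloud compute copy-files` instead of `scp`;
#  if the file is root-only on the master, copy it to a readable path first)
gcloud compute scp --zone=us-west1-a kube-master:/etc/kubernetes/admin.conf ./admin.conf

# Merge it into the local kubectl config using KUBECONFIG's merge semantics
KUBECONFIG=$HOME/.kube/config:./admin.conf kubectl config view --flatten > merged.conf
mv merged.conf $HOME/.kube/config
```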