
How to install SCF Beta1 / Beta2



Requirements for Kubernetes

The various machines (api, kube, and node) of the kubernetes cluster must be configured in a particular way to support the execution of SCF. These requirements are, in general:

  • Kubernetes API versions 1.5.x-1.6.x
  • Kernel parameters swapaccount=1
  • docker info must not show aufs as the storage driver.
  • kube-dns must be running and fully ready. See section Kube DNS.
  • Either ntp or systemd-timesyncd must be installed and active.
  • The kubernetes cluster must have a storage class SCF can refer to. See section Storage Classes.
  • Docker must be configured to allow privileged containers.
  • Privileged containers must be enabled in kube-apiserver. See https://kubernetes.io/docs/admin/kube-apiserver
  • Privileged containers must be enabled in the kubelet.
  • The TasksMax property of the containerd service definition must be set to infinity.
  • Helm's Tiller has to be installed and active.
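Before running the full check script described below, a few of these host requirements can be spot-checked by hand. A minimal sketch, assuming a systemd-based host with docker installed:

# Kernel parameter swapaccount=1 present on the boot command line?
grep -o 'swapaccount=1' /proc/cmdline

# The storage driver must not be aufs
docker info 2>/dev/null | grep 'Storage Driver'

# TasksMax of the containerd service should report infinity
systemctl show containerd --property TasksMax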

An easy way of setting up a small single-machine kubernetes cluster with all the necessary properties is to use the Vagrant definition in the SCF repository. The details of this approach are explained in https://github.com/SUSE/scf/blob/develop/README.md#deploying-scf-on-vagrant

Verifying Kubernetes

To ease verification of the above requirements, a script (kube-ready-state-check.sh) is provided which contains the necessary checks.

To get help, invoke this script via

kube-ready-state-check.sh -h

The help output in particular lists the various machine categories. When invoked with the name of a machine category (api, kube, or node), e.g.

kube-ready-state-check.sh kube

the script will run the tests applicable to the named category. Positive results are prefixed with Verified:, whereas failed requirements are prefixed with Configuration problem detected:.

Category   Explanation
api        Requirements on the hosts for the kube master nodes (running apiserver)
kube       Requirements of the cluster itself, checked via kubectl
node       Requirements on the hosts for the kube worker nodes (running kubelet)

Kube DNS

The cluster must have an active kube-dns. If you are running CaaSP you can simply use the following command to install it:

kubectl apply \
  -f https://raw.githubusercontent.com/SUSE/caasp-services/b0cf20ca424c41fa8eaef6d84bc5b5147e6f8b70/contrib/addons/kubedns/dns.yaml
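One way to confirm that kube-dns is fully ready (assuming the upstream default label k8s-app=kube-dns) is to check that its pod reports all containers ready:

kubectl get pods --namespace kube-system -l k8s-app=kube-dns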

Storage Classes

The kubernetes cluster must have a storage class SCF can refer to so that its database components have a place for their persistent data.

This class may have any name; the Vagrant setup uses the name persistent.

Important information on storage classes, and how to create and configure them, can be found in the Kubernetes documentation.

Note: the distribution comes with an example storage class persistent of type hostpath, for use with the vagrant box. This is a toy option and should not be used with anything but the vagrant box. Most kube setups will not even support the type hostpath for storage classes, which automatically prevents its use.

To enable hostpath support for testing, the kube-controller-manager must be run with the --enable-hostpath-provisioner command line option.
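A quick way to check whether a running kube-controller-manager was started with this flag (assuming it runs as a regular process on the master host):

pgrep -af kube-controller-manager | grep -o -- '--enable-hostpath-provisioner'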

Cloud Foundry Console UI (Stratos UI)

See https://github.com/SUSE/stratos-ui/releases for distributions of Stratos UI - the Cloud Foundry Console UI. It is also deployed using Helm. Please follow the steps below to see when to install it.

Helm installation

SCF uses Helm charts to deploy on kubernetes clusters. To install Helm itself, see the Helm documentation.

SCF Installation

Downloading the archive

Get the distribution archive from https://github.com/SUSE/scf/releases. Create a directory and extract the archive into it.

wget  https://github.com/SUSE/scf/releases/download/scf-X.Y.Z.linux-amd64.zip  # example url
mkdir deploy
unzip scf-X.Y.Z.linux-amd64.zip -d deploy                                      # example zipfile
cd    deploy
> ls
cert-generator.sh*
helm/
kube/
kube-ready-state-check.sh*
scripts/

We now have the helm charts for SCF and UAA in a subdirectory helm. Additional configuration files are found under kube. The scripts directory contains helpers for cert generation.

Choosing a Storage Class

Choose the name of the kube storage class to use, and create the class if it doesn't exist. See section Storage Classes for important notes. To see if you have a storage class you can use for SCF, run the command: kubectl get storageclasses.

Note: The persistent class created below is of type hostpath, which is only meant for toy examples and is not to be used in production deployments (its use is disabled in Kubernetes by default).

Here we use the hostpath storage class for simplicity of setup. Note that the storageclass apiVersion used in the manifest should be either storage.k8s.io/v1beta1 (for kubernetes 1.5.x) or storage.k8s.io/v1 (for kubernetes 1.6.x and later).

Use kubectl to check your kubernetes server version:

kubectl version --short | grep "Server Version"

For kubernetes 1.5.x:

echo '{"kind":"StorageClass","apiVersion":"storage.k8s.io/v1beta1","metadata":{"name":"persistent"},"provisioner":"kubernetes.io/host-path"}' | kubectl create -f -

For kubernetes 1.6.x and 1.7.x:

echo '{"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"persistent"},"provisioner":"kubernetes.io/host-path"}' | kubectl create -f -

Create custom certificates

# From inside our deploy directory
mkdir certs

# Replace cf-dev.io with your DOMAIN
./cert-generator.sh -d cf-dev.io -n scf -o certs

Note: Choosing a different output directory (certs) here will require matching changes to the commands deploying the helm charts, below.

We now have the certificates required by the various components to talk to each other (SCF internals, UAA internals, as well as SCF to UAA).
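The output directory should now contain, among other files, the two values files referenced by the helm commands later in this guide:

ls certs
# expect to see (among others):
#   scf-cert-values.yaml
#   uaa-cert-values.yaml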

Configuring the deployment

Next create a values.yaml file (the rest of the docs assume filename: scf-config-values.yaml) with the settings required for the install. Copy the below as a template for this file and modify the values to suit your installation.

env:
    # Password for the cluster
    CLUSTER_ADMIN_PASSWORD: changeme

    # Domain for SCF. DNS for *.DOMAIN must point to a kube node's (not the master's)
    # external ip. This must match the value passed to the
    # cert-generator.sh script.
    DOMAIN: cf-dev.io

    # Password for SCF to authenticate with UAA
    UAA_ADMIN_CLIENT_SECRET: uaa-admin-client-secret

    # UAA host/port that SCF will talk to. If you have a custom UAA
    # provide its host and port here. If you are using the UAA that comes
    # with the SCF distribution, simply use the two values below and
    # replace cf-dev.io with the DOMAIN used above.
    UAA_HOST: uaa.cf-dev.io
    UAA_PORT: 2793
kube:
    # The IP address assigned to the kube node pointed to by the domain. The example value
    # here is what the vagrant setup assigns; you will likely need to change it.
    external_ip: 192.168.77.77
    storage_class:
        # Make sure to change the value in here to whatever storage class you use
        persistent: persistent
    # The next line is needed for CaaSP 2, but should _not_ be there for CaaSP 1
    auth: rbac
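Since *.DOMAIN must resolve to the node's external IP, it is worth verifying the wildcard DNS record before deploying. A quick check using the example values above (replace with your own domain; the expected output is the external IP):

dig +short anything.cf-dev.io
# expect: 192.168.77.77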

Deploy Using Helm

The previous section gave a reference to the Helm documentation explaining how to install Helm itself. Remember also that in the Vagrant-based setup helm is already installed and ready.

  • Deploy UAA

    helm install helm/uaa \
        --namespace uaa \
        --values certs/uaa-cert-values.yaml \
        --values scf-config-values.yaml
    
  • With UAA deployed, use Helm to deploy SCF.

    helm install helm/cf \
        --namespace scf \
        --values certs/scf-cert-values.yaml \
        --values scf-config-values.yaml
    
  • Wait for everything to be ready:

    watch -c 'kubectl get pods --all-namespaces'
    

    Stop watching when all pods show state Running and Ready is n/n (instead of k/n, k < n).
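A non-interactive variant of this wait, sketched with awk (it loops until every pod's READY column shows n/n; adjust if your cluster has completed job pods, which never reach that state):

# keep polling while any pod still has unready containers
while kubectl get pods --all-namespaces --no-headers \
      | awk '{split($3,a,"/"); if (a[1]!=a[2]) found=1} END {exit !found}'; do
    sleep 10
done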

Installing the Cloud Foundry UI (Stratos UI)

Stratos UI is also deployed using Helm.

Add the Stratos UI Helm Repository with the command:

helm repo add stratos-ui https://suse.github.io/stratos-ui

Deploy Stratos UI: (do this from the folder where you created the scf-config-values.yaml configuration file)

helm install stratos-ui/console \
    --namespace stratos \
    --values scf-config-values.yaml

This will install Stratos UI using the configuration that you created previously in scf-config-values.yaml.

The UI should be available via HTTPS on port 8443 of the domain that you configured. For example, if your domain was cf-dev.io, you should be able to access the UI in a browser at:

https://cf-dev.io:8443

You should be able to log in with your Cloud Foundry credentials. If you see an upgrade message, please wait up to a minute for the installation to complete.

If you do not wish to use the SCF configuration values, then more information is available on deploying the UI in Kubernetes here - https://github.com/SUSE/stratos-ui/tree/master/deploy/kubernetes.

Note: If you deploy without the SCF configuration you will need to use the Setup UI to provide the UAA configuration. Typical values are:

  • UAA URL: This is composed of https://NAMESPACE.uaa.DOMAIN:2793 (e.g. https://scf.uaa.10.10.10.10.nip.io:2793)
  • Client ID: cf
  • Client Secret: EMPTY (do not fill in this box)
  • Admin Username: User provided value
  • Admin Password: User provided value

Using the Universal Service Broker

# Push the mysql sidecar
cf push msc -o splatform/cf-usb-sidecar-dev-mysql --no-start

# Use a secret key that will be used by the USB to talk to your sidecar
cf set-env msc SIDECAR_API_KEY secret-key

# Set the connection parameters for the mysql sidecar
# This example will connect to the MariaDB instance hosting the CCDB
cf set-env msc SERVICE_MYSQL_HOST mysql-proxy.cf.svc.cluster.local
cf set-env msc SERVICE_MYSQL_PORT 3306
cf set-env msc SERVICE_MYSQL_USER cf-mysql-broker
# (here `k exec` is assumed to be a kubectl wrapper that runs a command in the
# broker pod; adjust the invocation to your environment. The inline command
# extracts the broker DB password from the broker pod's environment.)
cf set-env msc SERVICE_MYSQL_PASS `k exec cf:broker -- env | grep CF_MYSQL_BROKER_DB_PASSWORD | awk -F'=' '{print $2}'`

# Start the sidecar
cf start msc

# Install cf-usb-plugin from https://github.com/SUSE/cf-usb-plugin/releases
# download the zip archive you need, unpack it, then
cf install-plugin ./cf-plugin-usb

# Check USB is OK
cf usb info

# Create a driver endpoint to the mysql sidecar
# Note that the -c ":" is required as a workaround for a known issue
cf usb create-driver-endpoint my-service https://msc.cf-dev.io secret-key -c ":"

# Check the service is available in the marketplace and use it
cf marketplace
cf create-service my-service default mydb
cf services
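From here the new service instance behaves like any other CF service; for example, it can be bound to an application (the app name my-app is illustrative):

cf bind-service my-app mydb
cf restage my-app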

High Availability

To deploy an HA version of SCF, amend the values file you're using with helm install (scf-config-values.yaml above) with the following:

sizing:
  consul:
    count: 3
  nats:
    count: 2
  mysql:
    count: 3
  diego_database:
    count: 2
  router:
    count: 2
  api:
    count: 2
  api_worker:
    count: 2
  etcd:
    count: 3
  diego_brain:
    count: 2
  diego_cc_bridge:
    count: 2
  diego_route_emitter:
    count: 2
  diego_cell:
    count: 2
  diego_access:
    count: 2
  loggregator:
    count: 2
  doppler:
    count: 2
  routing_api:
    count: 2
  cf_usb:
    count: 2
  clock_global:
    count: 2

The HA pods of the roles below will enter a passive state and will not show as ready:

  • diego-brain
  • diego-database
  • routing-api
  • mysql

You can confirm this by looking at the logs inside the container. The logs will state .consul-lock.acquiring-lock.
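For example (the pod name is illustrative; pick one of the passive pods from kubectl get pods):

kubectl logs --namespace scf mysql-1 | grep 'consul-lock.acquiring-lock'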

Known Issues

  • upgrading from a basic deployment to an HA one is not currently possible, because secrets get rotated even though --reuse-values is specified when doing helm upgrade
  • roles that cannot be scaled:
    • mysql-proxy (needs a proper active/passive configuration)
    • tcp-router (no strategy for exposing ports correctly)
    • blobstore (needs shared volume support and an active/passive configuration)
  • some roles follow an active/passive scaling model, meaning all pods except one (the active) will be shown as NOT READY by kubernetes; this is appropriate and expected behavior
  • the resources required to run an HA deployment are considerably higher; for example, running HA in the vagrant box requires at least 24GB memory, 8 VCPUs and fast storage
  • when moving from a basic deployment to an HA one, the platform will be unavailable while the upgrade is happening

Testing the Deployment

  • Basic operation of the deployed SCF can be verified by running the CF smoke tests.

    To invoke the tests, you must first set the DOMAIN parameter in kube/cf/bosh-task/smoke-tests.yml to match your configuration.

    Then run the command

    kubectl create \
       --namespace=scf \
       --filename="kube/cf/bosh-task/smoke-tests.yml"
    
    # Wait for completion
    kubectl logs --follow --namespace=scf smoke-tests
    
  • If the deployed SCF is not intended as a production system then its operation can be verified further by running the CF acceptance tests.

    CAUTION: tests are only meant for acceptance environments, and while they attempt to clean up after themselves, no guarantees are made that they won't change the state of the system in an undesirable way. -- https://github.com/cloudfoundry/cf-acceptance-tests/

    To invoke the tests, you must first set the DOMAIN parameter in kube/cf/bosh-task/acceptance-tests.yaml to match your configuration.

    Then run the command

    kubectl create \
       --namespace=scf \
       --filename="kube/cf/bosh-task/acceptance-tests.yaml"
    
    # Wait for completion
    kubectl logs --follow --namespace=scf acceptance-tests
    

Notes on CaaSP

There are some slight changes when running SCF on CaaSP. The main differences in the configuration are the domain, IP address, and storage class. Related to that, there are additional commands to generate and feed Ceph secrets into the cluster, for use by the storage class:

cat > scf-config-values.yaml <<END
env:
    # Password for the cluster
    CLUSTER_ADMIN_PASSWORD: changeme

    # Domain for SCF. DNS for *.DOMAIN must point to the kube node's
    # external ip. This must match the value passed to the
    # cert-generator.sh script.
    DOMAIN: 10.0.0.154.nip.io

    # Password for SCF to authenticate with UAA
    UAA_ADMIN_CLIENT_SECRET: uaa-admin-client-secret

    # UAA host/port that SCF will talk to. The example values here are
    # for the UAA deployment included with the SCF distribution.
    UAA_HOST: uaa.10.0.0.154.nip.io
    UAA_PORT: 2793
kube:
    # The IP address assigned to the kube node. The example value here
    # is what the vagrant setup assigns
    external_ip: 10.0.0.154
    storage_class:
        persistent: persistent
END

mkdir certs
./cert-generator.sh -d 10.0.0.154.nip.io -n scf -o certs

kubectl create namespace uaa

# Use Ceph admin secret for now, until we determine how to grant appropriate permissions for non-admin client.
kubectl get secret ceph-secret-admin -o json --namespace default | jq ".metadata.namespace = \"uaa\"" | kubectl create -f -

helm install helm/uaa \
    --namespace uaa \
    --values certs/uaa-cert-values.yaml \
    --values scf-config-values.yaml

kubectl create namespace scf
kubectl get secret ceph-secret-admin -o json --namespace default | jq ".metadata.namespace = \"scf\"" | kubectl create -f -

helm install helm/cf \
    --namespace scf \
    --values certs/scf-cert-values.yaml \
    --values scf-config-values.yaml

Removal and Cleanup via helm

First delete the running system at the kube level:

    kubectl delete namespace uaa
    kubectl delete namespace scf

This also removes all the associated volumes.

After that, use helm list to locate the releases for the SCF and UAA charts and helm delete to remove them at the helm level as well.
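For example (the release names are illustrative; take the real ones from the helm list output):

helm list
helm delete --purge gilded-otter   # SCF release
helm delete --purge wild-badger    # UAA release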
