SUSE Cloud Foundry (SCF) is a Cloud Foundry distribution based on the open source version, with several key differences:
- Uses fissile to containerize the CF components, for running on top of Kubernetes (and Docker)
- CF components run on a SUSE Linux Enterprise stemcell
- CF apps can optionally run on a preview of the SUSE Linux Enterprise stack (rootfs + buildpacks)
Fissile has been around for a few years now and its containerization technology is fairly stable; however, deploying directly to Kubernetes is relatively new, as are the SLE stack and stemcell. This means that things are liable to break as development continues. In particular, links and hosting locations are still in flux and will most likely change.
For development testing we have mainly been targeting the following platforms, so they should be a known working quantity:
OS | Virtualization |
---|---|
SLE 15 | libvirt |
Mac OS X Sierra | VirtualBox |
For more production-like deployments we have been targeting bare-metal Kubernetes 1.6.1 (using only 1.5 features), though these deployments currently require the adventurer to be able to debug and problem-solve, which takes knowledge of the components this repo brings together.
- SUSE Cloud Foundry
- Disclaimer
- Table of Contents
- Deploying SCF on Vagrant
- Deploying SCF on Kubernetes
- Deployment Customizations
- Development FAQ
- Where do I find logs?
- How do I clear all data and begin anew without rebuilding everything?
- How do I tear down a cluster on a cloud provider?
- How do I run smoke and acceptance tests?
- `fissile` refuses to create images that already exist. How do I recreate images?
- My vagrant box is frozen. What can I do?
- Can I target the cluster from the host using the `cf` CLI?
- How do I connect to the Cloud Foundry database?
- How do I add a new BOSH release to SCF?
- How do I expose new settings via environment variables?
- How do I bump to a new cf-deployment version?
- How do I bump a BOSH release?
- Can I suspend or resume my vagrant VM?
- How do I develop an upstream PR?
- How do I publish SCF and BOSH images?
- How do I use an authenticated registry for my Docker images?
- Using Persi NFS
- How do I rotate the CCDB secrets?
- CCDB migration squashing
- We recommend running on a machine with more than 16G of RAM for now.

- You must install Vagrant (1.9.5+): https://www.vagrantup.com

- Install the following vagrant plugins:

  - vagrant-libvirt (if using libvirt)

    ```bash
    vagrant plugin install vagrant-libvirt
    ```
Deploying on Vagrant is highly scripted, so there should be very little to do to get a working system.
1. Initial repo check out

   ```bash
   git clone --recurse-submodules https://github.com/SUSE/scf
   ```

2. Building the system

   ```bash
   # Bring the vagrant box up
   vagrant up --provider X # Where X is libvirt | virtualbox. See next section for additional options.

   # Once the vagrant box is up, ssh into it
   vagrant ssh

   # The scf directory you cloned has been mounted into the guest OS, cd into it
   cd scf

   # This runs a combination of bosh & fissile in order to create the docker
   # images and helm charts you'll need. Once this step is done you can see
   # images available via "docker images"
   make vagrant-prep

   # This is the final step, where it will install the uaa helm chart into the
   # 'uaa' namespace and the scf helm chart into the 'cf' namespace.
   make run

   # Watch the status of the pods; when everything is fully ready it should be usable.
   pod-status --watch

   # Currently the api role takes a very long time to do its migrations (~20 mins).
   # To see if it's doing migrations check the logs; if you see messages about
   # migrations please be patient, otherwise see the Troubleshooting guide.
   k logs -f cf:^api-[0-9]
   ```
3. Changing the default STEMCELL and STACK

   The default stemcell and stack are set to SUSE Linux Enterprise. The versions are defined in `bin/common/versions.sh`.

   The `FISSILE_DOCKER_REPOSITORY` environment variable will need to be set, and Docker configured to log in to the repository.

   After changing the stemcell you have to remove the contents of `~vagrant/.fissile/compilation` and `~vagrant/scf/.fissile/compilation` inside the vagrant box. Afterwards recompile scf (for details see section "2. Building the system").

   Example:

   ```bash
   $ cd ~
   $ export FISSILE_DOCKER_REPOSITORY=registry.example.com
   $ docker login ${FISSILE_DOCKER_REPOSITORY} -u username -p password
   $ cd scf
   ```
4. Environment variables to configure `vagrant up` (optional)

   - `VAGRANT_VBOX_BRIDGE`: Set this to the name of an interface to enable bridged networking when using the VirtualBox provider. Turning on bridged networking will allow your vagrant box to receive an IP accessible anywhere on the network. While VirtualBox is able to bridge over an interface without any special networking configuration (and may even do this on OS X), bridged networking may not be supported when the provided interface is a wireless interface. See the VirtualBox docs on bridged networking for more information.

   - `VAGRANT_KVM_BRIDGE`: Set this to the name of your host's Linux bridge interface if you have one configured. If using Wicked as your network manager, you can configure one by setting the config files for your default interface and bridge interface as follows:

     ```
     #default interface:
     BOOTPROTO='none'
     STARTMODE='auto'
     DHCLIENT_SET_DEFAULT_ROUTE='yes'

     #bridged interface:
     DHCLIENT_SET_DEFAULT_ROUTE='yes'
     STARTMODE='auto'
     BOOTPROTO='dhcp'
     BRIDGE='yes'
     BRIDGE_STP='off'
     BRIDGE_FORWARDDELAY='0'
     BRIDGE_PORTS='eth0'
     BRIDGE_PORTPRIORITIES='-'
     BRIDGE_PATHCOSTS='-'
     ```

     For example, if your default interface is named `eth0`, you would edit `/etc/sysconfig/network/ifcfg-eth0` and `/etc/sysconfig/network/ifcfg-br0` with the above settings. Then, after the desired configuration is in place, run `wicked ifreload all` and wait for wicked to apply the changes.

   - `VAGRANT_DHCP`: Set this to any value when using virtual networking (as opposed to bridged networking) in order to let your VM receive an IP via DHCP in the virtual network. If this environment variable is unset, the VM will instead obtain the IP cf-dev.io points to.
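   As an illustration, bringing the box up with bridged networking on libvirt might look like the sketch below (the bridge name `br0` matches the config example above; substitute your own):

   ```bash
   # Assumes a host bridge named br0, configured as described above.
   export VAGRANT_KVM_BRIDGE=br0
   vagrant up --provider libvirt
   ```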
Note: If not every role goes green in `pod-status --watch`, refer to Troubleshooting.
5. Pulling updates

   When you want to pull the latest changes from upstream you should:

   ```bash
   # Pull the changes (or checkout the commit you want):
   git pull
   # Update all submodules to match the checked out commit
   git submodule update --init --recursive
   ```

   Sometimes, when we bump the BOSH release submodules, they move to a different location and you need to run:

   ```bash
   git submodule sync --recursive
   ```

   You might have to run `git submodule update --init --recursive` again after the last command.

   If there are untracked changes in submodule directories you can safely remove them. E.g. a command that will update all submodules and drop any changed or untracked files in them is:

   ```bash
   git submodule update --recursive --force && git submodule foreach --recursive 'git checkout . && git clean -fdx'
   ```

   Make sure you understand what the `git clean` flags mean before you run this.

   Now you need to rebuild the images inside the vagrant box:

   ```bash
   make stop # And wait until all pods are stopped and removed
   make vagrant-prep kube run
   ```
The vagrant box is set up with default certs, passwords, IPs, etc. to make it easier to run and develop on. To access it and try it out, all you should need is to get the cf client and connect to it. Once you've connected with the cf CLI you should be able to do anything you can do with a vanilla Cloud Foundry.

You can get the cf client here: github.com/cloudfoundry/cli

The vagrant box is created by making a network with a static IP on the host. This means that you cannot connect to it from some other box.
```bash
# Attach to the endpoint (self-signed certs in dev mode requires skipping validation).
# cf-dev.io resolves to the static IP that vagrant provisions.
# This DNS resolution may fail on certain DNS providers that block resolution to 192.168.0.0/16.
# Unless you changed the default credentials in the configuration, it is admin/changeme.
cf api --skip-ssl-validation https://api.cf-dev.io
cf login -u admin -p changeme
```
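Once logged in, a quick way to confirm the deployment is functional is to push a small test app. This is a sketch; the org, space, and app names are hypothetical, and any buildpack-compatible app directory will do:

```bash
# Hypothetical org/space/app names; run from a directory containing an app.
cf create-org test-org
cf target -o test-org
cf create-space test-space
cf target -s test-space
cf push my-test-app
```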
Vagrant box deployments typically encounter one of a few problems:
- `uaa` does not come up correctly (constantly not ready in `pod-status`).

  In this case perform the following:

  ```bash
  # Delete everything in the uaa namespace
  helm delete --purge uaa
  kubectl delete namespace uaa

  # Delete the pv related to uaa/mysql-data-mysql-0
  kubectl get pv # Find it
  kubectl delete pv pvc-63aab845-4fe7-11e7-9c8d-525400652dd8

  make uaa-run
  ```
- `api-group` does not come up correctly and is not performing migrations (curl output in logs).

  If `uaa` is not functioning, try the steps above.
- Vagrant under VirtualBox freezing for no obvious reason.

  Try enabling the Use Host I/O Cache option in `Settings->Storage->SATA Controller`.
- Volumes don't get mounted when suspending/resuming the box.

  For now only `vagrant stop` and then `vagrant up` fixes it.
- When restarting the box with either `vagrant reload` or `vagrant stop/up`, some pods never come up automatically.

  You have to do a `make stop` and then `make run` to bring them up.
- Pulling images during any of `vagrant up`, `make vagrant-prep`, or `make docker-deps` fails.

  In order to have access to the internet inside the vagrant box and inside the containers (within the box) you need to enable IP forwarding for both the host and the vagrant box (which is the host for containers).

  To enable temporarily:

  ```bash
  echo "1" | sudo tee /proc/sys/net/ipv4/ip_forward
  ```

  or to do this permanently:

  ```bash
  echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/50-docker-ipv4-ipforward.conf
  ```

  and restart your docker service (or run `vagrant up` again if changed on the host).
SCF is deployed via Helm on Kubernetes. Please see the wiki page for installation instructions if you already have a running Kubernetes cluster.
Name | Effect |
---|---|
`run` | Set up SCF on the current node |
`stop` | Stop SCF on the current node |
`vagrant-box` | Build the Vagrant box image using packer |
`vagrant-prep` | Shortcut for building everything needed for `make run` |
In a standard installation, the domain used by applications pushed to CF is the same as the domain configured for CF.
This document describes how to change this behaviour so that CF and applications use separate domains.
- Follow the basic steps for deploying UAA and SCF.

- When deploying SCF, add a section like

  ```yaml
  bosh:
    instance_groups:
    - name: api-group
      jobs:
      - name: cloud_controller_ng
        properties:
          app_domains:
          - <APPDOMAIN>
  ```

  to the `scf-config-values.yaml` override file. The placeholder `<APPDOMAIN>` has to be replaced with whatever domain the applications should use.
After deployment, use `cf curl /v2/info | grep endpoint` to verify that the CF domain is not `<APPDOMAIN>`. Further, by pushing an application, verify that `<APPDOMAIN>` is printed as the domain used by the application.
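For example (the app name here is hypothetical), after deploying you can check both domains:

```bash
# CF API endpoints should remain on the CF domain:
cf curl /v2/info | grep endpoint

# The route reported for the pushed app should be under <APPDOMAIN>:
cf push my-app
cf app my-app
```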
There are two places to see logs: Monit's logs, and the actual log files of each process in the container.
- Monit logs

  ```bash
  # Normal form using kubectl
  kubectl logs --namespace cf router-3450916350-xb3kf
  # Short form using k
  k logs cf:^router-[0-9]
  ```

- Container process logs

  ```bash
  # Normal form
  kubectl exec -it --namespace cf nats-0 -- env LINES=$LINES COLS=$COLS TERM=$TERM bash
  # Short form
  k ssh :nats
  # After ssh'ing, the logs are all in this directory for each process:
  cd /var/vcap/sys/log
  ```
On the Vagrant box, run the following commands:

```bash
make stop
make run
```
The SCF secret generator creates secrets in the CF and UAA namespaces, and Helm doesn't know about these, which means they won't be deleted when the release is deleted. The best way to remove everything is to run the following commands:
```bash
helm delete --purge ${CF_RELEASE_NAME}
kubectl delete namespace ${CF_NAMESPACE}
helm delete --purge ${UAA_RELEASE_NAME}
kubectl delete namespace ${UAA_NAMESPACE}
```
However, busy systems may encounter timeouts when the release is deleted:

```
$ helm delete --purge scf
E0622 02:27:17.555417   14014 portforward.go:178] lost connection to pod
Error: transport is closing
```
In this case, deleting the StatefulSets before anything else will make the operation more likely to succeed:

```bash
kubectl delete statefulsets --all --namespace ${CF_NAMESPACE}
helm delete --purge ${CF_RELEASE_NAME}
kubectl delete namespace ${CF_NAMESPACE}
kubectl delete statefulsets --all --namespace ${UAA_NAMESPACE}
helm delete --purge ${UAA_RELEASE_NAME}
kubectl delete namespace ${UAA_NAMESPACE}
```
Note that this needs kubectl v1.9.6 or newer for the `delete statefulsets` command to work.
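A quick way to check the client version before attempting this:

```bash
# Needs v1.9.6 or newer for `kubectl delete statefulsets`.
kubectl version --short
```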
On the Vagrant box, when `pod-status` reports all roles are running, enable `diego_docker` support with `cf enable-feature-flag diego_docker` and execute the following commands:
```bash
make smoke        # Cloud Foundry smoke tests
make brain        # SCF-specific additional acceptance tests
make scaler-smoke # Auto-scaler smoke tests
make cats         # Cloud Foundry acceptance tests
```
Deploy `acceptance-tests-brain` as above, but first modify the environment to include `INCLUDE=pattern` or `EXCLUDE=pattern`. For example, to run just `005_sso_test.sh` and `014_sso_authenticated_passthrough_test.sh`, you could add `INCLUDE` with a value of `sso`.
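Assuming the environment variable is passed through to the test run by the make target (an assumption; set it in whatever way your deployment passes environment to the tests), that could look like:

```bash
# Hypothetical invocation: run only the tests whose names match "sso".
INCLUDE=sso make brain
```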
It is also possible to run custom tests by mounting them at the `/tests` mountpoint inside the container. The mounted tests will be combined with the bundled tests. However, to do so you will need to run the container manually via docker. To exclude the bundled tests, match against names starting with 3 digits followed by an underscore (as in `EXCLUDE=\b\d{3}_`), or explicitly select only the mounted tests with `INCLUDE=^/tests/`.
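A minimal sketch of such a manual run; the image name is a placeholder (check `docker images` for the actual name of the brain-tests image in your environment), and the local test directory is hypothetical:

```bash
# <brain-tests-image> is a placeholder, not a real image name.
docker run --rm \
  -v "$PWD/my-tests:/tests:ro" \
  -e INCLUDE='^/tests/' \
  <brain-tests-image>
```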
Run `make/tests acceptance-tests env.CATS_SUITES="-suite,+suite" env.CATS_FOCUS="regular expression"` directly. Each suite is separated by a comma. The modifiers apply until the next modifier is seen, and have the following meanings:
Modifier | Meaning |
---|---|
`+` | Enable the following suites |
`-` | Disable the following suites |
`=` | Disable all suites, and enable the following suites |
The `CATS_FOCUS` parameter is passed to ginkgo as a `-focus` parameter.
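For instance, the invocation below disables all suites, re-enables two, and focuses on matching specs. The suite names are illustrative; consult the CATS documentation for the real list:

```bash
# Run only the "routing" and "apps" suites (illustrative names),
# focusing on specs that mention "route services".
make/tests acceptance-tests env.CATS_SUITES="=routing,+apps" env.CATS_FOCUS="route services"
```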
On the Vagrant box, run the following commands:

```bash
cd ~/scf

# Stop gracefully.
make stop

# Delete all fissile images.
docker rmi $(fissile show image)

# Re-create the images and then run them.
make images run
```
Try each of the following solutions sequentially:

- Run the `vagrant reload` command.
- Run the `vagrant halt && vagrant reload` command.
- Manually stop the virtual machine and then run the `vagrant reload` command.
- Run the `vagrant destroy -f && vagrant up` command and then run `make vagrant-prep run` on the Vagrant box.
You can target the cluster on the hardcoded `cf-dev.io` address assigned to a host-only network adapter. You can access any URL or endpoint that references this address from your host.
- Use the role manifest to expose the port for the mysql proxy role. This is done by adding the key `public: true` to the `pxc-mysql-proxy` port in `properties.bosh_containerization.ports` of job `proxy` in instance_group `mysql-proxy`.
- With that, the MySQL instance is exposed at `cf-dev.io:3306`.
- The username is: `ccadmin`.
- The password can be retrieved from the environment variable `CC_DATABASE_PASSWORD` found in the `api-group` pod.
Basic access is then achieved using

```bash
mysql --database ccdb --user=ccadmin --port=3306 --host=cf-dev.io --password=...
```

If `mysqldump` is available, the schema can be retrieved via

```bash
mysqldump (conn+auth-as-above) --no-data --single-transaction ccdb
```

or

```bash
mysqldump (conn+auth-as-above) --no-data --single-transaction ccdb | grep -v '^/\*'
```

to remove the comments holding dump action tracing.
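To retrieve that password from the running pod, something like the following should work; the pod name `api-group-0` follows the naming used elsewhere in this document:

```bash
# Read the CC database password from the api-group pod's environment.
kubectl exec --namespace cf api-group-0 -- bash -c 'echo "$CC_DATABASE_PASSWORD"'
```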
- Edit the `role-manifest.yml`:
  - Add the BOSH release information to the `releases:` section (see the sketch after this list)
  - Add new roles or change existing ones
  - Add exposed environment variables (yaml path: `/variables`).
  - Add configuration templates (yaml path: `/configuration/templates` and yaml path: `/roles/*/configuration/templates`).
- Add development defaults for your configuration settings to `~/scf/bin/settings/settings.env`.
- Add any opinions (static defaults) and dark opinions (configuration that must be set by user) to `./container-host-files/etc/scf/config/opinions.yml` and `./container-host-files/etc/scf/config/dark-opinions.yml`, respectively.
- Test the changes by running the `make compile images run` command.
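For orientation, a `releases:` entry typically carries the release name, source URL, version, and SHA. The entry below is purely hypothetical; copy the exact field layout from the existing entries in `role-manifest.yml`:

```yaml
# Hypothetical entry; mirror the format of the existing entries.
releases:
- name: my-release
  url: https://bosh.io/d/github.com/example/my-release?v=1.2.3
  version: "1.2.3"
  sha1: 0123456789abcdef0123456789abcdef01234567
```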
- Edit `./container-host-files/etc/scf/config/role-manifest.yml`:

  - Add the new exposed environment variables (yaml path: `/variables`).
  - Add or change configuration templates:
    - yaml path: `/configuration/templates`
    - yaml path: `/roles/*/configuration/templates`

- Add development defaults for your new settings in `~/scf/bin/settings/settings.env`.

- Rebuild the role images that need this new setting:

  ```bash
  docker stop <role>
  docker rmi -f fissile-<role>:<tab-for-completion>
  make images run
  ```

  Tip: If you do not know which roles require your new settings, you can use the following catch-all:

  ```bash
  make stop
  docker rmi -f $(fissile show image)
  make images run
  ```
- Run `tooling/bin/import-bosh-releases <cf-deployment-version>`.
- Update `bin/common/version.sh` to record the new `CF_VERSION`.
- Run `make diff-releases` to check the changed BOSH properties; see the next section for details.
- Run `tooling/bin/check-uaa-clients`.
- Bump the Cloud Foundry Acceptance Tests (CATs) submodule to match the new `CF_VERSION` before testing.

Note: Because this process involves downloading and compiling release(s), it may take a long time.
- In the manifest, update the version and SHA of the release(s).

- Compare the BOSH releases:

  ```bash
  make diff-releases
  ```

  This command will print all changes to releases, telling us what properties have changed (added, removed, changed descriptions and values, ...).

  Note: don't commit the changes to the releases before you run the diff target.

- Act on configuration changes:

  Important: If you are not sure how to treat a configuration setting, discuss it with the SCF team.

  For any configuration changes discovered in the previous step, you can do one of the following:

  - Keep the defaults in the new specification.
  - Add an opinion (static defaults) to `./container-host-files/etc/scf/config/opinions.yml`.
  - Add a template and an exposed environment variable to `./container-host-files/etc/scf/config/role-manifest.yml`.

  Define any secrets in the dark opinions file `./container-host-files/etc/scf/config/dark-opinions.yml` and expose them as environment variables.

- Evaluate role changes:

  - Consult the release notes of the new version of the release.
  - If there are any role changes, discuss them with the SCF team, then follow steps 3 and 4 from this guide.

- Bump the Cloud Foundry Acceptance Tests (CATs) submodule to match the new `CF_VERSION`:

  ```bash
  cd src/scf-release/src/github.com/cloudfoundry/cf-acceptance-tests
  git checkout <new-cf-version>
  ```

  Note:

  - If a remote branch for `new-cf-version` doesn't exist, bump it to the closest previous version available.
  - Run `git branch -a --sort=-committerdate | grep /cf` to check available CF release branches.

- Test the release by running the `make compile images run` command.

- Before committing the tested release, update the line `export CF_VERSION=...` in `bin/common/version.sh` to the new CF version.

- Clean up the diff work dir (`/tmp/scf-releases-diff`).
- Run the `vagrant reload` command.
- Run the `make run` command.
- If our submodules are close to the `HEAD` of upstream and no merge conflicts occur, follow the steps described here.
- If merge conflicts occur, or if the component is referenced as a submodule and is not compatible with the parent release, work with the SCF team to resolve the issue on a case-by-case basis.
- Ensure that the Vagrant box is running.

- `ssh` into the Vagrant box.

- To tag the images into the selected registry and to push them, run the `make tag publish` command.

- This target uses the `make` variables listed below to construct the image names and tags:

  Variable | Default | Meaning |
  ---|---|---|
  `IMAGE_REGISTRY` | empty | The name of the trusted registry to publish to |
  `IMAGE_PREFIX` | `scf` | The prefix to use for image names (must not be empty) |
  `IMAGE_ORG` | `splatform` | The organization in the image registry |
  `BRANCH` | current branch | The tag to use for the images |

- To publish to the standard trusted registry, run the `make tag publish` command, for example:

  ```bash
  make tag publish IMAGE_REGISTRY=docker.example.com/
  ```
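Combining several of the variables from the table above (the registry, organization, and branch values here are hypothetical):

```bash
# Hypothetical values; image names are built from IMAGE_ORG and IMAGE_PREFIX,
# and tagged with BRANCH.
make tag publish IMAGE_REGISTRY=docker.example.com/ IMAGE_ORG=myorg BRANCH=my-feature-branch
```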
For testing purposes we can create an authenticated registry right inside the Vagrant box; the instructions work just the same with a pre-existing local registry.

The environment variables must be exported before changing into the `scf/` directory. Otherwise `direnv` will remove the settings when switching to the `src/uaa-fissile-release/` dir and back:
```bash
vagrant ssh
export FISSILE_DOCKER_REGISTRY=registry.cf-dev.io:5000
export FISSILE_DOCKER_USERNAME=admin
export FISSILE_DOCKER_PASSWORD=changeme
cd scf
time make vagrant-prep
```
`make secure-registries` will disallow access to insecure registries and register the internal CA cert before restarting the docker daemon. `make registry` will create a local docker registry, re-using the router_ssl certs and using basic auth. `make publish` will push all images to this registry:
```bash
make secure-registries
make registry
docker login -u $FISSILE_DOCKER_USERNAME -p $FISSILE_DOCKER_PASSWORD $FISSILE_DOCKER_REGISTRY
make publish
docker logout $FISSILE_DOCKER_REGISTRY
```
Log out to make sure that kube is using the registry credentials from the helm chart and not the cached docker session.
Now delete all the local copies of the images. `direnv allow` is required to call fissile from the UAA directory, and `FISSILE_REPOSITORY` needs to be overridden from the inherited `scf` setting:
```bash
fissile show image | xargs docker rmi
cd src/uaa-fissile-release/
direnv allow
FISSILE_REPOSITORY=uaa fissile show image | xargs docker rmi
docker images
cd -
```
Now create an SCF and UAA instance via the helm chart and confirm that all images are fetched correctly. Run smoke tests for final verification:
```bash
make run
pod-status --watch
docker images
make smoke
```
If the registry API needs to be accessed via curl, then it is easier to just use basic auth, which can be requested by setting:
```bash
...
export FISSILE_DOCKER_AUTH=basic
make registry
curl -u ${FISSILE_DOCKER_USERNAME}:${FISSILE_DOCKER_PASSWORD} https://registry.cf-dev.io:5000/v2/
```
```bash
# Enable NFS modules
sudo modprobe nfs
sudo modprobe nfsd

docker run -d --name nfs \
  -v "[SOME_DIR_YOU_WANT_TO_SHARE_ON_YOUR_HOST]:/exports/foo" \
  -p 111:111/tcp \
  -p 111:111/udp \
  -p 662:662/udp \
  -p 662:662/tcp \
  -p 875:875/udp \
  -p 875:875/tcp \
  -p 2049:2049/udp \
  -p 2049:2049/tcp \
  -p 32769:32769/udp \
  -p 32803:32803/tcp \
  -p 892:892/udp \
  -p 892:892/tcp \
  --privileged \
  splatform/nfs-test-server /exports/foo
```
- Security group JSON file (nfs-sg.json). Replace `<destination_ip>` with the address returned by the command `getent hosts "cf-dev.io" | awk 'NR==1{print $1}'`:
```json
[
  {
    "destination": "<destination_ip>",
    "protocol": "tcp",
    "ports": "111,662,875,892,2049,32803"
  },
  {
    "destination": "<destination_ip>",
    "protocol": "udp",
    "ports": "111,662,875,892,2049,32769"
  }
]
```
```bash
# Create the security group - JSON above
cf create-security-group nfs-test nfs-sg.json

# Bind security groups for containers that run apps
cf bind-running-security-group nfs-test

# Bind security groups for containers that stage apps
cf bind-staging-security-group nfs-test
```
```bash
git clone https://github.com/cloudfoundry/persi-acceptance-tests.git
cd persi-acceptance-tests/assets/pora
cf push pora --no-start

# Enable the Persi NFS service
cf enable-service-access persi-nfs

# Create a service and bind it
EXTERNAL_IP=$(getent hosts "cf-dev.io" | awk 'NR==1{print $1}')
cf create-service persi-nfs Existing myVolume -c "{\"share\":\"${EXTERNAL_IP}/exports/foo\"}"
cf bind-service pora myVolume -c '{"uid":"1000","gid":"1000"}'

# Start the app
cf start pora

# Test the app is available
curl pora.cf-dev.io

# Test the app can write
curl pora.cf-dev.io/write
```
The Cloud Controller Database encrypts sensitive information like passwords. By default, the encryption key is generated by SCF. If it is compromised and needs to be rotated, new keys can be added. Note that existing encrypted information will not be updated automatically; the encrypted information must be set again to have it re-encrypted with the new key. The old key cannot be dropped until all references to it are removed from the database.
Updating these secrets is a manual process:

- Create a file `new-key-values.yaml` with content of the form:

  ```yaml
  env:
    CC_DB_CURRENT_KEY_LABEL: new_key
  secrets:
    CC_DB_ENCRYPTION_KEYS:
      new_key: "<new-key-value-goes-here>"
  ```
- Use `helm upgrade "${CF_NAMESPACE}" "${CF_CHART}" ... --values new-key-values.yaml` to import the above data into the cluster. This restarts relevant pods with the new information from step 1.

  - The variable `CF_NAMESPACE` contains the name of the namespace the SCF chart was deployed into.
  - The variable `CF_CHART` contains the name of the SCF chart.
  - The `...` placeholder stands for the standard set of options needed to properly upgrade an SCF deployment, as per the main documentation.
- Perform the actual rotation via:

  ```bash
  # Change the encryption key in the config file:
  $ kubectl exec --namespace cf api-group-0 -- bash -c 'sed -i "/db_encryption_key:/c\\db_encryption_key: \"$(echo $CC_DB_ENCRYPTION_KEYS | jq -r .new_key)\"" /var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml'

  # Run the rotation for the encryption keys:
  $ kubectl exec --namespace cf api-group-0 -- bash -c 'export PATH=/var/vcap/packages/ruby-2.4/bin:$PATH ; export CLOUD_CONTROLLER_NG_CONFIG=/var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml ; cd /var/vcap/packages/cloud_controller_ng/cloud_controller_ng ; /var/vcap/packages/ruby-2.4/bin/bundle exec rake rotate_cc_database_key:perform'
  ```
When everything works correctly, the first command will not generate any output, while the second command will dump a series of (JSON-formatted) log entries describing its progress in rotating the keys for the various CC models.
Note that keys should be appended to the existing secret to be sure existing environment variables can be decoded. Any operator can check which keys are in use by accessing the `ccdb`. If the `encryption_key_label` is empty, the default generated key is still being used:
```
$ kubectl -n cf exec mysql-0 -t -i -- /bin/bash -c 'mysql -p${MYSQL_ADMIN_PASSWORD}'
MariaDB [(none)]> select name, encrypted_environment_variables, encryption_key_label from ccdb.apps;
+--------+--------------------------------------------------------------------------------------------------------------+----------------------+
| name   | encrypted_environment_variables                                                                              | encryption_key_label |
+--------+--------------------------------------------------------------------------------------------------------------+----------------------+
| go-env | XF08q9HFfDkfxTvzgRoAGp+oci2l4xDeosSlfHJUkZzn5yvr0U/+s5LrbQ2qKtET0ssbMm3L3OuSkBnudZLlaCpFWtEe5MhUe2kUn3A6rUY= | key0                 |
+--------+--------------------------------------------------------------------------------------------------------------+----------------------+
1 row in set (0.00 sec)
```
For example, if keys were being rotated again, the secret would become:

```bash
SECRET_DATA=$(echo "{key0: abc-123, key1: def-456}" | base64)
```

and `CC_DB_CURRENT_KEY_LABEL` would be updated to match the new key.
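Expressed in the same `new-key-values.yaml` format used in step 1, a second rotation would look roughly like this (the key values mirror the illustrative ones above):

```yaml
# Illustrative values; key0 is the previous key, key1 the new one.
env:
  CC_DB_CURRENT_KEY_LABEL: key1
secrets:
  CC_DB_ENCRYPTION_KEYS:
    key0: "abc-123"
    key1: "def-456"
```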
The `ccdb` database contains several tables with encrypted information:
- apps: environment variables
- buildpack_lifecycle_buildpacks: buildpack URLs may contain passwords
- buildpack_lifecycle_data: buildpack URLs may contain passwords
- droplets: may contain docker registry passwords
- env_groups: environment variables
- packages: may contain docker registry passwords
- service_bindings: contains service credentials
- service_brokers: contains service credentials
- service_instances: contains service credentials
- service_keys: contains service credentials
- tasks: environment variables
To ensure the encryption key is updated, the original command (or its `update-` equivalent) can be run again with the same parameters. Some objects need to be deleted and recreated to update the label.
- apps: Run `cf set-env` again.
- buildpack_lifecycle_buildpacks, buildpack_lifecycle_data, droplets: `cf restage` the app.
- packages: `cf delete`, then `cf push` the app (Docker apps with registry password).
- env_groups: Run `cf set-staging-environment-variable-group` or `cf set-running-environment-variable-group` again.
- service_bindings: Run `cf unbind-service` and `cf bind-service` again.
- service_brokers: Run `cf update-service-broker` with the appropriate credentials.
- service_instances: Run `cf update-service` with the appropriate credentials.
- service_keys: Run `cf delete-service-key` and `cf create-service-key` again.
- tasks: While tasks have an encryption key label, they are generally meant to be a one-off event, and left to run to completion. If a task is still running, it could be stopped with `cf terminate-task`, then run again with `cf run-task`.
As we tend to develop using from-scratch databases, we run CCDB migrations more often than is typical. Additionally, it appears that (due to the use of MySQL and its lack of transactional support around schema changes) we have a high failure rate when doing the initial database migration from nothing. Given that we only support upgrades from the previous version, we have implemented a patch to squash all initial database migrations. To update this patch:
- Remove the patch (so that we can correctly generate a new one).
- Deploy SCF until it is ready.
- Apply `0001-db-migration-add-script-to-squash-DB-migrations.patch` to the `api-group` container.
- Follow the instructions at the top of the patch file (updating timestamps).
- Run `rake db:squash` (as noted in the patch file) to generate the new squashing patch.
- Update `0001-db-migration-add-script-to-squash-DB-migrations.patch` with the new timestamps.
It may be a good idea to use `mysqldump` to confirm that the database schemas before and after the new patch match.
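A minimal sketch of that comparison, reusing the connection and auth options from the "How do I connect to the Cloud Foundry database?" section (elided here as before):

```bash
# Dump the schema before applying the new patch (add connection/auth options):
mysqldump --no-data --single-transaction ccdb > schema-before.sql
# Redeploy with the new squashing patch, then dump again:
mysqldump --no-data --single-transaction ccdb > schema-after.sql
# The dumps should match apart from comments and timestamps:
diff schema-before.sql schema-after.sql
```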