- Before you begin
- Provisioning infrastructure with Terraform
- Running services
- Tracing services
- Running migrations
- Configuring the server
This page explains how to build and deploy servers within the Exposure Notification Reference implementation.
The Exposure Notification Reference implementation includes multiple services.
Each service's `main` package is located in the `cmd` directory.
Each service is deployed in the same way, but may accept different configuration options. Configuration options are specified via environment variables.
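For example, when running a service locally you might set a few variables before starting it. The variables below are illustrative only — each service documents its own options — though the database name and user match the defaults used elsewhere on this page:

```shell
# Illustrative only: each service reads its own set of environment variables.
export DB_NAME="main"           # hypothetical: database to connect to
export DB_USER="notification"   # hypothetical: database user
go run ./cmd/export             # each service's main package lives under cmd/
```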
Service | Folder | Description |
---|---|---|
exposure key server | `cmd/export` | Publishes exposure keys |
federation | `cmd/federation` | gRPC federation requests listener |
federation puller | `cmd/federation-pull` | Pulls federation results from federation partners |
exposure server | `cmd/exposure` | Stores infection keys |
exposure cleanup | `cmd/cleanup-exposure` | Deletes old exposure keys |
export cleanup | `cmd/cleanup-export` | Deletes old exported files published by the exposure key export service |
## Before you begin

To build and deploy the Exposure Notification server services, you need to install and configure the following:

- Download and install the Google Cloud SDK. For more information on installing and setting it up, see the Cloud SDK Quickstarts.
## Provisioning infrastructure with Terraform

You can use Terraform to provision the initial infrastructure, database, service accounts, and first deployment of the services on Cloud Run. Note that Terraform does not manage the Cloud Run services after their initial creation! See Deploying with Terraform for more information.
## Running services

While Terraform performs an initial deployment of the services, it does not manage the Cloud Run services beyond their initial creation. If you make changes to the code, you will need to build, deploy, and promote new services. The general order of operations is:

- Build - the code is bundled into a container image and pushed to a registry.
- Deploy - the container image is deployed onto Cloud Run, but does not yet receive any traffic.
- Promote - a deployed container image begins receiving all or a percentage of traffic.
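As a sketch, the three phases chain together using a shared tag; the project, region, and service values below are placeholders:

```shell
# Build, deploy, then promote one service with a shared datetime tag.
TAG="$(date +%Y%m%d%H%M%S)"    # same format as the default build tag

PROJECT_ID="my-project" SERVICES="export" TAG="$TAG" \
  ./scripts/build

PROJECT_ID="my-project" REGION="us-central1" SERVICES="export" TAG="$TAG" \
  ./scripts/deploy

PROJECT_ID="my-project" REGION="us-central1" SERVICES="export" \
  ./scripts/promote
```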
Build new services by using the script at `./scripts/build`, specifying the following values:

- `PROJECT_ID` (required) - your Google Cloud project ID.
- `SERVICES` (required) - comma-separated list of names of the services to build, or "all" to build all. See the list of services in the table above.
- `TAG` (optional) - tag to use for the images. If not specified, a datetime-based tag of the format YYYYMMDDhhmmss is used.

```shell
PROJECT_ID="my-project" \
SERVICES="export" \
./scripts/build
```
Expect this process to take 3-5 minutes.
Deploy an already-built container using the script at `./scripts/deploy`, specifying the following values:

- `PROJECT_ID` (required) - your Google Cloud project ID.
- `REGION` (required) - region in which to deploy the services.
- `SERVICES` (required) - comma-separated list of names of the services to deploy, or "all" to deploy all. Note: if you specify multiple services, they must use the same tag.
- `TAG` (required) - tag of the deployed image (e.g. YYYYMMDDhhmmss).

```shell
PROJECT_ID="my-project" \
REGION="us-central1" \
SERVICES="export" \
TAG="20200521084829" \
./scripts/deploy
```
Expect this process to take 1-2 minutes.
Promote an already-deployed service to begin receiving production traffic using the script at `./scripts/promote`, specifying the following values:

- `PROJECT_ID` (required) - your Google Cloud project ID.
- `REGION` (required) - region in which to promote the services.
- `SERVICES` (required) - comma-separated list of names of the services to promote, or "all" to promote all. Note: if you specify multiple services, the revision must be "LATEST".
- `REVISION` (optional) - revision of the service to promote, usually the output of a deployment step. Defaults to "LATEST".
- `PERCENTAGE` (optional) - percent of traffic to shift to the new revision. Defaults to "100".

```shell
PROJECT_ID="my-project" \
REGION="us-central1" \
SERVICES="export" \
./scripts/promote
```
Expect this process to take 1-2 minutes.
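Because `PERCENTAGE` defaults to "100", the example above shifts all traffic at once. For a more cautious rollout, you could shift a fraction of traffic first and promote fully once the new revision looks healthy; the values below are placeholders:

```shell
# Send 10% of traffic to the newly deployed revision.
PROJECT_ID="my-project" \
REGION="us-central1" \
SERVICES="export" \
PERCENTAGE="10" \
./scripts/promote

# Later, shift the remaining traffic to it.
PROJECT_ID="my-project" \
REGION="us-central1" \
SERVICES="export" \
./scripts/promote
```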
## Tracing services

To enable distributed tracing, set the following environment variables:

Variable | Values | Comment |
---|---|---|
OBSERVABILITY_EXPORTER | "stackdriver" or "ocagent" | If unset, no exporting is done |
PROJECT_ID | The project ID of the Google Cloud project where the application is deployed | Required if you use "stackdriver" |
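For example, to export traces via the Stackdriver exporter (the project ID is a placeholder):

```shell
export OBSERVABILITY_EXPORTER="stackdriver"
export PROJECT_ID="my-project"   # required by the stackdriver exporter
```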
## Running migrations

To migrate the production database, use the script at `./scripts/migrate`. This script triggers a Cloud Build invocation which uses the Cloud SQL Proxy to run the database migrations, and it uses the following environment variables:

- `PROJECT_ID` (required) - your Google Cloud project ID.
- `DB_CONN` (required) - your Cloud SQL connection name.
- `DB_PASS_SECRET` (required) - the reference to the secret where the database password is stored in Secret Manager.
- `DB_NAME` (default: "main") - the name of the database against which to run migrations.
- `DB_USER` (default: "notification") - the username with which to authenticate.
- `COMMAND` (default: "up") - the migration command to run.

If you created the infrastructure using Terraform, you can get these values by running `terraform output` from inside the `terraform/` directory:

```shell
PROJECT_ID=$(terraform output project)
DB_CONN=$(terraform output db_conn)
DB_PASS_SECRET=$(terraform output db_pass_secret)
```
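Putting it together, a migration invocation might look like the following; the connection name and secret reference are placeholders for your own values:

```shell
PROJECT_ID="my-project" \
DB_CONN="my-project:us-central1:my-instance" \
DB_PASS_SECRET="projects/my-project/secrets/db-password/versions/latest" \
./scripts/migrate
```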
If you did not use the Terraform configurations to provision your server, or if you are running your own Postgres server:

- Download and install the `migrate` tool.
- Construct the database URL for your database. This is usually of the format:

  ```text
  postgres://DB_USER:DB_PASSWORD@DB_HOST:DB_PORT/DB_NAME?sslmode=require
  ```

- Run the migrate command with this database URL:

  ```shell
  migrate \
    -database "YOUR_DB_URL" \
    -path ./migrations \
    up
  ```
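The database URL above can be assembled from its components in the shell; the credentials and host here are placeholders:

```shell
# Placeholder values -- substitute your own connection details.
DB_USER="notification"
DB_PASSWORD="s3cr3t"
DB_HOST="127.0.0.1"
DB_PORT="5432"
DB_NAME="main"

# Assemble the URL in the format the migrate tool expects.
DB_URL="postgres://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}?sslmode=require"
echo "${DB_URL}"
```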
## Configuring the server

This repository includes a configuration tool which provides a browser-based interface for manipulating the database-backed configuration. This admin tool does not have authentication or authorization and should not be deployed on the public Internet!
- Export the database connection parameters:

  ```shell
  export DB_CONN=...
  export DB_USER=...
  export DB_PASSWORD="secret://..."
  export DB_PORT=...
  export DB_NAME=...
  ```

  If you used Terraform to provision the infrastructure:

  ```shell
  cd terraform/
  export DB_CONN=$(terraform output db_conn)
  export DB_USER=$(terraform output db_user)
  export DB_PASSWORD="secret://$(terraform output db_pass_secret)"
  export DB_PORT=5432
  export DB_NAME=$(terraform output db_name)
  cd ../
  ```
- Configure the Cloud SQL proxy. If you are using Cloud SQL, start the proxy locally:

  ```shell
  cloud_sql_proxy -instances=$DB_CONN=tcp:$DB_PORT &
  ```

  Then disable SSL verification:

  ```shell
  # Cloud SQL uses a local proxy and handles TLS communication automatically.
  export DB_SSLMODE=disable
  ```
- Start the admin console:

  ```shell
  go run ./tools/admin-console
  ```

- Open a browser to localhost:8080. Remember, you are editing the live configuration of the database!