# LINC | WebKNOSSOS Deployment

This document describes how to deploy a new version of LINC | WebKNOSSOS via AWS EC2.

### Create an instance in AWS EC2 with at least 32GB of memory

Proceed to AWS and create an Amazon Linux instance:

- r5.2xlarge is the suggested instance type
- x86_64 is the suggested architecture
- Ensure that ports 80 and 443 are open
- Ensure that the instance is reachable via a public IP address

### Connect the instance to a Route 53 domain record

Proceed to Route 53 and create an A record for the desired domain, pointing to the public IP address of the EC2 instance.
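
If you prefer the CLI to the console, the same record can be created with a Route 53 change batch. A sketch -- the domain, IP, and hosted-zone ID below are placeholders, not values from this deployment:

```shell
# Placeholders -- substitute your own domain and the instance's public IP.
DOMAIN="webknossos.example.com"
PUBLIC_IP="203.0.113.10"

# Write the change batch for an A record pointing the domain at the instance.
cat > change-batch.json <<EOF
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "${DOMAIN}",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "${PUBLIC_IP}" }]
      }
    }
  ]
}
EOF

# Requires AWS credentials and the hosted-zone ID for your domain:
# aws route53 change-resource-record-sets --hosted-zone-id "$ZONE_ID" --change-batch file://change-batch.json
```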

### Return to AWS EC2 and SSH onto the instance

Once the instance is running, SSH onto the instance.

First, install the appropriate dependencies -- you'll need Docker and Docker Compose (and most likely git and vim for file management):

```shell
sudo yum install docker git vim -y

sudo service docker start

sudo curl -L "https://github.com/docker/compose/releases/download/$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep -oP '"tag_name": "\K(.*)(?=")')/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

sudo chmod +x /usr/local/bin/docker-compose
```

Next, run the following commands (these steps mostly follow https://docs.webknossos.org/webknossos/installation.html):

```shell
sudo mkdir -p /home/ec2-user/opt/webknossos
cd /home/ec2-user/opt/webknossos

sudo mkdir certs certs-data

sudo wget https://github.com/scalableminds/webknossos/raw/master/tools/hosting/docker-compose.yml

sudo mkdir binaryData

sudo chown -R 1000:1000 binaryData

sudo touch nginx.conf
```

Next, you'll need to issue an SSL certificate directly on the server -- `certbot` is used here (replace `<enter-your-website-url>` and the email address with your own values):

```shell
sudo docker run --rm -p 80:80 -v $(pwd)/certs:/etc/letsencrypt -v $(pwd)/certs-data:/data/letsencrypt certbot/certbot certonly --standalone -d <enter-your-website-url> --email [email protected] --agree-tos --non-interactive
```
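
Let's Encrypt certificates expire after roughly 90 days, so you'll eventually want a renewal path. A sketch of a helper script, under the assumption that the `nginx-proxy` container (configured below) holds port 80 and must be stopped briefly for certbot's standalone mode:

```shell
# Write a renewal helper; paths follow the layout created above.
cat > renew-certs.sh <<'EOF'
#!/bin/sh
cd /home/ec2-user/opt/webknossos
# Standalone mode needs port 80, which nginx-proxy holds -- stop it briefly.
docker stop nginx-proxy
docker run --rm -p 80:80 \
  -v "$(pwd)/certs:/etc/letsencrypt" \
  -v "$(pwd)/certs-data:/data/letsencrypt" \
  certbot/certbot renew
docker start nginx-proxy
EOF
chmod +x renew-certs.sh

# Example cron entry (1st of the month, 3am):
# 0 3 1 * * /home/ec2-user/opt/webknossos/renew-certs.sh
```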

You'll need to populate the `nginx.conf`. Replace `<enter-your-website-url>` with the `A` record name you used in Route 53 (e.g. `webknossos.lincbrain.org`).

```nginx
events {}

http {
    # Main server block for the webknossos application
    server {
        listen 80;
        server_name <enter-your-website-url>;

        location /.well-known/acme-challenge/ {
            root /data/letsencrypt;
        }

        location / {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl http2;
        server_name <enter-your-website-url>;

        ssl_certificate /etc/letsencrypt/live/<enter-your-website-url>/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/<enter-your-website-url>/privkey.pem;

        # webknossos-specific overrides
        client_max_body_size 0;
        proxy_read_timeout 3600s;

        location / {
            set $cors '';
            if ($http_origin ~* (https://staging--lincbrain-org\.netlify\.app|https://.*\.lincbrain\.org|https://lincbrain\.org)) {
                set $cors 'true';
            }

            if ($cors = 'true') {
                add_header 'Access-Control-Allow-Origin' "$http_origin" always;
                add_header 'Access-Control-Allow-Credentials' 'true' always;
                add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
                add_header 'Access-Control-Allow-Headers' 'Accept, Content-Type, X-Requested-With, Authorization, Cookie' always;
            }

            if ($request_method = 'OPTIONS') {
                add_header 'Access-Control-Allow-Origin' "$http_origin" always;
                add_header 'Access-Control-Allow-Credentials' 'true' always;
                add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
                add_header 'Access-Control-Allow-Headers' 'Accept, Content-Type, X-Requested-With, Authorization, Cookie' always;
                add_header 'Content-Length' 0 always;
                add_header 'Content-Type' 'text/plain' always;
                return 204;
            }

            proxy_pass http://webknossos-webknossos-1:9000;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Cookie $http_cookie;
            proxy_set_header Transfer-Encoding "";
            proxy_buffering off;

            proxy_hide_header Access-Control-Allow-Origin;
            proxy_hide_header Access-Control-Allow-Credentials;
            proxy_hide_header Access-Control-Allow-Methods;
            proxy_hide_header Access-Control-Allow-Headers;
        }
    }

    # Separate server block for serving the binaryData directory
    server {
        listen 8080;
        server_name <enter-your-website-url>;

        location /binaryData/ {
            alias /home/ec2-user/opt/webknossos/binaryData/;
            autoindex on;
            autoindex_exact_size off;
            autoindex_localtime on;
            allow all;
        }
    }
}
```
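
Rather than hand-editing each occurrence, `sed` can substitute the placeholder in one pass. A self-contained sketch using a one-line sample file; on the server you'd run the same `sed` line against the real `nginx.conf`:

```shell
# Demo file standing in for the real nginx.conf:
printf 'server_name <enter-your-website-url>;\n' > nginx.conf.sample

DOMAIN="webknossos.example.com"   # placeholder -- use your Route 53 record
sed -i "s|<enter-your-website-url>|${DOMAIN}|g" nginx.conf.sample

cat nginx.conf.sample   # -> server_name webknossos.example.com;
```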

You'll next want to alter the `docker-compose.yml` pulled earlier via `wget`.

Remove the `nginx-letsencrypt` service, and alter the `nginx` service as follows:

```yaml
nginx-proxy:
  image: nginx:latest
  container_name: nginx-proxy
  ports:
    - "8080:8080"
    - "80:80"
    - "443:443"
  volumes:
    - ./nginx.conf:/etc/nginx/nginx.conf:ro
    - ./certs:/etc/letsencrypt
    - /home/ec2-user/opt/webknossos/binaryData:/home/ec2-user/opt/webknossos/binaryData:ro
  depends_on:
    - webknossos
```

`nginx` should now be reachable via HTTPS once the `webknossos` API is running.

Lastly, you'll want to start the API and supporting containers:

```shell
DOCKER_TAG=xx.yy.z PUBLIC_HOST=webknossos.example.com [email protected] \
docker compose up -d webknossos nginx-proxy
```
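
Rather than exporting these inline each time, the variables can live in an env file (the clone-restore steps later in this document pass `--env-file env.txt`). A sketch with placeholder values:

```shell
# Placeholder values -- substitute your release tag, domain, and email.
cat > env.txt <<'EOF'
DOCKER_TAG=xx.yy.z
PUBLIC_HOST=webknossos.example.com
[email protected]
EOF

# Then:
# docker compose --env-file env.txt up -d webknossos nginx-proxy
```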

You can check the health of the containers via:

```shell
docker ps

# or

docker logs -f <container-id>
```

## Backups

### FossilDB

FossilDB is a scalableminds database that builds on the open-source RocksDB.

Steps / commands for a FossilDB backup:

1. SSH into the EC2 instance
2. Grab `fossildb-client` via `docker pull scalableminds/fossildb-client:master`
3. Determine the internal Docker network that the `fossildb` instance is running in within the Dockerized setup: `docker inspect -f '{{range .NetworkSettings.Networks}}{{.NetworkID}} {{end}}' webknossos-fossildb-1`
4. `docker run --network <network-id> scalableminds/fossildb-client:master webknossos-fossildb-1 backup` creates the backup
5. The backup is stored in `/home/ec2-user/opt/webknossos/persistent/fossildb/backup`
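
The steps above can be collected into a small script (in the spirit of the `fossil_db_backup.sh` cronjob mentioned below). This sketch only writes the helper; the S3 bucket name is a placeholder, and the container name and backup path follow the listing above:

```shell
cat > fossildb-backup-sketch.sh <<'EOF'
#!/bin/sh
# Sketch of the FossilDB backup steps; <your-backup-bucket> is a placeholder.
set -e
NETWORK_ID=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' webknossos-fossildb-1)
docker run --network "$NETWORK_ID" scalableminds/fossildb-client:master webknossos-fossildb-1 backup
STAMP=$(date +%F_%H-%M-%S)
tar -czf "fossildb_backup_${STAMP}.tar.gz" -C /home/ec2-user/opt/webknossos/persistent/fossildb backup
aws s3 cp "fossildb_backup_${STAMP}.tar.gz" "s3://<your-backup-bucket>/fossildb_backups/"
EOF
chmod +x fossildb-backup-sketch.sh
```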

## Creating a new WebKNOSSOS with pre-existing backups

There are three different components that must be taken into account for a WebKNOSSOS clone:

- mounted Docker volumes -- represented by the `binaryData` and `persistent` directories in the WebKNOSSOS file structure
  - exported to AWS S3 via the `docker_volumes_backup.sh` cronjob script
- FossilDB data (managed via `fossildb-client restore` commands)
  - exported to AWS S3 via the `fossil_db_backup.sh` cronjob script
- PostgresDB data (managed via `pg_dump` and `pg_restore` commands)
  - exported to AWS S3 via the `postgres_backup.sh` cronjob script

When setting up a new clone, first follow the standard deployment steps above, **however** do not create the `binaryData` folder.

You'll first want to restore the Docker volumes -- contained in the `webknosos_backups/` S3 subdirectory of wherever your cron jobs send the compressed backups.

Copy the appropriate assets from S3 to the EC2 instance via `aws s3 cp <backup-bucket-path> <destination>`.

For example:

```shell
aws s3 cp s3://linc-brain-mit-staging-us-east-2/fossildb_backups/backup_2024-08-20_02-00-02.tar.gz ./backup_2024-08-20_02-00-02.tar.gz
```

Once you decompress and extract the files (e.g. `tar -xvzf backup_2024-08-20_02-00-02.tar.gz -C /home/ec2-user/opt/webknossos`), you are ready to proceed; however, ensure that the `persistent` and `binaryData` folders from the extracted files are in the same directory as your `docker-compose.yml` file.
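
The tar flag pairing is easy to get backwards: `-c` creates an archive, while `-x` extracts one (and `-z` handles the gzip compression in both directions). A self-contained round trip with throwaway demo directories:

```shell
# Round trip: pack a directory tree, then unpack it elsewhere.
mkdir -p demo/persistent demo/binaryData
echo "hello" > demo/persistent/sample.txt

tar -czf demo_backup.tar.gz -C demo .     # -c creates (what the backup cron does)
mkdir -p restored
tar -xzf demo_backup.tar.gz -C restored   # -x extracts (what you do on the clone)

ls restored/persistent   # -> sample.txt
```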

Next, you want to restore the `fossildb` instance -- this can simply be done via the `docker-compose run fossil-db-restore` command.

Almost there! You'll next want to bring up the remainder of the WebKNOSSOS API (along with the nginx-proxy, postgres, etc.) via `docker-compose --env-file env.txt up -d webknossos nginx-proxy`.

Notably, this will bring up the `postgres` container (however, we've yet to restore its data!). Thus you'll want to:

- Copy the decompressed, unpacked backup (should be something like `<backup_timestamp>.sql`) into the container, e.g. `docker cp /local/path/to/postgres_backup.sql <container_id>:/tmp/postgres_backup.sql`
- Exec into the `postgres` container and open a `psql` shell via `psql -U postgres`
- Drop the `webknossos` database -- e.g. `DROP DATABASE webknossos;`
- Create the `webknossos` database -- e.g. `CREATE DATABASE webknossos;`
- Restore the database's state via psql -- e.g. `psql -U postgres -d webknossos -f /tmp/postgres_backup.sql`
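
The same steps can be run non-interactively from the host with `docker exec` instead of an interactive shell. A sketch written to a helper script so it can be re-run; `webknossos-postgres-1` is an assumed container name -- check `docker ps` for the real one:

```shell
cat > restore-postgres-sketch.sh <<'EOF'
#!/bin/sh
# Non-interactive variant of the psql steps above.
# "webknossos-postgres-1" is an assumed container name -- verify with `docker ps`.
set -e
docker cp ./postgres_backup.sql webknossos-postgres-1:/tmp/postgres_backup.sql
docker exec webknossos-postgres-1 psql -U postgres -c 'DROP DATABASE IF EXISTS webknossos;'
docker exec webknossos-postgres-1 psql -U postgres -c 'CREATE DATABASE webknossos;'
docker exec webknossos-postgres-1 psql -U postgres -d webknossos -f /tmp/postgres_backup.sql
EOF
chmod +x restore-postgres-sketch.sh
```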

Your clone should be all set now!