darst jump box
- First create the EC2 instance via: `AWS_PROFILE=lfproduct-test ./ec2/create_ec2_instance.sh`.
- Wait for the instance to be up and then run `./ec2/ssh_into_ec2_pem.sh` (it uses the PEM key file stored locally as `DaRstKey.pem` - not checked into the git repository, it is gitignored).
- When inside the instance, allow password login, create a `darst` user that can `sudo`, and log out.
- Log in as the `darst` user via `./ec2/ssh_into_ec2.sh`. Passwords are stored in `passwords.secret`, which is also gitignored.
- Finally, if you have SSH keys defined on your machine, you can add them to the darst box so you won't need to enter passwords anymore; run `./ec2/add_ssh_keys.sh`.
- Suggested: run `AWS_PROFILE=... aws ec2 describe-instances | grep PublicIpAddress`, get the server's IP address and add a line to `/etc/hosts` like this one: `X.Y.Z.V darst`.
- After that you can log in password-less via `ssh darst@darst` or `ssh root@darst`; that configuration is assumed later.
All subsequent commands are assumed to be run on the darst jump box.
- Use `mfa.sh` (from `mfa/mfa.sh`) to renew your AWS access keys for the next 36 hours.
- Use `testaws.sh` (installed from `aws/testaws.sh`) instead of the plain `aws` command; it just prepends `AWS_PROFILE=lfproduct-test` (see the sketch after this list).
- Similarly with `devaws.sh`, `stgaws.sh` and `prodaws.sh`.
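These per-environment wrappers are thin shells around the underlying CLI. A minimal sketch of what such a wrapper might look like (the actual script in `aws/testaws.sh` may differ in detail):

```bash
#!/bin/bash
# Hypothetical sketch of testaws.sh: run aws with the test profile pre-selected.
AWS_PROFILE=lfproduct-test aws "$@"
```

The `testk.sh`, `testh.sh` and `testeksctl.sh` wrappers described below follow the same pattern, additionally prepending `KUBECONFIG=/root/.kube/kubeconfig_test`.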
Darst uses its local `kubectl` with a specific environment selected:
- Use `testk.sh` (installed from `k8s/testk.sh`) instead of the plain `kubectl` command; it just prepends `KUBECONFIG=/root/.kube/kubeconfig_test AWS_PROFILE=lfproduct-test`.
- Similarly with `devk.sh`, `stgk.sh` and `prodk.sh`.
Darst uses its local `helm` with a specific environment selected:
- Use `testh.sh` (installed from `helm/testh.sh`) instead of the plain `helm` command; it just prepends `KUBECONFIG=/root/.kube/kubeconfig_test AWS_PROFILE=lfproduct-test`.
- Similarly with `devh.sh`, `stgh.sh` and `prodh.sh`.
- You can prepend with `V2=1` to use Helm 2 instead of Helm 3 - but this is only valid as long as the old clusters are still alive (for example dev and stg: `V2=1 devh.sh list` or `V2=1 stgh.sh list`).
- Use `testeksctl.sh` (installed from `eksctl/testeksctl.sh`) instead of the plain `eksctl` command; it just prepends `KUBECONFIG=/root/.kube/kubeconfig_test AWS_PROFILE=lfproduct-test`.
- Similarly with `deveksctl.sh`, `stgeksctl.sh` and `prodeksctl.sh`.
- Use `eksctl/create_cluster.sh {{env}}` to create an EKS v1.13 cluster; replace `{{env}}` with `dev`, `stg`, `test` or `prod`. Example: `./eksctl/create_cluster.sh test`.
- Use `eksctl/get_cluster.sh {{env}}` to get current cluster info; replace `{{env}}` with `dev`, `stg`, `test` or `prod`. Example: `./eksctl/get_cluster.sh test`.
- Use `eksctl/delete_cluster.sh {{env}}` to delete the cluster; replace `{{env}}` with `dev`, `stg`, `test` or `prod`. Example: `./eksctl/delete_cluster.sh test`.
Those scripts are installed in `/usr/local/bin` from the `./utils/` directory.
- Use `change_namespace.sh test namespace-name` to change the current namespace in the `test` env to `namespace-name`.
- Use `pod_shell.sh env namespace-name pod-name` to bash into the pod.
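For example (the `devstats` namespace and `devstats-postgres-0` pod are the ones used in the Patroni section below; adjust to whatever you need):

```bash
# Switch the default namespace for the test env, then open a shell in a pod.
change_namespace.sh test devstats
pod_shell.sh test devstats devstats-postgres-0
```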
For each env (`test`, `dev`, `staging`, `prod`); the examples below use the `test` env:
- Create the EKS v1.13 cluster: `./eksctl/create_cluster.sh test`. You can drop the cluster via `./eksctl/delete_cluster.sh test`.
- Create cluster roles: `./cluster-setup/setup.sh test`. To delete: `./cluster-setup/delete.sh test`.
- Init Helm on the cluster: `testh.sh init`.
- Create the local-storage storage class and mount NVMe volumes in the `devstats`, `elastic` and `grimoire` node groups: `./local-storage/setup.sh test`. You can delete via `./local-storage/delete.sh test`.
- Install OpenEBS and the NFS provisioner: `./openebs/setup.sh test`. You can delete via `./openebs/delete.sh test`.
Note that the current setup uses an external ElasticSearch; deploying your own ES instance in Kubernetes is now optional.
- Install the ElasticSearch Helm chart: `./es/setup.sh test`. You can delete via `./es/delete.sh test`.
- When ES is up and running (all 5 ES pods should be in the `Running` state: `testk.sh get po -n dev-analytics-elasticsearch`), test it via: `./es/test.sh test`.
- You can examine ES contents via the `./es/get_*.sh` scripts. For example: `./es/get_es_indexes.sh test`.
- For more complex queries you can use: `./es/query_es_index.sh test ...` and/or `./es/search_es_index.sh test ...`.
- Clone the `cncf/da-patroni` repo and change directory to that repo.
- Run `./setup.sh test` to deploy on the `test` env.
- Run `./test.sh test` to test the database (it should list databases).
- Run `./config.sh test` to configure Patroni once it is up and running; check for `3/3` Ready from `testk.sh get sts -n devstats devstats-postgres`.
- To delete the entire Patroni installation run `./delete.sh test`.
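One possible condensed sequence of the Patroni steps above, run from the cloned `cncf/da-patroni` repo:

```bash
./setup.sh test
testk.sh get sts -n devstats devstats-postgres   # re-check until it shows 3/3 Ready
./test.sh test                                   # should list databases
./config.sh test
```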
- Init the `dev-analytics-api` DB users, roles and permissions: `./dev_analytics/init.sh test`.
- You can delete the `dev-analytics-api` database via `./dev_analytics/delete.sh test`.
To do the same for the external RDS:
- Init: `PG_HOST=url PG_USER=postgres PGPASSWORD=rds_pwd PG_PASS=new_da_pwd ./dev_analytics/init_external.sh test`.
- You can delete via: `PG_HOST=url PG_USER=postgres PGPASSWORD=rds_pwd ./dev_analytics/delete_external.sh test`.
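For example (the endpoint and password values below are made-up placeholders; `PGPASSWORD` is presumably the RDS admin password and `PG_PASS` the new dev-analytics password to be set):

```bash
# Example invocation against an external RDS instance - placeholder values only.
PG_HOST=my-rds-instance.abc123.us-east-1.rds.amazonaws.com \
PG_USER=postgres \
PGPASSWORD='rds-admin-password' \
PG_PASS='new-da-password' \
./dev_analytics/init_external.sh test
```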
Optional (this will be done automatically by the `dev-analytics-api` app deployment):
- Deploy the `dev-analytics-api` DB structure: `./dev_analytics/structure.sh test`.
- Deploy the populated `dev-analytics-api` DB structure: `./dev_analytics/populate.sh test`. You will need the `dev_analytics/dev_analytics_env.sql.secret` file, which is gitignored due to sensitive data (`env` = `test` or `prod`).
- You can see database details from the Patroni stateful pod: `pod_shell.sh test devstats devstats-postgres-0`, then `psql dev_analytics`, finally: `select id, name, slug from projects;`.
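Put together, inspecting the API database from the Patroni pod looks roughly like this:

```bash
pod_shell.sh test devstats devstats-postgres-0
# inside the pod:
psql dev_analytics
# inside psql:
#   select id, name, slug from projects;
#   \q
```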
Note that we now use SDS, which does not require Redis; it replaces the entire Mordred orchestration stack.
- You need a special node setup for Redis: `./redis-node/setup.sh test`. To remove the special node configuration run: `./redis-node/delete.sh test`.
- Run `./redis/setup.sh test` to deploy Redis on the `test` env.
- Run `./redis/test.sh test` to test the Redis installation.
- Run `./redis/list_dbs.sh test` to list Redis databases.
- To delete Redis run `./redis/delete.sh test`.
- Clone the `cncf/json2hat-helm` repo and change directory to that repo.
- Run `./setup.sh test` to deploy on the `test` env.
- To delete run `./delete.sh test`.
The current DA V1 does not use DevStats; installing DevStats is optional.
- Clone the `cncf/devstats-helm-lf` repo and change directory to that repo.
- Run `./setup.sh test` to deploy on the `test` env. Note that this currently deploys only 4 projects (just a demo); all 65 projects would take days to provision.
- Run `./add_projects.sh test 4 8` to add 4 new projects with indices 4, 5, 6, 7 (see `devstats-helm/values.yaml` for project indices).
- To delete run `./delete.sh test`.
- For each file in `mariadb/secrets/*.secret.example` create the corresponding `mariadb/secrets/*.secret` file. `*.secret` files are not checked into the GitHub repository.
- Each file must be saved without a newline at the end. `vim` automatically adds one; to remove it use `truncate -s -1 filename` (see the snippet after this list).
- Install the MariaDB database: `./mariadb/setup.sh test`. You can delete via `./mariadb/delete.sh test`.
- Once installed, test if MariaDB works (it should list databases): `./mariadb/test.sh test`.
- Provision the Sorting Hat structure: `./mariadb/structure.sh test`.
- Populate merged `dev` and `staging` Sorting Hat data: `./mariadb/populate.sh test`. You will need the `cncf/merge-sh-dbs` repo cloned in `../merge-sh-dbs` and the actual merged data generated (that merged SQL is checked into the repo).
- Run `./mariadb/backups.sh test` to set up daily automatic backups.
- Run `./mariadb/shell.sh test` to get into the MariaDB shell.
- Run `./mariadb/bash.sh test` to get into a bash shell with access to the DB.
- Run `./mariadb/external.sh test` to make MariaDB available externally. Wait for the ELB to be created and get its address via: `testk.sh get svc --all-namespaces | grep mariadb-service-rw | awk '{ print $5 }'`.
- Run `[SHHOST=...] [ALL=1] ./mariadb/external_access.sh test` to access MariaDB using its ELB. `SHHOST=...` is needed when you have no access to `testk.sh`/`prodk.sh` (for example outside the darst box).
- If you do not specify `SHHOST=...` it will try to use `testk.sh get svc --all-namespaces | grep mariadb-service-rw | awk '{ print $5 }'` to locate the external ELB for you (which obviously only works from the darst jump box).
- If you specify `ALL=1` it will use a possibly read-only connection (it will not try to reach the master node, which allows writes, but will load-balance between the master node and slave node(s)).
- Run `./mariadb/external_delete.sh test` to delete the service exposing MariaDB externally.
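For example, to prepare one of the secret files without a trailing newline (the `db_password` file name below is only an illustration; use the actual `*.secret.example` names present in `mariadb/secrets/`):

```bash
# Copy the example, edit it, then strip the trailing newline that vim appends.
cp mariadb/secrets/db_password.secret.example mariadb/secrets/db_password.secret
vim mariadb/secrets/db_password.secret
truncate -s -1 mariadb/secrets/db_password.secret
```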
- Use `DOCKER_USER=... ./mariadb/backups_image.sh` to build the MariaDB backups docker image.
- Run
./backups-page/setup.sh testto setup static page allowing to see generated backups. (NFS shared RWX volume access). - Run
./backups-page/elbs.sh testto see the final URLs where MariaDB and Postgres backups are available, give AWS ELBs some time to be created first. - Use
./backups-page/delete.sh testto delete backups static page.
- Clone the `dev-analytics-sortinghat-api` repo: `git clone https://github.com/LF-Engineering/dev-analytics-sortinghat-api.git` and change directory to that repo.
- Use `docker build -f Dockerfile -t "docker-user/dev-analytics-sortinghat-api" .` to build the `dev-analytics-sortinghat-api` image, replacing `docker-user` with your docker user.
- Run `docker push "docker-user/dev-analytics-sortinghat-api"`.
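The same build-and-push pattern applies to the other images below (`dev-analytics-ui`, `dev-analytics-grimoire-docker-minimal`, `dev-analytics-api-env`, `dev-analytics-kibana`); for example, with your docker user held in a shell variable:

```bash
# DOCKER_USER is an example variable holding your Docker Hub user name.
DOCKER_USER=your-docker-user
docker build -f Dockerfile -t "${DOCKER_USER}/dev-analytics-sortinghat-api" .
docker push "${DOCKER_USER}/dev-analytics-sortinghat-api"
```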
- Clone the `dev-analytics-ui` repo: `git clone https://github.com/LF-Engineering/dev-analytics-ui.git` and change directory to that repo.
- Use `docker build -f Dockerfile -t "docker-user/dev-analytics-ui" .` to build the `dev-analytics-ui` image, replacing `docker-user` with your docker user.
- Run `docker push "docker-user/dev-analytics-ui"`.
You should build the minimal image; refer to the `dev-analytics-sortinghat-api` repo README for details.
- Clone the `dev-analytics-grimoire-docker` repo: `git clone https://github.com/LF-Engineering/dev-analytics-grimoire-docker.git` and change directory to that repo.
- Run `MINIMAL=1 ./collect_and_build.sh`.
- Use `docker build -f Dockerfile.minimal -t "docker-user/dev-analytics-grimoire-docker-minimal" .` to build the `dev-analytics-grimoire-docker-minimal` image, replacing `docker-user` with your docker user.
- Run `docker push "docker-user/dev-analytics-grimoire-docker-minimal"`.
For your own user:
- Clone the `dev-analytics-api` repo: `git clone https://github.com/LF-Engineering/dev-analytics-api.git` and change directory to that repo.
- If you changed any datasources, please use `LF-Engineering/dev-analytics-api/scripts/check_addresses.sh` before building the API image to make sure all datasources are present.
- Make sure you are on the `test` or `prod` branch. Use `test` or `prod` instead of `env`.
- Use `docker build -f Dockerfile -t "docker-user/dev-analytics-api-env" .` to build the `dev-analytics-api-env` image, replacing `docker-user` with your docker user and `env` with `test` or `prod`.
- Run `docker push "docker-user/dev-analytics-api-env"`.
Using the AWS account:
- Run: `./dev-analytics-api/build-image.sh test`.
- Clone the `dev-analytics-circle-docker-build-base` repo: `git clone https://github.com/LF-Engineering/dev-analytics-circle-docker-build-base.git` and change directory to that repo.
- Use `docker build -f Dockerfile -t "docker-user/dev-analytics-circle-docker-build-base" .` to build the `dev-analytics-circle-docker-build-base` image, replacing `docker-user` with your docker user.
- Run `docker push "docker-user/dev-analytics-circle-docker-build-base"`.
Note that DA V1 now uses an external Kibana by default, so building your own Kibana image and installing it in Kubernetes is optional.
- Clone the `dev-analytics-kibana` repo: `git clone https://github.com/LF-Engineering/dev-analytics-kibana.git` and change directory to that repo.
- Run `./package_plugins_for_version.sh`.
- Use `docker build -f Dockerfile -t "docker-user/dev-analytics-kibana" .` to build the `dev-analytics-kibana` image, replacing `docker-user` with your docker user.
- Run `docker push "docker-user/dev-analytics-kibana"`.
- Make sure that you have the `dev-analytics-api-env` image built (see the `dev-analytics-api image` section). Currently we're using an image built outside of AWS: `lukaszgryglicki/dev-analytics-api-env`.
- Run `[ES_EXTERNAL=1] [KIBANA_INTERNAL=1] [NO_DNS=1] DOCKER_USER=... ./dev-analytics-api/setup.sh test` to deploy. You can delete via `./dev-analytics-api/delete.sh test`. Currently the image is already built for `DOCKER_USER=lukaszgryglicki`.
- Note that during the deployment the `.circleci/deployments/test/secrets.ejson` file is regenerated with new key values. You may want to go to the `dev-analytics-api` repo and commit those changes (`secrets.ejson` is encrypted and can be committed into the repo).
- You can query a given project's config via `[NO_DNS=1] ./dev-analytics-api/project_config.sh test project-name [config-option]`; replace `project-name` with, for example, `linux-kernel`. To see all projects use `./grimoire/projects.sh test` - use the `Slug` column (see the example after this list).
- The optional `[config-option]` allows returning only a selected subset of the project's configuration; allowed values are: `mordred`, `environment`, `aliases`, `projects`, `credentials`. All those sections are returned if `[config-option]` is not specified.
- You can query any API call via `./dev-analytics-api/query.sh test ...`.
- You can deploy the populated `dev-analytics-api` DB structure: `./dev_analytics/populate.sh test`. You will need the `dev_analytics/dev_analytics.sql.secret` file, which is gitignored due to sensitive data. Without this step you will have no projects configured.
- Once the API server is up and running, you should add permissions for affiliations editing in projects: go to `LF-Engineering/dev-analytics-api:permissions` and run the `./add_permissions.sh test` script.
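For example, to fetch just the `mordred` section of the `linux-kernel` project's configuration (the project name and the config-option are the examples/values listed above):

```bash
# List all project slugs, then query one section of a single project's config.
./grimoire/projects.sh test            # use the Slug column
./dev-analytics-api/project_config.sh test linux-kernel mordred
```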
- Make sure that you have the `dev-analytics-ui` image built (see the `dev-analytics-ui image` section). Currently we're using an image built outside of AWS: `lukaszgryglicki/dev-analytics-ui`.
- For each file in `dev-analytics-ui/secrets/*.secret.example` provide the corresponding `*.secret` file. Each file must be saved without a newline at the end; `vim` automatically adds one, to remove it use `truncate -s -1 filename`.
- If you want to skip setting external DNS, prepend the `setup.sh` call with `NO_DNS=1`.
- Run `[API_INTERNAL=1] [NO_DNS=1] DOCKER_USER=... ./dev-analytics-ui/setup.sh test` to deploy. You can delete via `./dev-analytics-ui/delete.sh test`. Currently the image is already built for `DOCKER_USER=lukaszgryglicki`.
- Make sure that you have the `dev-analytics-sortinghat-api` image built (see the `dev-analytics-sortinghat-api image` section). Currently we're using an image built outside of AWS: `lukaszgryglicki/dev-analytics-sortinghat-api`.
- Run `DOCKER_USER=... ./dev-analytics-sortinghat-api/setup.sh test` to deploy. You can delete via `./dev-analytics-sortinghat-api/delete.sh test`. Currently the image is already built for `DOCKER_USER=lukaszgryglicki`.
Note that DA V1 now uses SDS for orchestrating the entire Grimoire stack, so the Mordred deployments described below are redundant and not needed at all.
- Use `[SORT=sort_order] ./grimoire/projects.sh test` to list deployments for all projects.
- Use `DOCKER_USER=... LIST=install ./grimoire/projects.sh test` to show install commands.
- Use `DOCKER_USER=... LIST=upgrade ./grimoire/projects.sh test` to show upgrade commands.
- Use `LIST=uninstall ./grimoire/projects.sh test` to show uninstall commands.
- Use `LIST=slug ./grimoire/projects.sh test` to show only the projects' unique `slug` values.
- Use the command(s) generated to deploy a given project, for example: `[WORKERS=n] [NODE=selector|-] DOCKER_USER=user-name ./grimoire/grimoire.sh test install none linux-kernel` (see the example flow after this list).
- Use the command(s) to delete any project: `./grimoire/delete.sh test none linux-kernel`.
- Use an example command to manually debug a deployment: `WORKERS=1 NODE=- DOCKER_USER=lukaszgryglicki DRY=1 NS=grimoire-debug DEBUG=1 ./grimoire/grimoire.sh test install none yocto`. More details here.
- To redeploy an existing project (for example after API server updates (remember to update the API DB and to check `project_config.sh`) or after project config changes) use: `./grimoire/redeploy.sh test unique-proj-search-keyword`.
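A typical flow for a single project might look like this (the project slug, worker count and docker user are just the examples used above):

```bash
# Generate the install command list, then install and later delete one project.
DOCKER_USER=lukaszgryglicki LIST=install ./grimoire/projects.sh test
WORKERS=1 DOCKER_USER=lukaszgryglicki ./grimoire/grimoire.sh test install none linux-kernel
./grimoire/delete.sh test none linux-kernel
```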
- Use `DOCKER_USER=user-name ./sortinghat-cronjob/setup.sh test install` to install the ID maintenance cronjob.
- Use `DOCKER_USER=user-name ./sortinghat-cronjob/setup.sh test upgrade` to upgrade the ID maintenance cronjob.
- Use `./sortinghat-cronjob/delete.sh` to delete it.
Note that DA V1 now uses an external Kibana by default, so building your own Kibana and installing it in Kubernetes is optional.
- Make sure that you have the `dev-analytics-kibana` image built (see the `dev-analytics-kibana image` section). Currently we're using an image built outside of AWS: `lukaszgryglicki/dev-analytics-kibana`.
- Run `[DRY=1] [ES_EXTERNAL=1] DOCKER_USER=... ./kibana/setup.sh test install` to deploy. You can delete via `./kibana/delete.sh test`. Currently the image is already built for `DOCKER_USER=lukaszgryglicki`.
Replace `test` occurrences with another env as needed:
SSL and hostname configuration:
- Use `ARN_ONLY=1 ./dnsssl/dnsssl.sh test` to get the SSL certificate ARN for the `test` env.
- Use `./dnsssl/dnsssl.sh test kibana dev-analytics-kibana-elb kibana.test.lfanalytics.io` to configure SSL/hostname for the `test` environment Kibana load balancer.
- Use `./dnsssl/dnsssl.sh test dev-analytics-elasticsearch elasticsearch-master-elb elastic.test.lfanalytics.io` to configure SSL/hostname for the `test` environment ElasticSearch load balancer.
- Use `./dnsssl/dnsssl.sh test dev-analytics-api-test dev-analytics-api-lb api.test.lfanalytics.io` to configure SSL/hostname for the `test` environment API load balancer.
- Use `./dnsssl/dnsssl.sh test dev-analytics-ui dev-analytics-ui-lb ui.test.lfanalytics.io` to configure SSL/hostname for the `test` environment UI load balancer.
Route 53 DNS configuration:
- Use `./route53/setup.sh test kibana dev-analytics-kibana-elb kibana` to configure DNS for the `test` environment Kibana load balancer.
- Use `./route53/setup.sh test dev-analytics-elasticsearch elasticsearch-master-elb elastic` to configure DNS for the `test` environment ElasticSearch load balancer.
- Use `./route53/setup.sh test dev-analytics-api-test dev-analytics-api-lb api` to configure DNS for the `test` environment API load balancer.
- Use `./route53/setup.sh test dev-analytics-ui dev-analytics-ui-lb ui` to configure DNS for the `test` environment UI load balancer.
- Make changes to the `dev-analytics-api` repo and commit them to the `test` or `prod` branch.
- Build a new API image as described here.
Test cluster:
- Edit the API deployment: `testk.sh get deployment --all-namespaces | grep analytics-api` and then `testk.sh edit deployment -n dev-analytics-api-test dev-analytics-api` (a condensed version of this flow is shown after this list).
- Add or remove the `:latest` tag on all images: `lukaszgryglicki/dev-analytics-api-test` <-> `lukaszgryglicki/dev-analytics-api-test:latest`, to inform Kubernetes that it needs to do a rolling update of the API.
- Once Kubernetes recreates the API pod, shell into it: `testk.sh get po --all-namespaces | grep analytics-api` and then `pod_shell.sh test dev-analytics-api-test dev-analytics-api-58d95497fb-hm8gq /bin/sh`.
- While inside the API pod run: `bundle exec rake db:drop; bundle exec rake db:create; bundle exec rake db:setup`, then `exit`.
- Alternatively, instead of `bundle exec rake db:drop`, run: `pod_shell.sh test devstats devstats-postgres-0`, then `psql`, `select pg_terminate_backend(pid) from pg_stat_activity where datname = 'dev_analytics_test'; drop database dev_analytics_test;`, `\q`. Then `cd ../dev-analytics-api/permissions/ && ./add_permissions.sh test && cd ../../darst/`.
- Now confirm the new projects configuration on the API database: `pod_shell.sh test devstats devstats-postgres-0`, then inside the pod: `psql dev_analytics_test`, `select id, name, slug from projects order by slug;`.
- See changes via: `./grimoire/projects.sh test | grep projname`; see a specific project configuration: `./dev-analytics-api/project_config.sh test proj-slug`.
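A condensed version of the test-cluster flow above (the pod name is only an example taken from the steps above; look up the real one with the grep first):

```bash
# Toggle the :latest tag in the deployment to trigger a rolling update,
# then re-create the API database from inside the new API pod.
testk.sh get deployment --all-namespaces | grep analytics-api
testk.sh edit deployment -n dev-analytics-api-test dev-analytics-api
testk.sh get po --all-namespaces | grep analytics-api
pod_shell.sh test dev-analytics-api-test dev-analytics-api-58d95497fb-hm8gq /bin/sh
# inside the API pod:
#   bundle exec rake db:drop; bundle exec rake db:create; bundle exec rake db:setup
#   exit
```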
Prod cluster:
- Edit the API deployment: `prodk.sh get deployment --all-namespaces | grep analytics-api` and then `prodk.sh edit deployment -n dev-analytics-api-prod dev-analytics-api`.
- Add or remove the `:latest` tag on all images: `lukaszgryglicki/dev-analytics-api-prod` <-> `lukaszgryglicki/dev-analytics-api-prod:latest`, to inform Kubernetes that it needs to do a rolling update of the API.
- Once Kubernetes recreates the API pod, shell into it: `prodk.sh get po --all-namespaces | grep analytics-api`.
- Run: `pod_shell.sh test devstats devstats-postgres-0`, then `pg_dump -Fc dev_analytics_test -f dev_analytics.dump && pg_dump dev_analytics_test -f dev_analytics.sql && exit` (a condensed dump/restore sequence is shown after this list).
- Run: `testk.sh -n devstats cp devstats-postgres-0:dev_analytics.dump dev_analytics.dump && testk.sh -n devstats cp devstats-postgres-0:dev_analytics.sql dev_analytics.sql`.
- Run `pod_shell.sh test devstats devstats-postgres-0`, then `rm dev_analytics.* && exit`; locally, `mv dev_analytics.sql dev_analytics/dev_analytics.sql.secret`.
- Run `prodk.sh -n devstats cp dev_analytics.dump devstats-postgres-0:dev_analytics.dump && mv dev_analytics.dump ~`.
- Run: `pod_shell.sh prod devstats devstats-postgres-0`, then `psql`, `select pg_terminate_backend(pid) from pg_stat_activity where datname = 'dev_analytics'; drop database dev_analytics;`, `\q`.
- Run: `createdb dev_analytics; pg_restore -d dev_analytics dev_analytics.dump; rm dev_analytics.dump; exit`.
- Now confirm the new projects configuration on the API database: `pod_shell.sh prod devstats devstats-postgres-0`, then inside the pod: `psql dev_analytics`, `select id, name, slug from projects order by slug;`.
- See changes via: `./grimoire/projects.sh prod | grep projname`; see a specific project configuration: `./dev-analytics-api/project_config.sh prod proj-slug`.
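Put together, the dump-from-test / restore-to-prod part of the flow above is roughly:

```bash
# Dump the test API database, copy it out, push it into the prod Patroni pod and restore it.
pod_shell.sh test devstats devstats-postgres-0
#   pg_dump -Fc dev_analytics_test -f dev_analytics.dump
#   pg_dump dev_analytics_test -f dev_analytics.sql
#   exit
testk.sh -n devstats cp devstats-postgres-0:dev_analytics.dump dev_analytics.dump
testk.sh -n devstats cp devstats-postgres-0:dev_analytics.sql dev_analytics.sql
mv dev_analytics.sql dev_analytics/dev_analytics.sql.secret
prodk.sh -n devstats cp dev_analytics.dump devstats-postgres-0:dev_analytics.dump
pod_shell.sh prod devstats devstats-postgres-0
#   (first drop the existing dev_analytics database as described above, then:)
#   createdb dev_analytics
#   pg_restore -d dev_analytics dev_analytics.dump
#   rm dev_analytics.dump
#   exit
```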
- Use the script `sources_check/check.sh`; see the comments inside this script.
- Also use `LF-Engineering/dev-analytics-api/scripts/check_addresses.sh` before building the API image to make sure all datasources are present.
If you want to merge the dev and staging Sorting Hat databases:
- Clone `cncf/merge-sh-dbs`.
- Follow the `README.md` instructions.