First, make sure the following are installed:
- Docker & docker-compose or Docker Desktop
- Node.js v14 or greater
- Go v1.17 or greater
In ./client, run `npm install`.
If you want automatic reloading of the Go server during development, install fresh:
`go install github.com/pilu/fresh@latest` (on Go 1.17+ the `@latest` suffix is required)
Setting ENV:
Make sure the .env files in the following directories are set. In each directory you'll find a .env.example; create a file called .env next to it and fill in the values (a bootstrap sketch follows the list):
- ./ (root)
- ./client
- ./server
- ./docker_db
- ./swarm
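A minimal sketch for bootstrapping all five .env files at once (you still need to fill in the values by hand):

```sh
# Copy each .env.example to .env, without overwriting existing files.
for dir in . ./client ./server ./docker_db ./swarm; do
  [ -f "$dir/.env" ] || cp "$dir/.env.example" "$dir/.env"
done
```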
In ./ run `docker compose up -d` (only if you need logging and the other services).
In ./docker_db run `docker compose up -d`.
In ./client run `npm start`.
If you have installed fresh:
- In ./server run `fresh`
Otherwise:
- In ./server run `go run main.go`
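Once the client and server are running, a quick smoke test (assuming the server listens on port 8080 and serves under /api, as the simulator section below does):

```sh
# Prints the HTTP status code; any response at all means the server is up.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/api
```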
Floating IP (static ip): 138.68.125.155
- `brew install virtualbox`
- `brew install virtualbox-extension-pack`
- `brew install vagrant`
- `brew install vagrant-completion`
If you have no SSH key, generate one with `ssh-keygen -t rsa`.
Check that Vagrant installed correctly with `vagrant --version`.
Documentation: https://www.vagrantup.com/docs
After you have installed Vagrant and created your Vagrantfile, you can start the VM with `vagrant up`.
To delete the VM again, run `vagrant destroy`.
To re-run the provision scripts on an existing VM, run `vagrant provision --provision-with shell`.
See more here: https://www.vagrantup.com/docs/cli/up
Install the following Vagrant plugins:
- `vagrant plugin install vagrant-digitalocean` (enables the DigitalOcean provider)
- `vagrant plugin install vagrant-scp`
- `vagrant plugin install vagrant-vbguest`
- `vagrant plugin install vagrant-reload`
- `vagrant plugin install vagrant-env` (loads .env files)
- For VMware: `vagrant plugin install vagrant-vmware-desktop`
If you cannot create a private network on macOS, see https://discuss.hashicorp.com/t/vagrant-2-2-18-osx-11-6-cannot-create-private-network/30984/22:
Create the file /etc/vbox/networks.conf if it doesn't exist, then add the line `* 0.0.0.0/0 ::/0` to it and save.
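On a Linux host this can be done directly from the shell (assuming sudo rights):

```sh
# Allow VirtualBox to create host-only networks on any IP range.
sudo mkdir -p /etc/vbox
echo "* 0.0.0.0/0 ::/0" | sudo tee /etc/vbox/networks.conf
```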
Now we can create networks in the Vagrantfile:

```ruby
# ...
config.vm.network "private_network", type: "dhcp" # <---
# ...
config.vm.define "webserver", primary: true do |server|
  server.vm.network "private_network", ip: "192.168.62.1" # <---
  server.vm.network "forwarded_port", guest: 8080, host: 8080
```

To sync folders into the VM, add the following line:

```ruby
config.vm.synced_folder ".", "/vagrant", type: "rsync" # "rsync" for DigitalOcean, "virtualbox" for VirtualBox
```

Hidden files (dot files) are not synced, so we need to fetch them manually:

```ruby
config.vm.define "webserver", primary: true do |server|
  server.vm.provision "file", source: ".env", destination: ".env"
```

Create .env in the root folder and copy the contents of .env.example. Fill out the fields with your information. The keys DIGITAL_OCEAN_TOKEN and SSH_KEY_NAME are configured from DigitalOcean.
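Note that rsync-type synced folders are only synced on `vagrant up` and are not watched afterwards; to push updated files (including a changed .env) into a running VM, something like the following should work:

```sh
# Re-sync the rsync-type synced folders into the running VM.
vagrant rsync
# Re-run only the file provisioners (e.g. the .env upload above).
vagrant provision --provision-with file
```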
Make sure the requests library is installed; you can install it with `pip install requests`. Also make sure the library is on your path afterwards.
In /python_api_tests run: `python3 minitwit_simulator.py "http://localhost:8080/api"`
To follow the server log inside the Vagrant VM, run `tail -f -n100 out.log`.
Run the command `cat schema.sql | sqlite3 db.db` in bash.
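To check that the schema was applied, you can list the tables (the names printed depend on schema.sql):

```sh
# Should print the tables defined in schema.sql.
sqlite3 db.db '.tables'
```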
Run `travis encrypt-file minitwit_dd --com -r aske-w/itu-minitwit`.
Run `mysql -u $MYSQL_USER -p$MYSQL_PASSWORD $MYSQL_DATABASE`.
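If the MYSQL_* variables live in one of the .env files, they can be loaded into the current shell first (assuming the file contains plain KEY=VALUE lines):

```sh
# Export every variable defined in .env, then connect.
set -a; . ./.env; set +a
mysql -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE"
```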
`docker-compose up -d` will run Prometheus and Grafana. If you are running the Go server locally, configure the `targets` field in the file prometheus/prometheus.yml to

```yaml
- targets:
    - host.docker.internal:8080
```

otherwise specify localhost:8080.
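After restarting the containers, you can verify that Prometheus actually scrapes the server via its HTTP API:

```sh
# Lists all scrape targets and their health ("up"/"down").
curl -s http://localhost:9090/api/v1/targets
```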
To add a data source in Grafana, navigate to Configuration/Data sources and press "Add data source". Select Prometheus and set the URL to the host of Prometheus (http://localhost:9090). Then it's possible to create dashboards with the different metrics.
If you get errors along the lines of unsequential data in the wal or chunks_head directory, retain the lowest sequence of file names in wal and run `rm -rf chunks_head` to remove the chunks_head directory. No data should be lost, but remember to back it up anyway.
Once Prometheus has been added as a data source, we can add a new panel under Dashboard/New. We can, for example, monitor the memory usage of the different containers in bytes with the query `sum(container_memory_usage_bytes{name=~".+"}) by(name)`. It only shows containers that have a name.
On each PR or push to the development branch, the GitHub Action static_analysis is run. The action checks code formatting, reports suspicious constructs, and looks for potential security issues in the source code. If a check fails, the pipeline fails.
Checks the code formatting. If the code is not formatted correctly, it throws an error and stops the pipeline.
Reports suspicious constructs in the source code, for example Printf calls whose arguments do not align with the format string.
Scans the source code for security problems and reports them. Gosec can be configured to run a set of rules; in our case we run all rules except G104: `./bin/gosec -exclude=G104 ./...`. G104 audits unchecked errors, and since we do not handle all errors, the check would fail. This should be handled in the future.
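To reproduce the pipeline locally, the three checks correspond roughly to the commands below (assuming the formatting and construct checks are gofmt and go vet; only the gosec invocation is confirmed above):

```sh
# Fail if any file is not gofmt-formatted (assumption: gofmt is the formatter used).
test -z "$(gofmt -l .)"
# Report suspicious constructs such as mismatched Printf arguments.
go vet ./...
# Run all gosec rules except G104 (unchecked errors).
./bin/gosec -exclude=G104 ./...
```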
After the Docker image is built, https://github.com/Azure/container-scan scans it for vulnerabilities (see cicd.yml).
If you have issues with the container crashing, increase the container's max memory.
Run the script in .htpasswd.example, replacing USERNAME with the username used for Kibana and PASSWORD with a password.
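If you prefer not to use the example script, an equivalent way to generate the entry (assuming htpasswd from apache2-utils is available, with an openssl fallback):

```sh
# Creates .htpasswd with a bcrypt-hashed password for USERNAME (prompts for the password).
htpasswd -cB .htpasswd USERNAME
# Alternative without apache2-utils:
printf "USERNAME:%s\n" "$(openssl passwd -apr1 PASSWORD)" > .htpasswd
```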
Navigate to port 5601. Under the Logs tab, enter the following query to see the minitwit-server logs:
container.image.name: "itu-minitwit_server"
On port 5601, navigate to Stack Management at the bottom of the sidebar, then check that the itu-minitwit-* index has been registered under Data/Index Management.
Under Kibana/Index Patterns do the following:
- Create a new index pattern.
- Type itu-minitwit-* as the name.
- Select the @timestamp field.
Once the index pattern is created, navigate to Analytics/Discover in the sidebar and select the newly created index pattern. We can then filter by fields; for example, if we only want to see the message of each log, we can press the plus icon next to message under Available fields.
- In the swarm directory, run the following files in this order: prov.sh, then provisioner.sh.
- Copy worker_provision.sh onto your worker nodes and manager_provision.sh onto your manager nodes.
- SSH into your manager nodes and run their provisioner scripts.
- SSH into your worker nodes and run their provisioner scripts when the provisioner scripts on the managers tell you to.
- Finished. The Docker compose file should be deployed into the swarm.
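For reference, the provisioner scripts presumably automate the standard swarm bootstrap, which by hand looks roughly like this (<MANAGER_IP> is a placeholder):

```sh
# On the manager node: initialise the swarm and print the worker join command.
docker swarm init --advertise-addr <MANAGER_IP>
docker swarm join-token worker
# On each worker node: paste the join command printed above, e.g.
# docker swarm join --token <TOKEN> <MANAGER_IP>:2377
```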
When adding the SSH keys to GitHub secrets, they need to be base64-encoded:
- XSSH_PRIVATE_KEY is not encoded
- PROV_SSH_PUBLIC and PROV_SSH_PRIVATE are encoded
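To produce the encoded values (assuming the keys live in ~/.ssh; adjust filenames as needed):

```sh
# GNU coreutils: -w 0 disables line wrapping. On macOS use: base64 < file | tr -d '\n'
base64 -w 0 < ~/.ssh/id_rsa       # value for PROV_SSH_PRIVATE
base64 -w 0 < ~/.ssh/id_rsa.pub   # value for PROV_SSH_PUBLIC
```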
On each push to master, the deploy pipeline is started; it starts a new stack with the command `docker stack deploy --compose-file docker-compose.yml $STACK_NAME`. Before deploying the stack, we need to copy a set of files needed by docker-compose.yml to the manager and worker nodes.
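A sketch of that step, assuming SSH access as root to the manager node below (the user and exact file list are illustrative):

```sh
# Copy the compose file (plus any other required files) to the manager node...
scp docker-compose.yml root@165.22.67.59:~/
# ...then deploy the stack on the swarm from there ($STACK_NAME expands locally).
ssh root@165.22.67.59 "docker stack deploy --compose-file docker-compose.yml $STACK_NAME"
```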
Manager node ip: 165.22.67.59
Worker node ip: 165.22.81.184