Finishing setup
It would be ideal to set this up on AWS, because AWS makes starting new machines easy and keeps the networking setup simple. This section assumes you have an account set up and the ability to launch EC2 instances. Furthermore, you will need a domain you can use to ease the process of routing users to their core and HMI.
I will still launch only one server agent each for Consul and Nomad, but in production you should have three or five of each. The server agents will run on one EC2 instance and the client agents on another.
In case you are unsure of a starting point, here is what I've used for development:
- The image I will use is the Amazon Linux AMI
- Both will have a custom security group with the following inbound rules:
  - SSH on port 22
  - Custom TCP on ports 8400, 8500, 8600, 4646-4648, 8300-8302, and 20000-60000 (Nomad allocates containers on these ports), with traffic allowed only from inside the network
  - Custom TCP on port 3000, accessible from anywhere. This is how users access Manticore's web page.
  - Custom TCP on a port range the machine does not otherwise use. This range will be used to open up ports for TCP connections from the SDL app to core.
Most of the ports are for Consul and Nomad to communicate with each other across machines in the same network.
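If you would rather script these rules, here is a minimal AWS CLI sketch of the inbound rules above. The security group ID sg-12345678 and the network CIDR 172.31.0.0/16 are placeholders; substitute your own values, and narrow the last rule to whatever SDL port range you chose.
# SSH, open to everyone (or restrict to your own IP)
aws ec2 authorize-security-group-ingress --group-id sg-12345678 \
    --protocol tcp --port 22 --cidr 0.0.0.0/0
# cluster ports, reachable only from inside the network
for PORTS in 8400 8500 8600 4646-4648 8300-8302 20000-60000; do
    aws ec2 authorize-security-group-ingress --group-id sg-12345678 \
        --protocol tcp --port "$PORTS" --cidr 172.31.0.0/16
done
# Manticore's web page, reachable from anywhere
aws ec2 authorize-security-group-ingress --group-id sg-12345678 \
    --protocol tcp --port 3000 --cidr 0.0.0.0/0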
Once the two machines have started up, SSH into the smaller instance and download the Consul and Nomad binaries here and here. Then run the Consul and Nomad server agents:
# Send the log streams to files and run the processes in the background.
sudo consul agent -server -data-dir="/tmp/consul" -node=consul-server -bind=<machine's ip> -bootstrap-expect 1 >> /var/log/consul/output.log &
sudo nomad agent -config server.hcl >> /var/log/nomad/output.log &
Here's the server.hcl file:
log_level = "DEBUG"
data_dir = "/tmp/nomad"
bind_addr = "<ip of host>"
server {
  enabled = true
  bootstrap_expect = 1
}
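To sanity-check that both server agents came up, ask each for its members. consul members is a standard Consul command; older Nomad releases expose nomad server-members, while newer ones use nomad server members.
# both commands should list the single server you just started
consul members -rpc-addr=<machine's ip>:8400
nomad server-members -address=http://<machine's ip>:4646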
SSH into the other machine and set up client agents to connect to the server agents!
# Send the log streams to files and run the processes in the background.
sudo consul agent -data-dir="/tmp/consul" -node=consul-client-A -bind=<machine's ip> -client=<machine's ip> >> /var/log/consul/output.log &
sudo nomad agent -config client.hcl >> /var/log/nomad/output.log &
We need some extra configuration for the Manticore web app. Manticore is designed to run on only a couple of machines. To determine which machines to run on, Manticore expects a compatible client agent to have a meta tag of "manticore" with the value "true". The core and HMI jobs expect the client agent to have a meta tag of "core". Because we are running everything on one machine, this client agent should have both tags.
The client.hcl file:
bind_addr = "<machine's ip>"
log_level = "DEBUG"
data_dir = "/tmp/nomad"

consul {
  address = "<ip of consul client agent, this machine>:8500"
}

client {
  enabled = true
  # port 4647 is Nomad's RPC port, so this points at the Nomad server agent
  servers = ["<ip of the nomad server agent>:4647"]
  meta {
    "manticore" = "true"
    "core" = "true"
  }
}
The Consul client agent needs to join the cluster:
consul join -rpc-addr=<machine's ip>:8400 <ip of consul server agent>
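To confirm the join worked, list the members again; both nodes should now appear, and the Nomad client should report as ready. (node-status is the older Nomad command name; newer releases use nomad node status.)
consul members -rpc-addr=<machine's ip>:8400
nomad node-status -address=http://<ip of the nomad server agent>:4646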
As long as you have Docker installed on the machine, you should be ready to start running jobs as usual.
sudo yum install docker
sudo service docker start
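If you would like to run Docker commands without sudo, a common optional step is to add your user to the docker group; ec2-user here assumes the Amazon Linux default account.
# optional: log out and back in for the group change to take effect
sudo usermod -aG docker ec2-user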
HAProxy makes the routing setup very simple. Install HAProxy like so:
sudo yum install haproxy
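You will likely also want HAProxy to start on boot. On the classic Amazon Linux AMI that is done with chkconfig; systemd-based images would use systemctl enable haproxy instead.
sudo chkconfig haproxy on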
You need Node.js and npm in order to install packages and start Manticore. Here are the instructions for installing NVM, which can easily install different versions of Node.js for you.
# See https://www.digitalocean.com/community/tutorials/how-to-install-node-js-on-a-centos-7-server
curl -sL https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh -o install_nvm.sh
bash install_nvm.sh
source ~/.bash_profile
Now run nvm ls-remote to see the available versions, then run:
nvm install <version string>
nvm use <version string>
nvm alias default <version string>
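For example, with a hypothetical version string of v6.10.0, the three commands look like this:
nvm install v6.10.0        # download and install this Node.js version
nvm use v6.10.0            # switch the current shell to it
nvm alias default v6.10.0  # make it the default in new shells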
This is the last component you need, and you can download it here. consul-template is responsible for using Consul to generate the HAProxy configuration file, and it must run on every machine that has HAProxy and Manticore. The template for HAProxy is already set up for you in /consul-template/haproxy.tmpl. To run consul-template, make an HCL file similar to the following.
consul = "<IP of your local client agent>:8500"

template {
  # source is the location of Manticore's HAProxy template
  source = "/home/ec2-user/manticore/consul-template/haproxy.tmpl"

  # destination is where the template renders. You will likely use the location below
  destination = "/etc/haproxy/haproxy.cfg"

  # the command to execute when the configuration file is updated: reload HAProxy
  command = "sudo service haproxy reload"

  # This is REALLY important. You want to limit how often HAProxy reloads,
  # or else race conditions between processes can occur, leaving processes around forever
  # and breaking everything. A minimum wait of 1 second should be good enough.
  wait = "1s:3s"
}
Run consul-template in the background:
sudo consul-template -config <location of the HCL file you made above> &
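Once consul-template is running, you can check that it actually rendered the file; HAProxy's -c flag validates a configuration file without starting the service.
haproxy -c -f /etc/haproxy/haproxy.cfg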
You may need to give ownership of /etc/haproxy/ and the configuration file within it to the user running consul-template, depending on how you run it.
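For example, if you run consul-template as ec2-user (an assumption; match whoever actually runs it):
# hypothetical user; give whoever runs consul-template write access to the rendered config
sudo chown -R ec2-user /etc/haproxy/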
You need a domain name that Manticore can use to set up custom external URLs. This may involve using Route 53 to get one; this guide will not cover that process.