
Commit a73d081

update
1 parent c563c7a commit a73d081

49 files changed (+169, -676 lines)


.gitignore

Lines changed: 1 addition & 4 deletions
@@ -1,4 +1 @@
-.vagrant/
-.vscode/
-inventory
-polkadot_debug.yml
+inventory.ini

README.md

Lines changed: 15 additions & 79 deletions
@@ -4,19 +4,15 @@ This repo is to set up the Polkadot Validation node. This repo is heavily influe
 
 ## Motivation
 
-While the official setup is very comprehensive, it can be overwhelming for "small" validators (myself included) who do not care much about using Terraform on the infrastructure layer. I took the Ansible part of the script and updated it:
-
-1. The setup is more opinionated, thus the script is simpler by avoiding many "if" statements. It is tailored for Ubuntu only, but you should be able to get it working on other Linux distributions with some revisions.
-2. It is more opinionated about node monitoring by recommending Node Exporter, Processor Exporter, and Promtail (for centralized log monitoring). I also have a companion Ansible script (https://github.com/polkachu/server-monitoring) that installs Prometheus, Grafana, and Loki to set up such a centralized monitoring server. This setup will make your life easier if you eventually move from a "small" validator to running a cluster of Polkadot/Kusama nodes.
-3. The setup assumes that you will start from an archived node snapshot provided by https://polkashots.io. It is much simpler and less error-prone than Rust compiling. Highly recommended. In fact, we at Polkachu are currently planning to offer such archived node snapshots to provide redundancy to the community.
-4. Since it has happened twice already, I have included a configuration to help you roll back to version `0.8.30` in the `group_vars/polkadot.yml` file.
+While the official setup is very comprehensive, it can be overwhelming for "small" validators who do not care much about using Terraform on the infrastructure layer.
 
 ## Summary
 
-You run one playbook and set up a Kusama/Polkadot node. Boom!
+You run one playbook to prepare a node with Node Exporter and Promtail, and run one more playbook to launch a Kusama/Polkadot node. Boom!
 
 ```bash
-ansible-playbook -i inventory polkadot_full_setup.yml -e "target=VALIDATOR_TARGET"
+ansible-playbook prepare.yml -e "target=VALIDATOR_TARGET"
+ansible-playbook polkadot.yml -e "target=VALIDATOR_TARGET"
 ```
 
 But before you rush with this easy setup, you probably want to read on so you understand the structure of this Ansible program and all the features it offers.
@@ -28,105 +24,45 @@ First of all, some preparation is in order.
 Make sure that you have a production inventory file with your confidential server info. You will start by copying the sample inventory file (included in the repo). The sample file gives you a good idea on how to define the inventory.
 
 ```bash
-cp inventory.sample inventory
+cp inventory.sample.ini inventory.ini
 ```
 
 Needless to say, you need to update the dummy values in the inventory file. For each Kusama/Polkadot node, you need to update:
 
-1. Server IP: Your server public IP
-2. validator_name: This is the node name that will show up on telemetry monitoring board. It is especially important if you want to participate in the Thousand Validators Program. For us, we use something like `polkachu-kusama-01` and `polkachu-polkadot-02` to keep it unique and organized.
-3. log_name: This is for your internal central monitoring server. We just use something like `kusama1` and `polkadot2` to keep it simple.
-4. telemetryUrl: Most likely you will use `wss://telemetry-backend.w3f.community/submit/`
-5. archive_node (optional): Set this to true if you want to run an archive node. An archive node is not required for a validator. An archive node has the complete chain data and requires much larger storage space. Most validators do not need an archive node.
-6. chain_path (optional): You can set an alternative path to store chain data. This is especially useful when you run an archive node and want to store chain data on a mounted disk. A mounted disk offers more flexibility when you want to wrap disk, increase or decrease disk size, etc.
-7. parity_db (optional): You can specify if you prefer to use the experimental ParityDB option in stead of the default RocksDB.
-
-You will also need to update:
-
-1. ansible_user: The sample file assumes `ansible`, but you might have another username. Make sure that the user has `sudo` privilege.
-2. ansible_port: The sample file assumes `22`. But if you are like me, you will have a different ssh port other than `22` to avoid port sniffing.
-3. ansible_ssh_private_key_file: The sample file assumes `~/.ssh/id_rsa`, but you might have a different key location.
-4. log_monitor: Enter your monitor server IP. It is most likely a private IP address if you use a firewall around your private virtual cloud (VPC).
-
-It is beyond the scope of this guide to help you create a sudo user, alternate ssh port, create a private key, install Ansible on your machine, etc. You can do a quick online search and find the answers. In my experience, Digital Ocean have some quality guides on these topics. Stack Overflow can help you trouble-shoot if you are stuck.
+1. ansible_host: Your server public IP
+1. validator_name: This is the node name that will show up on telemetry monitoring board. It is especially important if you want to participate in the Thousand Validators Program. For us, we use something like `polkachu-kusama-01` and `polkachu-polkadot-02` to keep it unique and organized.
+1. port_prefix: This allows you to install multiple nodes on the same server without port conflict
 
 ## Basic Cluster Structure
 
 The basic cluster structure is:
 
 1. Name each Kusama node as `kusama1`, `kusama2`, etc. Group all Kusama nodes into `kusama` group.
 2. Name each Polkadot node as `polkadot1`, `polkadot2`, etc. Group all Polkadot nodes into `polkadot` group.
-3. Group all nodes into a `validators` group.
 
 The structure allows you to target `vars` to each node, or either Kusama or Polkadot cluster, or the whole cluster.
 
-Make sure that you are familiar with the files in the `group_vars` folder. They follow this clustered structure closely. The files in this folder often need to be changed to stay up to date with the latest releases. I, for one, bump these program versions religiously so I live on the cutting edge!
-
 ## Main Playbook to Set Up a Kusama/Polkadot Validator (Pruned Node)
 
-The key Ansible playbook is `polkadot_full_setup.yml`. It will set up a fresh validator from scratch. Notice that it will restore from a snapshot from https://polkashots.io. It is very possible that you will get an error on the checksum of data to restore in your first attempt because the snapshot is updated regularly. When this happens, update the files accordingly.
-
 The main setup playbook is:
 
 ```bash
-ansible-playbook -i inventory polkadot_full_setup.yml -e "target=VALIDATOR_TARGET"
+ansible-playbook -i inventory polkadot.yml -e "target=VALIDATOR_TARGET"
 ```
 
-Notice that you need to specify a target when you run this playbook (and other playbooks in this repo, as described in the next section). `VALIDATOR_TARGET` is a placeholder that could be a host (`kusama1`, `kusama2`, `polkadot1`, `polkadot2`, etc), a group (`kusama`, `polkadot`), or all validators (`validators`). This is intentionally designed to:
+Notice that you need to specify a target when you run this playbook (and other playbooks in this repo, as described in the next section). `VALIDATOR_TARGET` is a placeholder that could be a host (`kusama1`, `kusama2`, `polkadot1`, `polkadot2`, etc), or a group (`kusama`, `polkadot`). This is intentionally designed to:
 
 1. Prevent you from updating all nodes by mistake
 2. Allow you to experiment a move on a low-risk node before rolling out to the whole cluster
 
-## Main Playbook to Set Up a Kusama/Polkadot Archive Node
-
-The main setup playbook is:
-
-```bash
-ansible-playbook -i inventory polkadot_full_archive_node_setup.yml -e "target=VALIDATOR_TARGET"
-```
-
-Most validators DO NOT need archive node.
-
-## A Pitfall
-
-We introduced pruned node / archive node toggle in the version 0.2.0 release. The database for pruned node and archive node is not compatible. If you have trouble start your `polkadot` service, a simple trouble-shooting method is just to delete the whole polkadot `db` directory.
-
 ## Other Playbooks for Different Purposes
 
 The most commonly used playbooks are:
 
-| Playbook | Description |
-| ------------------------- | ------------------------------------------------------------------------------------------ |
-| `polkadot_full_setup.yml` | Run the initial full setup |
-| `polkadot_prepare.yml ` | Do the prep work, such as firewall, set up a proxy, copy service files, create users, etc. |
-| `polkadot_update.yml` | Update the Polkadot binary and restart the service. You probably need to use it regularly |
-| `polkadot_restore.yml` | Restore the Polkadot database with a screenshot. Only useful for initial setup |
-| `node_exporter.yml` | Update Node Exporter |
-| `process_exporter.yml` | Update Process Exporter |
-| `promtail.yml` | Update Promtail |
-
-The less commonly used playbooks are:
-
-| Playbook | Description |
-| ------------------------------ | ------------------------------------------------------------------------------------- |
-| `polkadot_backup_keystore.yml` | Backup Keystore (Not sure about use case) |
-| `polkadot_clean_logs.yml` | Clean journal logs (Probably useful when the disk is full) |
-| `polkadot_restart.yml` | Restart Polkadot ad hoc (Probably useful when server runs wild for no obvious reason) |
-| `polkadot_stop.yml` | Stop Polkadot ad hoc |
-| `polkadot_rotate_key.yml` | Rotate session keys the easy way without you ssh into the server yourself |
-| `snapshot_script.yml` | If you intend to use the node to take snapshot, then this script is for you |
-
-## Update All Servers
-
-One more thing! Sometimes you want to install all apt patches on all machines. I provide you with a simple playbook. Just run:
-
-```bash
-ansible-playbook -i inventory all_apt_update.yml
-```
+| Playbook | Description |
+| ------------------ | ------------------------------------------------------------------------- |
+| `prepare.yml ` | Do the prep work, such as ufw, node_exporter and promtail |
+| `polkadot.yml` | Install Kusama/Polkadot node |
+| `key_rotation.yml` | Rotate session keys the easy way without you ssh into the server yourself |
 
 That's it, folks!
-
-## Tips/Nominations Accepted
-
-- DOT: `15ym3MDSG4WPABNoEtx2rAzBB1EYWJDWbWYpNg1BwuWRAQcY`
-- KSM: `CsKvJ4fdesaRALc5swo5iknFDpop7YUwKPJHdmUvBsUcMGb`
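
The updated README's playbook table no longer shows example invocations for the remaining playbooks. A minimal sketch, assuming `key_rotation.yml` takes the same `target` variable as `prepare.yml` and `polkadot.yml` (the host name is just an example of the naming convention described above):

```bash
# Hypothetical invocation: assumes key_rotation.yml follows the repo's target convention
ansible-playbook key_rotation.yml -e "target=kusama1"
```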

all_apt_update.yml

Lines changed: 0 additions & 10 deletions
This file was deleted.

ansible.cfg

Lines changed: 1 addition & 1 deletion
@@ -1,2 +1,2 @@
 [defaults]
-inventory = inventory
+inventory = inventory.ini
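
With `inventory = inventory.ini` set in `ansible.cfg`, Ansible picks up the inventory automatically when you run playbooks from the repo root, so the `-i` flag becomes optional. A quick sketch (the alternate inventory path in the second command is purely illustrative):

```bash
# inventory.ini is found via ansible.cfg, no -i needed
ansible-playbook prepare.yml -e "target=polkadot"

# an explicit -i still overrides the default if you keep a separate inventory elsewhere
ansible-playbook -i staging.ini prepare.yml -e "target=polkadot"
```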

group_vars/all.yml

Lines changed: 4 additions & 17 deletions
@@ -1,17 +1,4 @@
-# Node Exporter
-node_exporter_enabled: true
-node_exporter_version: '1.1.2'
-node_exporter_checksum: '8c1f6a317457a658e0ae68ad710f6b4098db2cad10204649b51e3c043aa3e70d'
-
-# Process Exporter
-process_exporter_enabled: true
-process_exporter_version: '0.7.5'
-process_exporter_checksum: '27f133596205654a67b4a3e3af11db640f7d4609a457f48c155901835bd349c6'
-
-# Promtail
-promtail_version: 2.2.1
-promtail_checksum: 40d8d414b44baa78c5010cb7575d74eea035b6b00adb78e9676a045d6730a16f
-
-# Digital Ocean Space for Snapshots (You can ignore if you do not plan to take snapshot from the node)
-snapshot_endpoint: 'https://fra1.digitaloceanspaces.com'
-snapshot_space: 'polkachu'
+---
+node_exporter_version: '1.5.0'
+promtail_version: '2.7.0'
+polkadot_version: '0.9.36'
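
`group_vars/all.yml` now pins component versions for the whole cluster. Because group-specific files take precedence over `group_vars/all.yml` in Ansible's variable resolution, any of these pins could be overridden per group; a hypothetical sketch (the version value is only an example, not taken from the repo):

```yaml
# group_vars/kusama.yml (hypothetical addition)
# Overrides the cluster-wide pin from group_vars/all.yml for the kusama group only
polkadot_version: '0.9.37'
```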

group_vars/kusama.yml

Lines changed: 0 additions & 1 deletion
@@ -1,4 +1,3 @@
 ---
 polkadot_network_id: ksmcc3
 chain: kusama
-polkadot_db_snapshot_url: 'https://substrate-snapshots.polkachu.xyz/kusama/kusama_12160135.tar.lz4'

group_vars/polkadot.yml

Lines changed: 0 additions & 1 deletion
@@ -1,4 +1,3 @@
 ---
 polkadot_network_id: polkadot
 chain: polkadot
-polkadot_db_snapshot_url: 'https://substrate-snapshots.polkachu.xyz/polkadot_paritydb/polkadot_9494121.tar.lz4'
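
With the snapshot URLs gone, `chain` and `polkadot_network_id` are the network-specific variables left in these group files. The service template that consumes them is not part of this diff, but `chain` plausibly feeds the binary's `--chain` flag; an illustrative sketch only, not the repo's actual template:

```bash
# Illustrative launch command; flag values mirror the group_vars, node name is hypothetical
polkadot --chain polkadot --name polkadot01 --validator
```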

group_vars/validators.yml

Lines changed: 0 additions & 3 deletions
This file was deleted.

inventory.sample

Lines changed: 0 additions & 25 deletions
This file was deleted.

inventory.sample.ini

Lines changed: 17 additions & 0 deletions
@@ -0,0 +1,17 @@
+[kusama]
+kusama01 ansible_host=10.0.0.1 validator_name=kusama01 port_prefix="100"
+kusama02 ansible_host=10.0.0.1 validator_name=kusama01 port_prefix="101"
+
+[polkadot]
+polkadot01 ansible_host=10.0.0.2 validator_name=polkadot01 port_prefix="102"
+polkadot02 ansible_host=10.0.0.2 validator_name=polkadot01 port_prefix="103"
+
+[all:vars]
+ansible_user=ubuntu
+ansible_port=22
+ansible_ssh_private_key_file="~/.ssh/id_rsa"
+log_monitor='YOUR_MONITOR_SERVER'
+telemetryUrl=wss://telemetry-backend.w3f.community/submit/
+user_dir='/home/{{ ansible_user }}'
+base_path='{{ user_dir}}/.{{ inventory_hostname }}'
+log_name='{{ inventory_hostname }}'
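
Given this sample inventory, the `target` variable described in the README maps directly onto these host and group names, for example:

```bash
# run against a single node defined in inventory.ini
ansible-playbook polkadot.yml -e "target=kusama01"

# or roll out to every host in the [polkadot] group
ansible-playbook polkadot.yml -e "target=polkadot"
```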
