Ethereum Testnet Validator

Ethereum Hoodi Testnet

This repository is a one-click deployment solution for running an Ethereum testnet node on a GCP VM.
Current implementation: Ansible-based deployment with Nethermind (execution) + Nimbus (consensus).



Current Architecture

Deployment: GCP Compute Engine VM with Ansible automation

GCP Architecture

Prerequisites

Target System Requirements

  • OS: Ubuntu 22.04/24.04 LTS
  • CPU: 8+ cores
  • RAM: 32GB+ (minimum for Testnet)
  • Disk: 1TB+ SSD (persistent storage)
  • Network: Stable connection, ports 22, 30303, 9000 open
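
Before deploying, you can confirm the required ports are reachable from your control machine (an illustrative check, not part of the repository; replace <vm-ip> with your VM address, and note that Nimbus also uses UDP 9000, which nc does not test here):

# TCP reachability check for SSH, execution P2P, and consensus P2P
for port in 22 30303 9000; do
  nc -zv -w 5 <vm-ip> "$port" || echo "port $port unreachable"
done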

Control Machine Requirements

  • Ansible: 2.14+ installed locally
  • SSH: Key-based authentication to target VM
  • Git: For cloning repository
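
A quick sanity check of the control machine before running the scripts (illustrative commands, not part of the repository):

# Verify Ansible and Git versions and key-based SSH access to the VM
ansible --version | head -1     # expect 2.14 or newer
git --version
ssh <YOUR_USERNAME>@<vm-ip> 'echo SSH key auth OK'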

Quick Start

These are the ONLY 3 commands needed from a clean environment:

Command 1: Provision Infrastructure

sudo ./scripts/provision.sh

What it does:

  • Validates system requirements (CPU, RAM, disk, ports)
  • Runs Ansible playbook to deploy full validator stack
  • Sets up disk partitioning and mounting
  • Creates system users (execution, consensus, validator)
  • Configures SSH hardening and firewall
  • Installs Nethermind (execution client)
  • Installs Nimbus (consensus + validator clients)
  • Configures JWT authentication between clients (see the sketch below)
  • Generates validator keys and stores them in Cloud KMS
  • Creates systemd services for all components

Duration: ~10-30 minutes. Output: Infrastructure ready, services configured.
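
For context, JWT authentication between the execution and consensus clients relies on a shared 32-byte hex secret that both clients read at startup. A minimal sketch of what the jwt_secret role accomplishes (the path, permissions, and flags below are illustrative; the playbook's actual values may differ):

# Generate a shared JWT secret for the Engine API
sudo mkdir -p /var/lib/jwtsecret
openssl rand -hex 32 | sudo tee /var/lib/jwtsecret/jwt.hex > /dev/null
sudo chmod 640 /var/lib/jwtsecret/jwt.hex
# Nethermind then points at it with --JsonRpc.JwtSecretFile=/var/lib/jwtsecret/jwt.hex
# Nimbus with --jwt-secret=/var/lib/jwtsecret/jwt.hex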


Command 2: Start Validator

sudo ./scripts/start-validator.sh
# NOTE: provision.sh already starts everything automatically.

What it does:

  • Deploys Ethereum validator stack via Ansible
  • Starts execution.service (Nethermind)
  • Starts consensus.service (Nimbus beacon)
  • Waits for execution layer to be responsive
  • Initializes consensus client with checkpoint sync (see the sketch below)
  • Verifies all services are running

Duration: ~2-3 minutes. Output: All services running, beginning sync to the Hoodi testnet.
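
Checkpoint sync lets the beacon node bootstrap from a recent finalized state instead of replaying the chain from genesis. With Nimbus this is typically done through its trustedNodeSync command against a trusted beacon API; a hedged illustration (the data directory and endpoint are placeholders, not necessarily what the playbook uses):

# One-off bootstrap from a trusted beacon node before consensus.service starts
sudo -u consensus nimbus_beacon_node trustedNodeSync \
  --network:hoodi \
  --data-dir=/var/lib/nimbus \
  --trusted-node-url=https://<checkpoint-sync-endpoint>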


Command 3: Check Health

./scripts/check-health.sh

What it does:

  • Queries execution client RPC (port 8545)
  • Queries consensus client REST API (port 5052)
  • Displays sync status (syncing vs synced)
  • Shows peer connections
  • Shows service status
  • Reports validator status (if keys imported)
  • Displays resource usage (disk, memory)
  • Returns exit code 0 (healthy) or 1 (unhealthy)

Example Output:

============================================================
__________ __  .__
\_   _____//  |_|  |__   ___________   ____  __ __  _____
 |    __)_\   __\  |  \_/__ \_  __ \_/ __ \|  |  \/     \
 |        \|  | |   Y  \  ___/|  | \/  ___/|  |  /  Y Y  /
/_______  /|__| |___|  /\___  >__|    \___  >____/|__|_|  /
        \/           \/     \/            \/            \/
____   ____      .__  .__    .___       __
\   \ /   /____  |  | |__| __| _/____ _/  |_  ___________
 \   Y   /\__  \ |  | |  |/ __ |\__  \   __\/  _ \_  __ \
  \     /  / __ \|  |_|  / /_/ | / __ \|  | (  <_> )  | \/
   \___/  (____  /____/__\____ |(____  /__|  \____/|__|
               \/             \/     \/
============================================================

[INFO] Script: Validator Health Check
[INFO] Wed Oct 22 13:52:17 CEST 2025
[INFO] Server: 34.44.182.36

━━━ System Services ━━━

[✓] Execution Layer (Nethermind) is running
[✓] Consensus Layer (Nimbus Beacon) is running
[✓] Validator Client (Nimbus Validator) is running

━━━ Execution Layer Status ━━━

[✓] Execution client is fully synced
[INFO]   Current block: 1466445
[✓] Connected to 50 execution peers

━━━ Consensus Layer Status ━━━

[⚠] Consensus client is syncing...
[✓] Connected to 47 consensus peers

━━━ Validator Status ━━━
[✓] Found        1 validator key(s)

[⚠] Chain is still syncing
[INFO]   Validator status will be available once chain is synced
[INFO]   Current slot: 1576713
[INFO]   Current epoch: 49272
[INFO]   Validator 1: 0x8a5fb247...b97784
━━━ Resource Usage ━━━

[INFO] Disk usage: 59G/984G (7%)
[INFO] Memory usage: 5.3Gi/31Gi

━━━ Health Summary ━━━

[⚠] Status: SYNCING

  ⏳ Clients are syncing to network
  ℹ️  This is normal for new deployments

[INFO] Estimated sync time:
[INFO]   • Execution client: 2-4 hours (depending on network)
[INFO]   • Consensus client: 15-30 minutes (with checkpoint sync)
[INFO] Re-run this check periodically to monitor progress.

Duration: ~5-10 seconds. Output: Human-readable health status.
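
Because the script returns a proper exit code, it can drive simple automation. For example, an illustrative cron entry (the install path and log location are placeholders):

# Run the health check every 15 minutes and flag failures in syslog
*/15 * * * * /opt/ethereum-node/scripts/check-health.sh >> /var/log/validator-health.log 2>&1 || logger -t validator-health "check FAILED"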

Monitoring and Management

Service Logs

# View execution client logs
ssh <vm-ip> 'sudo journalctl -fu execution'

# View consensus client logs
ssh <vm-ip> 'sudo journalctl -fu consensus'

# View validator client logs
ssh <vm-ip> 'sudo journalctl -fu validator'

Sync Status Queries

# Execution sync status (via RPC)
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' \
  http://<vm-ip>:8545

# Consensus sync status (via REST API)
curl http://<vm-ip>:5052/eth/v1/node/syncing | jq

# Peer counts
curl http://<vm-ip>:5052/eth/v1/node/peer_count | jq

Metrics Endpoints

  • Nethermind: http://<vm-ip>:9090/metrics (Prometheus format)
  • Nimbus Beacon: http://<vm-ip>:8008/metrics (Prometheus format)
  • Nimbus Validator: http://<vm-ip>:8009/metrics (Prometheus format)
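
If you scrape these endpoints from an external Prometheus server, the configuration looks along these lines (a sketch; job names and the VM address are placeholders):

scrape_configs:
  - job_name: nethermind
    static_configs:
      - targets: ['<vm-ip>:9090']
  - job_name: nimbus-beacon
    static_configs:
      - targets: ['<vm-ip>:8008']
  - job_name: nimbus-validator
    static_configs:
      - targets: ['<vm-ip>:8009']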

Configuration

Inventory Configuration

Edit ansible/inventory/hosts.yml to customize:

all:
  children:
    validator:
      hosts:
        eth_validator_vm:
          ansible_host: <YOUR_VM_IP> # Change this
          ansible_user: <YOUR_USERNAME> # Change this

  vars:
    # Testnet selection
    testnet_network: "hoodi" # or "holesky"

    # Fee recipient (CHANGE THIS!)
    fee_recipient_address: "0xYourEthereumAddress"

    # Client versions (auto-download latest)
    nethermind_version: "latest"
    nimbus_version: "latest"

    # System requirements
    min_cpu_cores: 4
    min_memory_gb: 30
    min_disk_gb: 500
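
If you prefer to run the deployment playbook directly instead of through provision.sh, the invocation looks roughly like this (assuming the paths shown in the project structure below):

ansible-playbook -i ansible/inventory/hosts.yml ansible/playbooks/deploy_validator.yml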

Testnet Selection

  • Hoodi (default): Recommended, stable, active validator set
  • Holesky: Alternative testnet, smaller validator set

Change this in ansible/vars/common.yml or in the inventory.

Project Structure

├── scripts/
│   ├── provision.sh                # Command 1: Infrastructure setup
│   ├── start-validator.sh          # Command 2: Start validator
│   └── check-health.sh             # Command 3: Health check
│
├── ansible/                        # Ansible automation
│   ├── playbooks/
│   │   ├── deploy_validator.yml        # Main deployment playbook
│   │   ├── preflight.yml               # System validation
│   │   └── validate.yml                # Post-deployment checks
│   ├── secure/                         # Saved mnemonic and validator files
│   ├── roles/
│   │   ├── disk_setup/                 # Disk partitioning
│   │   ├── system_users/               # User creation
│   │   ├── security_hardening/         # SSH + firewall
│   │   ├── jwt_secret/                 # JWT generation
│   │   ├── nethermind/                 # Execution client
│   │   ├── nimbus/                     # Consensus + validator
│   │   └── validator_orchestration/    # Service startup
│   ├── inventory/
│   │   └── hosts.yml                   # Target hosts config
│   └── vars/
│       ├── common.yml                  # Common node variables
│       ├── holesky.yml                 # Holesky testnet variables
│       └── hoodi.yml                   # Hoodi testnet variables
│
├── terraform/                 # GCP infrastructure (optional)
│   ├── main.tf                # VM provisioning
│   ├── modules/               # Reusable modules
│   └── variables.tf           # Configuration
│
└── docs/                      # Documentation
    ├── CHALLENGE.MD           # Original challenge
    ├── TODO.md                # Implementation checklist
    └── ARCHITECTURE.md        # Design details


Security & Key Management

KMS-Encrypted Validator Keys (Bonus Feature)

Production-grade key encryption using Google Cloud KMS ensures validator private keys are never stored in plaintext:

Key Lifecycle:
1. Generate keys locally (EthStaker deposit-cli)
2. Encrypt with Cloud KMS → ./ansible/scripts/encrypt-and-upload-keys.sh
3. Store encrypted in GCS bucket
4. On validator start: decrypt to tmpfs (memory-only)
5. On validator stop: securely wipe from memory
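
Conceptually, the encrypt-and-upload step (step 2 above) does something along these lines with gcloud and a GCS bucket; the key ring, key, and bucket names are placeholders, not the values the script actually uses:

# Encrypt a keystore with Cloud KMS and upload the ciphertext to GCS
gcloud kms encrypt \
  --location=global \
  --keyring=<validator-keyring> \
  --key=<validator-key> \
  --plaintext-file=<keystore.json> \
  --ciphertext-file=<keystore.json>.enc
gsutil cp <keystore.json>.enc gs://<your-keys-bucket>/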

Security Features:

  • AES-256 Encryption - Industry-standard Cloud KMS encryption
  • Automatic Key Rotation - 30-day rotation policy
  • Memory-Only Storage - Decrypted keys never touch disk (tmpfs)
  • Secure Deletion - Keys wiped with shred on service stop
  • Audit Logging - All KMS operations logged to Cloud Logging
  • IAM Access Control - Minimal permissions (decrypt only)

Quick Start:

# 1. Encrypt local validator keystores
./scripts/encrypt-and-upload-keys.sh

# 2. Deploy validator (keys auto-decrypted on start)
sudo ./scripts/start-validator.sh

# 3. Verify key decryption
ssh <vm-ip> 'sudo journalctl -t validator-keys -f'


License

MIT


Support

For support, please open an issue on GitHub Issues.
