This repository is a one-click deployment solution for running an Ethereum testnet validator node on a GCP VM.
Current Implementation: Ansible-based deployment with Nethermind (execution) + Nimbus (consensus + validator)
- Current Architecture
- Prerequisites
- Quick Start
- Monitoring and Management
- Configuration
- Project Structure
- Documentation
- Security & Key Management
- License
- Acknowledgments
- Support
Deployment: GCP Compute Engine VM with Ansible automation
- OS: Ubuntu 22.04/24.04 LTS
- CPU: 8+ cores
- RAM: 32GB+ (minimum for Testnet)
- Disk: 1TB+ SSD (persistent storage)
- Network: Stable connection, ports 22, 30303, 9000 open
- Ansible: 2.14+ installed locally
- SSH: Key-based authentication to target VM
- Git: For cloning repository
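Before running anything, you can sanity-check the local prerequisites with a few shell commands; a minimal sketch (`<YOUR_VM_IP>` is a placeholder, and provision.sh runs its own, more thorough validation):

```bash
#!/usr/bin/env bash
# Illustrative pre-flight check only; not part of the repository's scripts.
set -euo pipefail

VM_IP="<YOUR_VM_IP>"   # placeholder: target GCP VM address

# Ansible 2.14+ installed locally
ansible --version | head -n 1

# Key-based SSH authentication to the target VM
ssh -o BatchMode=yes "${VM_IP}" 'echo "SSH OK on $(hostname)"'

# Required ports reachable (22 SSH, 30303 execution P2P, 9000 consensus P2P)
for port in 22 30303 9000; do
  nc -z -w 3 "${VM_IP}" "${port}" && echo "port ${port} open" || echo "port ${port} closed"
done
```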
These are the ONLY 3 commands needed from a clean environment:
sudo ./scripts/provision.sh

What it does:
- Validates system requirements (CPU, RAM, disk, ports)
- Runs Ansible playbook to deploy full validator stack
- Sets up disk partitioning and mounting
- Creates system users (execution, consensus, validator)
- Configures SSH hardening and firewall
- Installs Nethermind (execution client)
- Installs Nimbus (consensus + validator clients)
- Configures JWT authentication between clients
- Generates validator keys and stores them encrypted with KMS
- Creates systemd services for all components
Duration: ~10-30 minutes. Output: Infrastructure ready, services configured.
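Under the hood this is plain Ansible; a minimal sketch of the kind of invocation provision.sh wraps (paths match the project structure below, but the exact flags used by the script are an assumption):

```bash
# Roughly equivalent manual run (illustrative; prefer the script, which also
# validates system requirements first).
ansible-playbook \
  -i ansible/inventory/hosts.yml \
  ansible/playbooks/deploy_validator.yml
```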
sudo ./scripts/start-validator.sh
# NOTE: *provision.sh* already starts everything automatically.

What it does:
- Deploys Ethereum validator stack via Ansible
- Starts execution.service (Nethermind)
- Starts consensus.service (Nimbus beacon)
- Waits for execution layer to be responsive
- Initializes consensus client with checkpoint sync
- Verifies all services are running
Duration: ~2-3 minutes. Output: All services running, beginning sync to Hoodi testnet.
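The "waits for execution layer to be responsive" step can be reproduced manually with a simple RPC poll; a sketch using the RPC port documented below (the retry interval is arbitrary):

```bash
# Poll the execution client's JSON-RPC endpoint until it answers eth_syncing.
until curl -sf -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' \
    http://<vm-ip>:8545 > /dev/null; do
  echo "execution layer not ready yet, retrying in 10s..."
  sleep 10
done
echo "execution layer is responding"
```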
./scripts/check-health.sh

What it does:
- Queries execution client RPC (port 8545)
- Queries consensus client REST API (port 5052)
- Displays sync status (syncing vs synced)
- Shows peer connections
- Shows service status
- Reports validator status (if keys imported)
- Displays resource usage (disk, memory)
- Returns exit code 0 (healthy) or 1 (unhealthy)
Example Output:
============================================================
(ASCII art banner: "Ethereum Validator")
============================================================
[INFO] Script: Validator Health Check
[INFO] Wed Oct 22 13:52:17 CEST 2025
[INFO] Server: 34.44.182.36
─── System Services ───
[✓] Execution Layer (Nethermind) is running
[✓] Consensus Layer (Nimbus Beacon) is running
[✓] Validator Client (Nimbus Validator) is running
─── Execution Layer Status ───
[✓] Execution client is fully synced
[INFO] Current block: 1466445
[✓] Connected to 50 execution peers
─── Consensus Layer Status ───
[⚠] Consensus client is syncing...
[✓] Connected to 47 consensus peers
─── Validator Status ───
[✓] Found 1 validator key(s)
[⚠] Chain is still syncing
[INFO] Validator status will be available once chain is synced
[INFO] Current slot: 1576713
[INFO] Current epoch: 49272
[INFO] Validator 1: 0x8a5fb247...b97784
─── Resource Usage ───
[INFO] Disk usage: 59G/984G (7%)
[INFO] Memory usage: 5.3Gi/31Gi
─── Health Summary ───
[⚠] Status: SYNCING
⏳ Clients are syncing to network
ℹ️ This is normal for new deployments
[INFO] Estimated sync time:
[INFO]   • Execution client: 2-4 hours (depending on network)
[INFO]   • Consensus client: 15-30 minutes (with checkpoint sync)
[INFO] Re-run this check periodically to monitor progress.
Duration: ~5-10 seconds. Output: Human-readable health status.
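Because the script returns 0 (healthy) or 1 (unhealthy), it is easy to wire into cron or external monitoring; a minimal usage sketch (the alert action is a placeholder):

```bash
# React to the health check's exit code (0 = healthy, 1 = unhealthy).
if ./scripts/check-health.sh; then
  echo "validator healthy"
else
  echo "validator unhealthy -- investigate" >&2
  # placeholder: send an alert / page here
fi
```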
# View execution client logs
ssh <vm-ip> 'sudo journalctl -fu execution'
# View consensus client logs
ssh <vm-ip> 'sudo journalctl -fu consensus'
# View validator client logs
ssh <vm-ip> 'sudo journalctl -fu validator'

# Execution sync status (via RPC)
curl -X POST -H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' \
http://<vm-ip>:8545
# Consensus sync status (via REST API)
curl http://<vm-ip>:5052/eth/v1/node/syncing | jq
# Peer counts
curl http://<vm-ip>:5052/eth/v1/node/peer_count | jq

- Nethermind: http://<vm-ip>:9090/metrics (Prometheus format)
- Nimbus Beacon: http://<vm-ip>:8008/metrics (Prometheus format)
- Nimbus Validator: http://<vm-ip>:8009/metrics (Prometheus format)
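Before pointing Prometheus at these endpoints, a quick probe confirms they respond (ports as listed above):

```bash
# Check each metrics endpoint and show the first few exposed series.
for port in 9090 8008 8009; do
  echo "--- port ${port} ---"
  curl -sf "http://<vm-ip>:${port}/metrics" | head -n 5 || echo "no response on port ${port}"
done
```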
Edit ansible/inventory/hosts.yml to customize:
all:
  children:
    validator:
      hosts:
        eth_validator_vm:
          ansible_host: <YOUR_VM_IP>      # Change this
          ansible_user: <YOUR_USERNAME>   # Change this
      vars:
        # Testnet selection
        testnet_network: "hoodi"          # or "holesky"
        # Fee recipient (CHANGE THIS!)
        fee_recipient_address: "0xYourEthereumAddress"
        # Client versions (auto-download latest)
        nethermind_version: "latest"
        nimbus_version: "latest"
        # System requirements
        min_cpu_cores: 4
        min_memory_gb: 30
        min_disk_gb: 500

- Hoodi (default): Recommended, stable, active validator set
- Holesky: Alternative testnet, smaller validator set
Change the network in ansible/vars/common.yml or in the inventory vars.
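If you prefer not to edit files, the same variable can be overridden per run; a sketch, assuming `testnet_network` is honored when passed as an extra var:

```bash
# Deploy against Holesky for a single run without editing the inventory.
ansible-playbook \
  -i ansible/inventory/hosts.yml \
  -e testnet_network=holesky \
  ansible/playbooks/deploy_validator.yml
```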
├── scripts/
│   ├── provision.sh                  # Command 1: Infrastructure setup
│   ├── start-validator.sh            # Command 2: Start validator
│   └── check-health.sh               # Command 3: Health check
│
├── ansible/                          # Ansible automation
│   ├── playbooks/
│   │   ├── deploy_validator.yml      # Main deployment playbook
│   │   ├── preflight.yml             # System validation
│   │   └── validate.yml              # Post-deployment checks
│   ├── secure/                       # Save mnemonic and validator files
│   ├── roles/
│   │   ├── disk_setup/               # Disk partitioning
│   │   ├── system_users/             # User creation
│   │   ├── security_hardening/       # SSH + firewall
│   │   ├── jwt_secret/               # JWT generation
│   │   ├── nethermind/               # Execution client
│   │   ├── nimbus/                   # Consensus + validator
│   │   └── validator_orchestration/  # Service startup
│   ├── inventory/
│   │   └── hosts.yml                 # Target hosts config
│   └── vars/
│       ├── common.yml                # Common node variables
│       ├── holesky.yml               # Testnet holesky variables
│       └── hoodi.yml                 # Testnet hoodi variables
│
├── terraform/                        # GCP infrastructure (optional)
│   ├── main.tf                       # VM provisioning
│   ├── modules/                      # Reusable modules
│   └── variables.tf                  # Configuration
│
└── docs/                             # Documentation
    ├── CHALLENGE.MD                  # Original challenge
    ├── TODO.md                       # Implementation checklist
    └── ARCHITECTURE.md               # Design details
- Architecture - System architecture and design decisions
- KMS Key Management - Validator key encryption and management with Cloud KMS
- Ansible Guide - Detailed Ansible automation guide
- Hoodi Validator Registration - Step-by-step Hoodi validator registration
- Runbook - Operational runbook
Production-grade key encryption using Google Cloud KMS ensures validator private keys are never stored in plaintext:
Key Lifecycle:
1. Generate keys locally (EthStaker deposit-cli)
2. Encrypt with Cloud KMS → ./ansible/scripts/encrypt-and-upload-keys.sh
3. Store encrypted in GCS bucket
4. On validator start: decrypt to tmpfs (memory-only)
5. On validator stop: securely wipe from memory
Security Features:
- AES-256 Encryption - Industry-standard Cloud KMS encryption
- Automatic Key Rotation - 30-day rotation policy
- Memory-Only Storage - Decrypted keys never touch disk (tmpfs)
- Secure Deletion - Keys wiped with shred on service stop
- Audit Logging - All KMS operations logged to Cloud Logging
- IAM Access Control - Minimal permissions (decrypt only)
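A rough sketch of the decrypt-to-tmpfs flow described above (the bucket, key ring, key name, region, and paths are placeholders, not the repository's actual values):

```bash
# Illustrative only: fetch the encrypted keystore, decrypt it with Cloud KMS
# into a memory-only tmpfs mount, and wipe it again on service stop.
KEY_DIR=/run/validator-keys                     # placeholder tmpfs path
sudo mkdir -p "${KEY_DIR}"
sudo mount -t tmpfs -o size=16m,mode=0700 tmpfs "${KEY_DIR}"

gsutil cp gs://<your-bucket>/keystore.json.enc /tmp/keystore.json.enc
sudo gcloud kms decrypt \
  --location=<region> --keyring=<keyring> --key=<key> \
  --ciphertext-file=/tmp/keystore.json.enc \
  --plaintext-file="${KEY_DIR}/keystore.json"

# ... the validator client reads ${KEY_DIR}/keystore.json while it runs ...

# On stop: securely wipe the plaintext key and release the tmpfs.
sudo shred -u "${KEY_DIR}/keystore.json"
sudo umount "${KEY_DIR}"
```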
Quick Start:
# 1. Encrypt local validator keystores
./scripts/encrypt-and-upload-keys.sh
# 2. Deploy validator (keys auto-decrypted on start)
sudo ./scripts/start-validator.sh
# 3. Verify key decryption
ssh <vm-ip> 'sudo journalctl -t validator-keys -f'

Documentation:
- Ethereum Foundation
- Hoodi Ethereum Explorer
- Validator Checklist
- Becoming a Hoodi Validator
- Hoodi Faucet
- Validator's List
For support, please open an issue via GitHub Issues.

