Automated log upload daemon for Autoware vehicles in China. Monitors log directories, queues files, uploads to AWS S3 China region on schedule, and manages disk space through smart deletion policies.
For complete vehicle deployment instructions, see:
📘 Deployment Guide - Complete step-by-step deployment process
The deployment guide includes:
- Prerequisites and S3 bucket setup
- AWS credentials configuration with S3-specific settings
- Automated validation scripts
- Installation with health checks
- Troubleshooting and maintenance
Quick overview:
# 1. Configure AWS credentials
aws configure --profile china
aws configure set s3.endpoint_url https://s3.cn-north-1.amazonaws.com.cn --profile china
# (+ 2 more S3 settings - see deployment guide)
# 2. Configure and validate
cp config/config.yaml.example config/config.yaml
nano config/config.yaml # Set vehicle_id
./scripts/deployment/verify_deployment.sh
# 3. Install
sudo ./scripts/deployment/install.sh
# 4. Verify
sudo ./scripts/deployment/health_check.sh

# Quick setup with Makefile
make dev-setup # Install deps + create config
make test-fast # Run unit tests (~5s)
make test # Run unit + integration (~30s)
# Or traditional setup
python3 -m venv venv && source venv/bin/activate
pip3 install -e ".[test]"
./scripts/testing/run_tests.sh fast
# Run locally
make run # Using Makefile
# OR
python3 src/main.py --config config/config.yaml --log-level DEBUG

💡 Tip: Use make help to see all available commands!
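As a rough, illustrative sketch only (not the actual src/main.py), the --config and --log-level flags used above imply an entry point along these lines:

# Illustrative sketch of the CLI entry point -- not the real src/main.py.
import argparse
import logging

def parse_args():
    parser = argparse.ArgumentParser(description="TVM upload daemon (sketch)")
    parser.add_argument("--config", required=True, help="Path to config.yaml")
    parser.add_argument("--log-level", default="INFO",
                        choices=["DEBUG", "INFO", "WARNING", "ERROR"])
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    logging.basicConfig(level=getattr(logging, args.log_level))
    logging.info("Would load %s and start monitoring", args.config)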
📦 Dependency Management:
All dependencies are defined in pyproject.toml:
- Production (pip3 install -e .): watchdog, boto3, pyyaml
- Testing (pip3 install -e ".[test]"): + pytest, pytest-cov, pytest-mock
- Development (pip3 install -e ".[dev]"): + black, flake8, pylint, isort, pre-commit
- Automated Detection - File system monitoring with 60s stability check
- Flexible Scheduling - Daily uploads or interval-based (every N hours)
- Smart Disk Management - Three-tier deletion: deferred, age-based, emergency
- Retry Logic - Exponential backoff up to 10 attempts per file (see the sketch after this list)
- Queue Persistence - Survives daemon restarts and system reboots
- Duplicate Prevention - SHA256-based file registry prevents re-uploads
- CloudWatch Integration - Metrics and alarms for monitoring
- Pattern Matching - Wildcard support for selective file uploads
- Recursive Monitoring - Automatically watches subdirectories
- Configuration Validation - SIGHUP signal validates config (restart required to apply changes)
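The Retry Logic bullet can be pictured with a minimal sketch of capped exponential backoff. This is illustrative only, not the daemon's actual upload code, and the function names are hypothetical:

# Illustrative capped exponential backoff -- not the daemon's actual retry code.
import time

def upload_with_retry(upload_fn, max_attempts=10, base_delay=1.0, max_delay=300.0):
    """Call upload_fn(), retrying with exponential backoff on failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return upload_fn()
        except Exception as exc:  # real code would catch specific boto3/network errors
            if attempt == max_attempts:
                raise
            delay = min(base_delay * 2 ** (attempt - 1), max_delay)
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)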
| Document | Description | Audience |
|---|---|---|
| Deployment Guide | START HERE - Complete vehicle deployment | Operators, DevOps |
| Complete Reference | All features, configuration, examples | All Users |
| Testing Guide | Running 426+ automated tests (unit/integration/E2E/manual) | Developers |
| GitHub Actions OIDC | CI/CD setup without stored credentials | DevOps |
┌─────────────────────────────────────────────────────────┐
│ TVM Upload Daemon │
│ │
│ ┌────────────┐ ┌──────────────┐ │
│ │ Config │─────>│ File Monitor │ │
│ │ Manager │ │ (watchdog) │ │
│ └────────────┘ └──────┬───────┘ │
│ │ file stable (60s) │
│ ↓ │
│ ┌──────────────┐ │
│ │ Queue │ │
│ │ Manager │ │
│ └──────┬───────┘ │
│ │ scheduled upload │
│ ↓ │
│ ┌────────────┐ ┌──────────────┐ │
│ │ Disk │<─────│ Upload │ │
│ │ Manager │ │ Manager │ │
│ └────────────┘ │ (boto3) │ │
│ │ └──────┬───────┘ │
│ │ cleanup │ metrics │
│ ↓ ↓ │
│ ┌─────────────────────────────────┐ │
│ │ CloudWatch Manager │ │
│ └─────────────────────────────────┘ │
└─────────────────────────────────────────────────────────┘
Core Components:
- File Monitor - Detects new files with stability check (sketched below)
- Queue Manager - Persistent JSON queue survives restarts
- Upload Manager - S3 uploads with retry and multipart support
- Disk Manager - Smart deletion policies (deferred, age-based, emergency)
- CloudWatch Manager - Metrics publishing and alarms
See Complete Reference for detailed architecture.
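To make the "file stable (60s)" step in the diagram concrete, here is a hedged sketch of one way a stability check can work. The real File Monitor is built on watchdog events and may differ; wait_until_stable is a hypothetical name:

# Illustrative stability check -- the real watchdog-based File Monitor may differ.
import os
import time

def wait_until_stable(path, window=60.0, poll=5.0):
    """Block until the file's size has stayed unchanged for `window` seconds."""
    last_size = -1
    unchanged_since = time.monotonic()
    while True:
        size = os.path.getsize(path)
        if size != last_size:
            last_size = size
            unchanged_since = time.monotonic()
        elif time.monotonic() - unchanged_since >= window:
            return
        time.sleep(poll)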
# Unique identifier for this vehicle
vehicle_id: "vehicle-CN-001"

# Directories to monitor
log_directories:
  - path: ${HOME}/.parcel/log/terminal
    source: terminal
    recursive: true
    pattern: "*.log"

# S3 configuration
s3:
  bucket: t01logs
  region: cn-north-1
  profile: china

# Upload schedule
upload:
  schedule:
    mode: "interval"        # "daily" or "interval"
    interval_hours: 4       # Upload every 4 hours
  operational_hours:
    enabled: true
    start: "09:00"
    end: "16:00"

# Disk management
deletion:
  after_upload:
    enabled: true
    keep_days: 14           # Keep uploaded files for 14 days
  age_based:
    enabled: true
    max_age_days: 7         # Delete all files older than 7 days
  emergency:
    enabled: true
    threshold_percent: 90   # Emergency cleanup at 90% disk

See config/config.yaml.example for full configuration options.
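As a minimal sketch (assuming the field names from the example above), such a file can be read with PyYAML, which is already a listed dependency. The daemon's actual logic lives in src/config_manager.py and is more thorough:

# Illustrative config loading -- the real logic lives in src/config_manager.py.
import os
import yaml  # provided by the pyyaml dependency

def load_config(path="config/config.yaml"):
    with open(path, "r", encoding="utf-8") as fh:
        cfg = yaml.safe_load(fh)
    if not cfg.get("vehicle_id"):
        raise ValueError("vehicle_id must be set")
    for entry in cfg.get("log_directories", []):
        entry["path"] = os.path.expandvars(entry["path"])  # resolve ${HOME}
    return cfg

cfg = load_config()
print(cfg["s3"]["bucket"], cfg["upload"]["schedule"]["mode"])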
Filter which files to upload using glob patterns. This is crucial for preventing infinite upload loops with active log files.
log_directories:
  - path: /var/log
    source: syslog
    pattern: "syslog.[1-9]*"   # Only rotated files
    recursive: false

Supported Wildcards:
- * - Matches any characters (e.g., *.log matches app.log, system.log)
- ? - Matches a single character (e.g., log.? matches log.1, log.2)
- [1-9] - Matches a character range (e.g., syslog.[1-9]* matches syslog.1, syslog.2.gz)
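Whether the daemon uses Python's fnmatch internally is an implementation detail, but fnmatch follows the same glob-style rules, so it is a quick way to sanity-check a pattern before putting it in the config:

# Sanity-check a pattern against file names (fnmatch uses glob-style rules).
from fnmatch import fnmatch

pattern = "syslog.[1-9]*"
for name in ["syslog", "syslog.1", "syslog.2.gz", "auth.log"]:
    print(f"{name:12s} matches={fnmatch(name, pattern)}")
# syslog       matches=False   (the active file is skipped)
# syslog.1     matches=True
# syslog.2.gz  matches=True
# auth.log     matches=False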
Problem: Uploading /var/log/syslog creates an infinite loop:
- Service uploads /var/log/syslog
- Upload writes to syslog: "Uploaded file X"
- Service detects the change, uploads again
- Repeat forever...
Solution: Use a pattern that skips the active file:
log_directories:
  - path: /var/log
    source: syslog
    pattern: "syslog.[1-9]*"   # ✅ Uploads: syslog.1, syslog.2.gz
                               # ❌ Skips: /var/log/syslog (active)

# Upload only .log files
log_directories:
  - path: ~/.parcel/log/terminal
    source: terminal
    pattern: "*.log"

# Upload compressed logs only
log_directories:
  - path: /var/log
    source: archived
    pattern: "*.gz"

# Upload specific date pattern
log_directories:
  - path: ~/logs
    source: daily
    pattern: "2025-11-*.log"

Important: If pattern is omitted, ALL files in the directory are uploaded.
See Configuration Reference for detailed pattern syntax and examples.
420+ automated tests covering all functionality:
# Fast local tests (unit + integration)
make test-fast # Unit tests (~5s)
make test # Unit + integration (~40s)
# E2E tests (requires AWS)
make test-e2e # E2E tests (~7.5 min)
# Manual test scenarios
make test-manual # 17 core manual tests (~2.5 hours)
make test-gap # 5 gap tests (~30 min)
make test-all-manual # All manual tests (~3 hours)
# Or use scripts directly
./scripts/testing/run_tests.sh fast
./scripts/testing/run_tests.sh all --coverage
./scripts/testing/run_manual_tests.sh
./scripts/testing/gap-tests/run_gap_tests.sh

Test Coverage:
- ✅ 249 unit tests (fast, fully mocked, ~5s)
- ✅ 90 integration tests (mocked AWS, ~35s)
- ✅ 60 E2E tests (real AWS S3, ~7.5min)
- ✅ 17 core manual test scenarios (~2.5 hours)
- ✅ 5 gap test scenarios (~30 min)
- ✅ 90%+ code coverage
See Testing Guide for details.
sudo systemctl status tvm-upload
sudo journalctl -u tvm-upload -f # Follow logs

cat /var/lib/tvm-upload/queue.json # Pending uploads
cat /var/lib/tvm-upload/processed_files.json # Upload history

# Edit config
sudo nano /etc/tvm-upload/config.yaml
# Validate configuration (sends SIGHUP)
sudo systemctl reload tvm-upload
# Note: This only validates the config. To apply changes, restart is required:
sudo systemctl restart tvm-upload

./scripts/deployment/health_check.sh # Verify service health
./scripts/deployment/verify_deployment.sh # Pre-install validation
sudo ./scripts/deployment/uninstall.sh # Clean removal

Check operational hours:
grep -A 5 "operational_hours" /etc/tvm-upload/config.yaml
date +"%H:%M" # Current timeImmediate uploads only happen within operational hours. Scheduled uploads always run.
Check queue:
cat /var/lib/tvm-upload/queue.json
# Wait 60s after file creation for stability check

Check service:
sudo systemctl status tvm-upload
sudo journalctl -u tvm-upload -n 50 # Last 50 log lines

# Verify credentials
./scripts/diagnostics/verify_aws_credentials.sh
# Check profile
aws sts get-caller-identity --profile china
aws s3 ls s3://your-bucket --profile china --region cn-north-1

See Troubleshooting Guide for more solutions.
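If plain cat output is hard to read, the state files can be pretty-printed with a few lines of Python. This is schema-agnostic (it just reformats whatever JSON is present), and reading /var/lib/tvm-upload may require sudo:

# Pretty-print the persisted state files, whatever their exact schema.
import json

for path in ("/var/lib/tvm-upload/queue.json",
             "/var/lib/tvm-upload/processed_files.json"):
    try:
        with open(path, "r", encoding="utf-8") as fh:
            data = json.load(fh)
        print(f"--- {path} ---")
        print(json.dumps(data, indent=2, ensure_ascii=False))
    except (OSError, json.JSONDecodeError) as exc:
        print(f"--- {path}: could not read ({exc})")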
tvm-upload/
├── src/ # Application source code
│ ├── main.py # Main coordinator
│ ├── config_manager.py # Configuration
│ ├── file_monitor.py # File detection
│ ├── upload_manager.py # S3 uploads
│ ├── disk_manager.py # Disk management
│ ├── queue_manager.py # Queue persistence
│ └── cloudwatch_manager.py
├── tests/ # Test suite (399+ automated tests)
│ ├── unit/ # 249 fast unit tests
│ ├── integration/ # 90 integration tests
│ └── e2e/ # 60 end-to-end tests
├── scripts/
│ ├── deployment/ # install.sh, uninstall.sh, verify_deployment.sh
│ ├── testing/ # run_tests.sh, manual-tests/ (17 tests)
│ │ └── gap-tests/ # run_gap_tests.sh (5 tests + 6 advanced tests = 11 tests)
│ ├── diagnostics/ # Troubleshooting tools
│ └── lib/ # Shared libraries
├── docs/ # Documentation
├── config/ # Configuration templates
├── systemd/ # systemd service definition
├── .github/ # GitHub templates
│ ├── ISSUE_TEMPLATE/ # Bug report, feature request templates
│ └── pull_request_template.md
├── CHANGELOG.md # Version history and release notes
├── CONTRIBUTING.md # Contribution guidelines
├── LICENSE # Proprietary license
├── Makefile # Development automation (make help)
├── pyproject.toml # Modern Python project config
├── .editorconfig # Code style configuration
├── .flake8 # Flake8 linter configuration
├── .pylintrc # Pylint configuration
└── .pre-commit-config.yaml # Pre-commit hooks for code quality
We welcome contributions! Please see our Contributing Guide for details.
Quick Start:
make dev-setup # Setup development environment
make install-dev-tools # Install linters, formatters & pre-commit hooks (optional)
make test-fast # Run tests
make lint # Check code quality

Note: make lint will skip tools that aren't installed. Run make install-dev-tools to install all linters and enable pre-commit hooks.
Before submitting a PR:
- ✅ Write tests (TDD approach)
- ✅ Run make test - all tests must pass
- ✅ Run make lint - no linting errors
- ✅ Update documentation
- ✅ Follow commit message guidelines
See CONTRIBUTING.md for complete guidelines.
- OS: Linux (Ubuntu 20.04+, Debian 11+)
- Python: 3.10 or higher
- AWS: Credentials for China region (cn-north-1 or cn-northwest-1)
- Disk: Minimum 100GB recommended for log storage
This project is proprietary software developed by Futu-reADS for internal use.
Copyright © 2025 Futu-reADS. All rights reserved.
- Documentation: docs/
- Issues: GitHub Issues
- CI/CD: GitHub Actions
For detailed feature documentation, configuration examples, and troubleshooting, see Complete Reference.