A comprehensive tool for analyzing Bazel build performance bottlenecks with an interactive web dashboard and upload-based analysis. This tool helps identify critical path actions, dependency bottlenecks, and parallelism issues in your Bazel builds.
The Bazel Bottleneck Analyzer provides:
- Upload-Based Analysis: Web interface for uploading build profiles and dependency graphs
- Critical Path Analysis: Identifies the longest dependency chain affecting build time
- Bottleneck Scoring: Multi-factor analysis combining duration, centrality, and blocking impact
- Timeline Analysis: Analyzes worker utilization and parallelism periods with filtering
- Build Comparison: Compare performance between different builds with detailed metrics
- Interactive Dashboard: Modern web-based UI with real-time analysis and filtering
- Go: Version 1.19 or later
- Web Browser: Modern browser with JavaScript support
- Bazel: For generating input files
You need to provide two input files (not included in this repository):
- Bazel Profile: JSON trace file from `bazel build --profile=build.profile`
- Dependency Graph: DOT format file from `bazel query --output=graph`
Archive Support: The tool automatically detects and extracts gzipped profile files (`.gz`).
- Start the server:

  ```bash
  ./startup.sh <bin_ms> [port]
  # Example: ./startup.sh 1000 8080
  ```

- Open the dashboard: http://localhost:8080/dashboard.html
- Upload your files through the web interface:
  - Build profile (`.json` or `.gz` files)
  - Dependency graph (`.out` files)
- View results in real-time as analysis completes
- Prepare input files in the project directory:

  ```bash
  # Example filenames (adjust to match your files)
  build.profile        # Bazel profile JSON
  dependency_graph.out # Bazel dependency graph

  # Or gzipped profile (automatically extracted)
  build.profile.gz     # Gzipped profile
  dependency_graph.out # Dependency graph (uncompressed)
  ```
- Run analysis directly:

  ```bash
  go run ./cmd/analyzer \
    -profile "build.profile" \
    -deps "dependency_graph.out" \
    -bin_ms 1000 \
    -output analysis_results
  ```
- View results: Navigate to http://localhost:8080/dashboard.html
This is a Go module (`github.com/bitrise-io/bazel-bottleneck-analyser`) with standard structure:
```text
├── cmd/                  # Executable entry points
│   ├── analyzer/         # Command-line analyzer
│   └── server/           # Web server
├── internal/             # Private Go packages
│   ├── analyzer/         # Analysis logic
│   │   ├── analyzer.go   # Main analyzer interface
│   │   ├── loader.go     # File loading functions
│   │   ├── processor.go  # Critical path & bottleneck analysis
│   │   ├── timeline.go   # Timeline correlation analysis
│   │   └── output.go     # Result formatting & export
│   ├── server/           # HTTP server implementation
│   │   ├── server.go     # Main server struct
│   │   ├── handlers.go   # HTTP endpoint handlers
│   │   └── analysis.go   # Background analysis execution
│   └── types/            # Shared data structures
│       └── types.go      # All type definitions
├── dashboard.html        # Interactive web dashboard
├── startup.sh            # Server startup script
└── go.mod                # Go module definition
```
```bash
# Build executables
go build -o analyzer ./cmd/analyzer
go build -o server ./cmd/server

# Or run directly
go run ./cmd/server -port 8080 -bin_ms 1000
go run ./cmd/analyzer -profile build.profile -deps deps.out
```
- Visual step-by-step breakdown of the build's critical path
- Duration and execution mode (local/remote) for each action
- Identifies bottlenecks that directly impact total build time
- Side-by-side comparison between different builds showing improvements
- Multi-factor scoring: Duration (40%) + Centrality (30%) + Blocking Impact (30%)
- Critical path bonus: 1.5x multiplier for actions on the critical path
- Interactive exploration: Click rows to see dependent actions
- Filtering: Filter by target name, score threshold, or critical path status
- Comparison analysis: Track bottleneck improvements between builds with impact indicators
```go
// Normalize duration against average
durationScore := float64(action.Duration) / avgDuration

// Centrality based on number of dependents
centralityScore := float64(dependentCount) / 10.0

// Blocking impact combines duration and dependency count
blockingImpact := float64(action.Duration) * float64(dependentCount) / 1000000.0

// Final weighted score
score := durationScore*0.4 + centralityScore*0.3 + blockingImpact*0.3

// Boost critical path actions
if onCriticalPath {
    score *= 1.5
}
```
- Time Slice Visualization: Configurable time granularity (bin_ms parameter)
- Worker Utilization: Tracks local and remote worker usage over time
- Mnemonic Filtering: Filter time periods by action types (SwiftCompile, CppCompile, etc.)
- Target Count Filtering: Minimum number of targets of selected type per time slice
- Action Correlation: View actual actions running during specific time periods
- Side-by-Side Analysis: Compare baseline vs optimized builds
- Critical Path Comparison: Visualize changes in build critical path
- Bottleneck Analysis: Track improvements in top bottlenecks with impact indicators
- Upload Options: Upload raw profiles for backend analysis or pre-generated results
- Smart Duration Correction: Automatically uses correct timing data from multiple sources
- Upload Interface: Drag-and-drop file upload with progress tracking
- Tabbed Interface: Overview, Critical Path, Bottlenecks, Timeline, Comparison
- Real-time Analysis: Live progress updates during analysis
- Advanced Filtering: Search, score thresholds, mnemonic types, critical path status
- Dependency Visualization: Expandable dependency trees with real dependency data
- Responsive Design: Works on desktop and mobile devices
The analyzer generates these files in `analysis_results/` or `comparison_results/`:

- `analysis_results.json` - Complete structured analysis data (used by dashboard)
- `analysis_report.txt` - Human-readable summary report
- `critical_path.txt` - Detailed critical path breakdown
- `bottleneck_scores.txt` - Ranked bottlenecks with explanations
- `bottleneck_scores.csv` - Bottleneck data in CSV format
- `time_slices.csv` - Timeline data for all time periods
- `timeline_correlation.txt` - Timeline analysis summary
```bash
# Start server with custom parameters
./startup.sh <bin_ms> [port]

# Examples:
./startup.sh 1000        # 1-second time granularity, port 8080
./startup.sh 500 8081    # 0.5-second granularity, port 8081
```
- `POST /api/upload-main` - Upload files for main analysis
- `POST /api/upload-comparison` - Upload files for comparison analysis
- `GET /api/status?task=[main|comparison]` - Check analysis progress
- `GET /api/analysis` - Get main analysis results
- `GET /api/comparison-data` - Get comparison analysis results
- `GET /analysis_results/*` - Static file serving for CSV and other outputs
```text
go run ./cmd/analyzer [options]

Options:
  -profile string
        Bazel profile JSON file (required)
  -deps string
        Bazel dependency graph file (required)
  -output string
        Output directory for analysis results (default "analysis_results")
  -bin_ms int
        Time granularity in milliseconds for timeline analysis (default 100)
  -minparallel int
        Minimum parallel workers threshold (default 1)
```
- Start server: `./startup.sh 1000`
- Upload baseline: Upload first build's profile and dependencies
- Upload comparison: Use "Option 1" in Comparison tab to upload second build
- View results: Automatic comparison with critical path and bottleneck analysis
```bash
# Generate results manually for comparison:
go run ./cmd/analyzer -profile baseline.profile -deps deps.out -output baseline_results
go run ./cmd/analyzer -profile optimized.profile -deps deps.out -output optimized_results

# Then upload the generated JSON files in the dashboard
```
- High scores (>50): Priority optimization targets
- Critical path actions: Have 1.5x score multiplier
- High dependent count: Actions blocking many others
- Time Slices: Each row represents a time period of `bin_ms` duration
- Worker Usage: Local/remote worker counts show parallelization efficiency
- Action Correlation: Click periods to see what actions were actually running
- Filtering: Use mnemonic and target count filters to find specific bottleneck patterns
- Longest chain: Sequence of dependent actions determining build time
- Optimization target: Reducing any critical path action reduces total build time
- Comparison: Shows side-by-side critical path changes between builds
- 🚀 Major improvement: >10 second reduction
- ✅ Good improvement: 5-10 second reduction
- → Minor change: <5 second change
- ❌ Regression: 5-10 second increase
- ⚠️ Major regression: >10 second increase
- Time Granularity: Adjust the `bin_ms` parameter for timeline analysis precision
- Dashboard UI: Edit `dashboard.html` for custom styling or features
- Server Configuration: Modify files in `internal/server/` for custom endpoints or processing
- Analysis Parameters: Adjust command-line options for different analysis focuses
- Scoring Weights: Modify the scoring algorithm in `internal/analyzer/processor.go`
```bash
# Start server with 1-second time granularity
./startup.sh 1000

# Open http://localhost:8080/dashboard.html
# Upload your build.profile and dependency_graph.out files
# Analysis runs automatically with progress tracking
```
```bash
# Generate Bazel profile and dependency graph
bazel build //your/target:name --profile=build.profile
bazel query 'deps(//your/target:name)' --output=graph > deps.out

# Manual analysis + server startup
go run ./cmd/analyzer -profile build.profile -deps deps.out
./startup.sh 1000

# Use web interface for real-time comparison
./startup.sh 1000
# Upload baseline build, then use Comparison tab for second build
```
This tool analyzes your specific Bazel build data. Input files (profiles and dependency graphs) are not included in the repository as they are build-specific and potentially contain sensitive information.