Use bazel to build ebpf parts #47810
gh-worker-dd-mergequeue-cf854d[bot] merged 36 commits into main from
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 4cc7a7008a
    def clean_object_files(ctx):
        run_ninja(ctx, task="clean")
Clean Bazel-generated eBPF objects in system-probe clean
clean_object_files now only calls ninja -t clean, but this commit removed eBPF object compilation from the generated ninja graph, so Bazel-copied .o files are left behind in pkg/ebpf/bytecode/build/* (and in-place outputs). This leaves stale bytecode after inv system-probe.clean, which can be reused or packaged accidentally in later builds.
    arch = Arch.local()
    build_dir = get_ebpf_build_dir(arch)
Package Bazel outputs from the selected build arch
save_build_outputs now hardcodes Arch.local() when collecting Bazel-produced eBPF objects. In cross-arch workflows (for example building arm64 artifacts on x86), objects are emitted under pkg/ebpf/bytecode/build/<target-arch>, so this logic can miss freshly built files and archive the wrong architecture’s objects instead.
We don't cross-compile yet, so it's not a big deal. We will come to that later.
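For when cross-compilation does come into scope, a minimal sketch of threading the target arch through, assuming an `Arch` constructor such as `Arch.from_str` is available (the `arch_name` parameter is hypothetical):

```python
def save_build_outputs(ctx, arch_name=None):
    # Fall back to the host arch only when no target arch is requested, so
    # cross-arch builds collect objects from the matching build directory.
    arch = Arch.from_str(arch_name) if arch_name else Arch.local()
    build_dir = get_ebpf_build_dir(arch)
    # ... collect the Bazel-produced .o files from build_dir as before ...
```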
Files inventory check summary
File checks results against ancestor 83102242:
Results for datadog-agent_7.78.0~devel.git.676.6cad557.pipeline.102957034-1_amd64.deb: No change detected
Static quality checks
✅ Please find below the results from static quality gates.
Successful checks
Info: 17 successful checks with minimal change (< 2 KiB)
On-wire sizes (compressed)
Regression Detector
Regression Detector Results
Metrics dashboard
Baseline: eb09c31
Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | docker_containers_cpu | % cpu utilization | -2.68 | [-5.65, +0.28] | 1 | Logs |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | quality_gate_logs | % cpu utilization | +2.53 | [+0.92, +4.14] | 1 | Logs bounds checks dashboard |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | +1.02 | [+0.89, +1.15] | 1 | Logs |
| ➖ | ddot_metrics | memory utilization | +0.46 | [+0.29, +0.64] | 1 | Logs |
| ➖ | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | +0.22 | [+0.16, +0.28] | 1 | Logs |
| ➖ | otlp_ingest_metrics | memory utilization | +0.19 | [+0.03, +0.34] | 1 | Logs |
| ➖ | docker_containers_memory | memory utilization | +0.16 | [+0.09, +0.23] | 1 | Logs |
| ➖ | ddot_metrics_sum_delta | memory utilization | +0.07 | [-0.09, +0.24] | 1 | Logs |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | +0.07 | [-0.31, +0.45] | 1 | Logs |
| ➖ | ddot_logs | memory utilization | +0.07 | [+0.00, +0.13] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulative | memory utilization | +0.06 | [-0.08, +0.21] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | +0.04 | [-0.42, +0.50] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | +0.01 | [-0.06, +0.09] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.01 | [-0.10, +0.11] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | +0.01 | [-0.19, +0.21] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api_v3 | ingress throughput | +0.00 | [-0.19, +0.19] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulativetodelta_exporter | memory utilization | -0.01 | [-0.24, +0.22] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | -0.03 | [-0.46, +0.41] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | -0.07 | [-0.13, -0.02] | 1 | Logs bounds checks dashboard |
| ➖ | quality_gate_idle_all_features | memory utilization | -0.16 | [-0.20, -0.13] | 1 | Logs bounds checks dashboard |
| ➖ | file_tree | memory utilization | -0.23 | [-0.29, -0.18] | 1 | Logs |
| ➖ | otlp_ingest_logs | memory utilization | -0.51 | [-0.61, -0.41] | 1 | Logs |
| ➖ | quality_gate_metrics_logs | memory utilization | -1.72 | [-1.95, -1.49] | 1 | Logs bounds checks dashboard |
| ➖ | docker_containers_cpu | % cpu utilization | -2.68 | [-5.65, +0.28] | 1 | Logs |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | observed_value | links |
|---|---|---|---|---|---|
| ✅ | docker_containers_cpu | simple_check_run | 10/10 | 674 ≥ 26 | |
| ✅ | docker_containers_memory | memory_usage | 10/10 | 272.64MiB ≤ 370MiB | |
| ✅ | docker_containers_memory | simple_check_run | 10/10 | 706 ≥ 26 | |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | 0.19GiB ≤ 1.20GiB | |
| ✅ | file_to_blackhole_0ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | 0.23GiB ≤ 1.20GiB | |
| ✅ | file_to_blackhole_1000ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | 0.19GiB ≤ 1.20GiB | |
| ✅ | file_to_blackhole_100ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | 0.21GiB ≤ 1.20GiB | |
| ✅ | file_to_blackhole_500ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| ✅ | quality_gate_idle | intake_connections | 10/10 | 3 = 3 | bounds checks dashboard |
| ✅ | quality_gate_idle | memory_usage | 10/10 | 174.61MiB ≤ 175MiB | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | intake_connections | 10/10 | 2 ≤ 3 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | 491.23MiB ≤ 550MiB | bounds checks dashboard |
| ✅ | quality_gate_logs | intake_connections | 10/10 | 4 ≤ 6 | bounds checks dashboard |
| ✅ | quality_gate_logs | memory_usage | 10/10 | 202.13MiB ≤ 220MiB | bounds checks dashboard |
| ✅ | quality_gate_logs | missed_bytes | 10/10 | 0B = 0B | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | cpu_usage | 10/10 | 338.54 ≤ 2000 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | intake_connections | 10/10 | 4 ≤ 6 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | memory_usage | 10/10 | 417.04MiB ≤ 475MiB | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | missed_bytes | 10/10 | 0B = 0B | bounds checks dashboard |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true (see the sketch after this list):
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
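A minimal Python sketch of this decision rule, for illustration only (the function and argument names are hypothetical; the thresholds mirror the values above, not the Regression Detector's actual code):

```python
def is_regression(delta_mean_pct, ci_low, ci_high, erratic, tolerance_pct=5.0):
    # 1. The estimated effect size must be at least the tolerance.
    big_enough = abs(delta_mean_pct) >= tolerance_pct
    # 2. The confidence interval must not contain zero.
    ci_excludes_zero = ci_low > 0 or ci_high < 0
    # 3. Experiments marked "erratic" are never flagged.
    return big_enough and ci_excludes_zero and not erratic

# Example: docker_containers_cpu from the table above is not flagged.
print(is_regression(-2.68, -5.65, 0.28, erratic=False))  # False
```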
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check missed_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check missed_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
    # Remove Bazel-copied eBPF .o files that ninja no longer tracks.
    build_root = Path("pkg/ebpf/bytecode/build")
    if build_root.exists():
        shutil.rmtree(build_root)
shutil doesn't seem to be imported in this context - better to import it globally; one of the linters will probably flag it once it's no longer used.
This whole chunk will go away once I am done with the other ninja parts, so I wouldn't bother.
Fine, but there will be an ImportError in some cases.
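A minimal sketch of the suggested fix, assuming the existing run_ninja helper and the same cleanup path (the surrounding task code is elided):

```python
import shutil
from pathlib import Path


def clean_object_files(ctx):
    run_ninja(ctx, task="clean")
    # Remove Bazel-copied eBPF .o files that ninja no longer tracks.
    # With shutil imported at module level, this branch cannot raise
    # ImportError at call time.
    build_root = Path("pkg/ebpf/bytecode/build")
    if build_root.exists():
        shutil.rmtree(build_root)
```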
    for inc in cc_info.compilation_context.includes.to_list():
        dirs.append(inc)
    for inc in cc_info.compilation_context.system_includes.to_list():
        dirs.append(inc)
    for inc in cc_info.compilation_context.quote_includes.to_list():
        dirs.append(inc)
shouldn't we segregate -I/-isystem/-iquote instead of treating everything as -I? In particular, -isystem has a specific precedence and warnings are not treated as errors.
I would say it doesn't matter in this case as we completely ignore host headers and explicitly include what we need.
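For reference, a rough Starlark sketch of what segregating the include kinds could look like; the flag emission is illustrative only, the rule in this PR keeps a single list:

```python
# Keep the three include kinds separate so each gets its matching flag
# instead of folding everything into -I.
cc = cc_info.compilation_context
copts = []
for inc in cc.includes.to_list():
    copts.append("-I" + inc)
for inc in cc.system_includes.to_list():
    copts.extend(["-isystem", inc])
for inc in cc.quote_includes.to_list():
    copts.extend(["-iquote", inc])
```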
    # Install Bazel-managed LLVM BPF tools (needed for stripping and runtime compilation).
    sudo = "" if is_root() else "sudo"
    ctx.run(f"{sudo} mkdir -p /opt/datadog-agent/embedded/bin")
    ctx.run(f"{sudo} bazelisk run -- @llvm_bpf//:install --destdir=/opt/datadog-agent")
Wouldn't sudo bazel imply a different output user root? (output_base, repository_cache, etc.)
I remember having an issue in one of the build jobs without sudo (unable to install this tool into /opt/datadog-agent/...), so there is no way around it as long as we use omnibus.
Or we need to follow the whole chain and identify why we have no permissions for the install location.
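One possible way to avoid the separate output user root, sketched under the assumption that a user-writable staging directory is acceptable (the staging path and copy step are hypothetical, not what this PR does):

```python
def install_llvm_bpf_tools(ctx):
    # Build and stage as the invoking user so Bazel keeps its normal
    # output base and repository cache.
    staging = "/tmp/llvm-bpf-stage"
    ctx.run(f"bazelisk run -- @llvm_bpf//:install --destdir={staging}")
    # Elevate only for the copy into the omnibus-owned prefix.
    sudo = "" if is_root() else "sudo"
    ctx.run(f"{sudo} mkdir -p /opt/datadog-agent/embedded/bin")
    ctx.run(f"{sudo} cp -a {staging}/. /opt/datadog-agent/")
```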
What does this PR do?
This is a second attempt for #47400.

This work is based on previous @chouquette and @rdesgroppes work to migrate the system-probe and ebpf builds to bazel. In this PR we combine both of those and aim to build the ebpf parts only, excluding the system-probe Go code for now. Thus, this is one of a series of PRs to gradually migrate system-probe to bazel.

Changes in particular:
- Bazel rules to build ebpf as well as the toolchain itself. Using rules_foreign_cc to wrap ninja was not possible, as we already have a registered GCC toolchain that cannot be used to build ebpf.
- CO-RE parts depend on vmlinux.h, whereas prebuilt components require a full layout of Linux kernel headers that are currently taken directly from the host. We need to make them downloadable for hermeticity.
- Changes to the tasks/system_probe.py task to replace the ninja build of ebpf with bazel. The rest of the logic remains. This allows us to iterate incrementally and also test changes while fully relying on the current setup and infrastructure.

Describe how you validated your changes
Executed dda inv system-probe.build locally and ensured that it worked. Also triggered the entire pipeline to force-execute all kinds of tests.
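To make the tasks/system_probe.py change described above more concrete, a minimal sketch of swapping the ninja eBPF build for a bazel invocation; the target pattern, output layout, and function name are illustrative assumptions, not the ones used in this PR:

```python
def build_ebpf_with_bazel(ctx, arch):
    # Build the eBPF objects with Bazel (hypothetical target pattern)...
    ctx.run("bazelisk build //pkg/ebpf/...")
    # ...then copy them to where the rest of the tooling already expects
    # the ninja outputs (hypothetical bazel-bin layout).
    build_dir = get_ebpf_build_dir(arch)
    ctx.run(f"mkdir -p {build_dir}")
    ctx.run(f"cp -v bazel-bin/pkg/ebpf/bytecode/build/*.o {build_dir}/")
```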