Add separators for large numbers #35100
base: main
Conversation
Looks good; just calling out the TestSpan() removal to make sure it was intentional, and to understand why.
@@ -393,26 +393,3 @@ func GetTestSpan() *pb.Span {
 	traceutil.ComputeTopLevel(trace)
 	return trace[0]
 }
-
-// TestSpan returns a fix span with hardcoded info, useful for reproducible tests
-func TestSpan() *pb.Span {
Any reason to remove this? Unused?
Ah, I'll update the description. Yeah, while modifying this code I noticed that this function was unused, so I removed it.
Test changes on VM
Use this command from test-infra-definitions to manually test this PR's changes on a VM:
dda inv aws.create-vm --pipeline-id=58751173 --os-family=ubuntu
Note: This applies to commit 7fc73a0
Uncompressed package size comparison
Comparison with ancestor
Diff per package
Decision: ✅ Passed

Static quality checks ✅
Please find below the results from static quality gates.
Successful checks
Info
Regression Detector Results
Metrics dashboard
Baseline: 0bf3ea9
Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | +1.41 | [+0.57, +2.25] | 1 | Logs |
| ➖ | file_to_blackhole_300ms_latency | egress throughput | +0.03 | [-0.60, +0.67] | 1 | Logs |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | +0.02 | [-0.75, +0.80] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | +0.02 | [-0.63, +0.67] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.00 | [-0.01, +0.01] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | -0.00 | [-0.29, +0.29] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency_http2 | egress throughput | -0.04 | [-0.83, +0.74] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | -0.05 | [-0.90, +0.80] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency_http1 | egress throughput | -0.05 | [-0.96, +0.86] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency_linear_load | egress throughput | -0.12 | [-0.59, +0.35] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | -0.28 | [-0.35, -0.21] | 1 | Logs bounds checks dashboard |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | -0.30 | [-1.08, +0.48] | 1 | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | -0.38 | [-0.44, -0.32] | 1 | Logs |
| ➖ | quality_gate_idle_all_features | memory utilization | -0.58 | [-0.64, -0.52] | 1 | Logs bounds checks dashboard |
| ➖ | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | -0.70 | [-0.75, -0.66] | 1 | Logs |
| ➖ | quality_gate_logs | % cpu utilization | -1.04 | [-3.86, +1.77] | 1 | Logs |
| ➖ | file_tree | memory utilization | -3.05 | [-3.24, -2.87] | 1 | Logs |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_300ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| ✅ | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | intake_connections | 10/10 | |
| ✅ | quality_gate_logs | lost_bytes | 10/10 | |
| ✅ | quality_gate_logs | memory_usage | 10/10 | |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide that a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true (a minimal code sketch follows the list):
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
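As a reading aid, here is a minimal sketch of that decision logic in Go. The struct and function names are hypothetical, not the Regression Detector's actual implementation:

```go
package main

import "fmt"

// ExperimentResult models one row of the results table above.
// These types are illustrative, not the detector's real ones.
type ExperimentResult struct {
	Name     string
	DeltaPct float64 // Δ mean %
	CILow    float64 // lower bound of the 90% CI on Δ mean %
	CIHigh   float64 // upper bound of the 90% CI on Δ mean %
	Erratic  bool    // whether the experiment's config marks it "erratic"
}

// isRegression applies the three criteria listed above: the 5% effect size
// tolerance, a confidence interval that excludes zero, and a configuration
// not marked erratic.
func isRegression(r ExperimentResult) bool {
	bigEnough := r.DeltaPct >= 5.0 || r.DeltaPct <= -5.0
	ciExcludesZero := r.CILow > 0 || r.CIHigh < 0
	return bigEnough && ciExcludesZero && !r.Erratic
}

func main() {
	// uds_dogstatsd_to_api_cpu from the table: +1.41 [+0.57, +2.25].
	// The CI excludes zero, but |Δ mean %| is below the 5% tolerance,
	// so it is not flagged as a regression.
	r := ExperimentResult{
		Name:     "uds_dogstatsd_to_api_cpu",
		DeltaPct: 1.41,
		CILow:    0.57,
		CIHigh:   2.25,
	}
	fmt.Println(isRegression(r)) // false
}
```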
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
What does this PR do?
Add separators to a variety of large int literals that we own. While making the change I happened to run into some dead code in testutils, and so removed TestSpan().
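For context, Go has accepted underscores as digit separators in numeric literals since Go 1.13, and the separator is purely cosmetic. A minimal sketch of the kind of change (the constant names and values here are illustrative, not taken from the diff):

```go
package main

import "fmt"

// Hypothetical constants illustrating the change: underscores group
// digits for readability without changing the value.
const (
	withoutSeparators = 5000000
	withSeparators    = 5_000_000
)

func main() {
	// Both literals denote the same value.
	fmt.Println(withoutSeparators == withSeparators) // true
}
```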
Motivation
Separators improve human readability. We have previously missed changes to literals like these, which resulted in breaking changes; with improved readability, human reviewers are more likely to notice when a value changes, especially for large numbers like 1000000 (e.g. can you tell at a glance what that number is? I know I certainly can't. Is it the same as 100000?).

Describe how you validated your changes
I added a custom Datadog Code Security rule that looks for these literals and suggests fixes. In making these changes I ensured that the values stayed the same; unit tests and visual verification should be sufficient here.
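For illustration only, a hypothetical sketch of the kind of pattern such a rule might use; the actual Code Security rule is not shown in this PR:

```go
package main

import (
	"fmt"
	"regexp"
)

// longLiteral is a rough approximation of what such a rule might match:
// integer literals of five or more digits with no separators.
var longLiteral = regexp.MustCompile(`\b\d{5,}\b`)

func main() {
	src := `const maxPayloadBytes = 5000000
const bufSize = 1024`
	for _, m := range longLiteral.FindAllString(src, -1) {
		fmt.Printf("consider adding digit separators to %s\n", m)
	}
	// Output: consider adding digit separators to 5000000
}
```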
Possible Drawbacks / Trade-offs
Maybe someone out there doesn't like these separators?
Additional Notes