
feat: Allow cancelling of grouping operations which are CPU bound #16196

Open · wants to merge 77 commits into main

Conversation

@zhuqi-lucas (Contributor) commented May 27, 2025

Which issue does this PR close?

Rationale for this change

Some AggregateExecs can always make progress and thus they may never notice that the plan was canceled.

🧵 Yield-based Cooperative Scheduling in AggregateExec

This PR introduces a lightweight yielding mechanism to AggregateExec to improve responsiveness to external signals (like Ctrl-C) without adding any measurable performance overhead.

🧭 Motivation

For aggregation queries without any GROUP BY, such as

SELECT SUM(value) FROM range(1, 50000000000);

and similarly for queries like

SELECT DISTINCT a, b, c, d, ... 

where the computational work for each input batch is substantial, the operator can loop for a long time without returning control to the runtime, so a cancellation signal is never observed.
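
Concretely, the mechanism boils down to a stream adapter that counts the batches it produces and periodically wakes itself and returns Pending, handing control back to the Tokio executor where cancellation is observed. Below is a minimal sketch of the pattern over a generic stream; the names YieldEvery and YIELD_PERIOD are illustrative, not the exact types in this PR:

use std::pin::Pin;
use std::task::{Context, Poll};

use futures::Stream;

// Illustrative constant: how many items to produce before forcing a yield.
const YIELD_PERIOD: usize = 64;

struct YieldEvery<S> {
    inner: S,
    since_yield: usize,
}

impl<S: Stream + Unpin> Stream for YieldEvery<S> {
    type Item = S::Item;

    fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<S::Item>> {
        if self.since_yield >= YIELD_PERIOD {
            self.since_yield = 0;
            // Wake ourselves before returning Pending: the task gets
            // rescheduled, but the executor regains control in between,
            // so a dropped (cancelled) plan stops being polled.
            cx.waker().wake_by_ref();
            return Poll::Pending;
        }
        self.since_yield += 1;
        Pin::new(&mut self.inner).poll_next(cx)
    }
}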

Are these changes tested?

Yes

Before this PR:

SET datafusion.execution.target_partitions = 1;
SELECT SUM(value) FROM range(1, 50000000000);

The query always runs to completion; we can't Ctrl-C to stop it.

Are there any user-facing changes?

Some CPU-heavy aggregation plans now cancel much sooner.

@github-actions bot added the physical-plan (Changes to the physical-plan crate) label May 27, 2025
@zhuqi-lucas (Contributor, Author):

The PR is limited to solving aggregation with no grouping in the stream; should we extend it to more cases if it doesn't affect performance?

@zhuqi-lucas changed the title from "feat: support inability to yeild cpu for loop when it's not using Tok…" to "feat: support inability to yeild for loop when it's not using Tok…" May 27, 2025
@zhuqi-lucas marked this pull request as draft May 27, 2025 15:10
@alamb (Contributor) commented May 27, 2025

Thanks @zhuqi-lucas -- I'll try running the cancellation benchmark from @carols10cents

@alamb (Contributor) commented May 27, 2025

🤖 ./gh_compare_branch.sh Benchmark Script Running
Linux aal-dev 6.11.0-1013-gcp #13~24.04.1-Ubuntu SMP Wed Apr 2 16:34:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
Comparing issue_16193 (b18aeaa) to 2d12bf6 diff
Benchmarks: cancellation
Results will be posted here when complete

@alamb (Contributor) commented May 27, 2025

🤖: Benchmark completed

Details

Comparing HEAD and issue_16193
--------------------
Benchmark cancellation.json
--------------------
┏━━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┓
┃ Query        ┃    HEAD ┃ issue_16193 ┃       Change ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━┩
│ QCancellati… │ 31.29ms │     33.69ms │ 1.08x slower │
└──────────────┴─────────┴─────────────┴──────────────┘
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━┓
┃ Benchmark Summary          ┃         ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━┩
│ Total Time (HEAD)          │ 31.29ms │
│ Total Time (issue_16193)   │ 33.69ms │
│ Average Time (HEAD)        │ 31.29ms │
│ Average Time (issue_16193) │ 33.69ms │
│ Queries Faster             │       0 │
│ Queries Slower             │       1 │
│ Queries with No Change     │       0 │
└────────────────────────────┴─────────┘

@zhuqi-lucas (Contributor, Author) commented May 28, 2025

Thank you @alamb for the review and benchmark.

I am wondering whether it will hurt DataFusion's own runtime performance, because we add extra branching (if logic) in the aggregate and other core execs.

If we only want to support cancellation in datafusion-cli, maybe we can add the wrapper logic to datafusion-cli itself.

But from another related issue, it seems some customers use gRPC to drop the stream, so this is not limited to datafusion-cli.

#14036 (comment)

Maybe the ideal solution is:

  1. We have a public drop-stream interface which customers (gRPC, etc.) can call to cancel. I still can't find a way to do this without changing the core exec logic itself...
  2. We also have a wrapper for datafusion-cli, so that when we hit Ctrl-C, we can cancel the execution.

@zhuqi-lucas (Contributor, Author):

I polished the code so it only affects the no-grouping aggregate. Maybe we can compare ClickBench results, so we can be confident to merge if it does not affect aggregate performance.

@zhuqi-lucas marked this pull request as ready for review May 28, 2025 10:31
@zhuqi-lucas changed the title from "feat: support inability to yeild for loop when it's not using Tok…" to "feat: support inability to yield for loop when it's not using Tok…" May 28, 2025
@zhuqi-lucas (Contributor, Author) commented May 28, 2025

Updated performance numbers for the current PR:

SET datafusion.execution.target_partitions = 1;

SELECT SUM(value)
FROM range(1,50000000000) AS t;
+----------------------+
| sum(t.value)         |
+----------------------+
| -4378597037249509888 |
+----------------------+
1 row(s) fetched.
Elapsed 22.315 seconds.

The main branch:

SET datafusion.execution.target_partitions = 1;


SELECT SUM(value)
FROM range(1,50000000000) AS t;
+----------------------+
| sum(t.value)         |
+----------------------+
| -4378597037249509888 |
+----------------------+
1 row(s) fetched.
Elapsed 22.567 seconds.

No performance regression from the above testing.

@@ -77,6 +77,11 @@ impl AggregateStream {
        let baseline_metrics = BaselineMetrics::new(&agg.metrics, partition);
        let input = agg.input.execute(partition, Arc::clone(&context))?;

        // Only wrap no-grouping aggregates in our YieldStream
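
The diff is truncated here at the review anchor; the wrap that follows presumably looks roughly like the sketch below (YieldStream is the wrapper this PR introduces, but the exact code may differ):

        // Sketch: AggregateStream only serves the no-GROUP BY case, so the
        // input stream can be wrapped unconditionally to yield periodically.
        let input: SendableRecordBatchStream = Box::pin(YieldStream::new(input));
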
@pepijnve (Contributor):

In my own testing with partition_count = 1, group-by aggregates suffer from the same problem.

@zhuqi-lucas (Contributor, Author):

Thank you @pepijnve for the review, I will try to reproduce it for group-by aggregates.

@zhuqi-lucas (Contributor, Author):

You are right, I can reproduce it now:

SELECT
  (value % 10)         AS group_key,
  COUNT(*)             AS cnt,
  SUM(value)           AS sum_val
FROM range(1, 5000000000) AS t
GROUP BY (value % 10)
ORDER BY group_key;

@zhuqi-lucas (Contributor, Author):

Thank you @pepijnve, I also added grouping support in the latest PR.

@zhuqi-lucas (Contributor, Author):

From the testing results, it seems the group-by cases have some performance regression.

@zhuqi-lucas (Contributor, Author):

Another solution is to use CoalescePartitionsExec as a wrapper:

diff --git a/datafusion/physical-plan/src/coalesce_partitions.rs b/datafusion/physical-plan/src/coalesce_partitions.rs
index 114f83068..ffb24463e 100644
--- a/datafusion/physical-plan/src/coalesce_partitions.rs
+++ b/datafusion/physical-plan/src/coalesce_partitions.rs
@@ -154,10 +154,10 @@ impl ExecutionPlan for CoalescePartitionsExec {
             0 => internal_err!(
                 "CoalescePartitionsExec requires at least one input partition"
             ),
-            1 => {
-                // bypass any threading / metrics if there is a single partition
-                self.input.execute(0, context)
-            }
+            // 1 => {
+            //     // bypass any threading / metrics if there is a single partition
+            //     self.input.execute(0, context)
+            // }
             _ => {
                 let baseline_metrics = BaselineMetrics::new(&self.metrics, partition);
                 // record the (very) minimal work done so that
diff --git a/datafusion/physical-plan/src/execution_plan.rs b/datafusion/physical-plan/src/execution_plan.rs
index b81b3c8be..8bb8b2145 100644
--- a/datafusion/physical-plan/src/execution_plan.rs
+++ b/datafusion/physical-plan/src/execution_plan.rs
@@ -963,8 +963,7 @@ pub fn execute_stream(
 ) -> Result<SendableRecordBatchStream> {
     match plan.output_partitioning().partition_count() {
         0 => Ok(Box::pin(EmptyRecordBatchStream::new(plan.schema()))),
-        1 => plan.execute(0, context),
-        2.. => {
+        1.. => {
             // merge into a single partition
             let plan = CoalescePartitionsExec::new(Arc::clone(&plan));
             // CoalescePartitionsExec must produce a single partition
diff --git a/parquet-testing b/parquet-testing
index 6e851ddd7..107b36603 160000
--- a/parquet-testing
+++ b/parquet-testing
@@ -1 +1 @@
-Subproject commit 6e851ddd768d6af741c7b15dc594874399fc3cff
+Subproject commit 107b36603e051aee26bd93e04b871034f6c756c0

@zhuqi-lucas (Contributor, Author):

Hi @alamb, I believe we can also run the ClickBench benchmark for this PR. But I am not confident about the result, since it seems we always add some overhead to the aggregate. Thanks!

@alamb (Contributor) commented May 28, 2025

🤖 ./gh_compare_branch.sh Benchmark Script Running
Linux aal-dev 6.11.0-1013-gcp #13~24.04.1-Ubuntu SMP Wed Apr 2 16:34:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
Comparing issue_16193 (6cf3bf0) to 2d12bf6 diff
Benchmarks: tpch_mem clickbench_partitioned clickbench_extended
Results will be posted here when complete

@alamb (Contributor) commented May 28, 2025

🤖: Benchmark completed

Details

Comparing HEAD and issue_16193
--------------------
Benchmark clickbench_extended.json
--------------------
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┓
┃ Query        ┃       HEAD ┃ issue_16193 ┃       Change ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━┩
│ QQuery 0     │  1884.53ms │   1932.55ms │    no change │
│ QQuery 1     │   692.42ms │    704.34ms │    no change │
│ QQuery 2     │  1422.09ms │   1424.04ms │    no change │
│ QQuery 3     │   727.80ms │    722.42ms │    no change │
│ QQuery 4     │  1434.99ms │   1447.24ms │    no change │
│ QQuery 5     │ 15295.77ms │  15299.58ms │    no change │
│ QQuery 6     │  1997.15ms │   2013.51ms │    no change │
│ QQuery 7     │  2049.62ms │   2168.78ms │ 1.06x slower │
│ QQuery 8     │   836.82ms │    848.06ms │    no change │
└──────────────┴────────────┴─────────────┴──────────────┘
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Benchmark Summary          ┃            ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ Total Time (HEAD)          │ 26341.19ms │
│ Total Time (issue_16193)   │ 26560.52ms │
│ Average Time (HEAD)        │  2926.80ms │
│ Average Time (issue_16193) │  2951.17ms │
│ Queries Faster             │          0 │
│ Queries Slower             │          1 │
│ Queries with No Change     │          8 │
└────────────────────────────┴────────────┘
--------------------
Benchmark clickbench_partitioned.json
--------------------
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┓
┃ Query        ┃       HEAD ┃ issue_16193 ┃       Change ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━┩
│ QQuery 0     │    15.54ms │     15.42ms │    no change │
│ QQuery 1     │    33.15ms │     33.70ms │    no change │
│ QQuery 2     │    80.58ms │     80.48ms │    no change │
│ QQuery 3     │    96.06ms │     95.48ms │    no change │
│ QQuery 4     │   588.87ms │    595.63ms │    no change │
│ QQuery 5     │   818.19ms │    815.70ms │    no change │
│ QQuery 6     │    23.64ms │     23.47ms │    no change │
│ QQuery 7     │    37.27ms │     36.33ms │    no change │
│ QQuery 8     │   926.36ms │    910.32ms │    no change │
│ QQuery 9     │  1187.66ms │   1209.81ms │    no change │
│ QQuery 10    │   267.19ms │    257.24ms │    no change │
│ QQuery 11    │   293.30ms │    292.35ms │    no change │
│ QQuery 12    │   911.85ms │    911.14ms │    no change │
│ QQuery 13    │  1247.36ms │   1317.95ms │ 1.06x slower │
│ QQuery 14    │   848.42ms │    844.38ms │    no change │
│ QQuery 15    │   834.03ms │    835.94ms │    no change │
│ QQuery 16    │  1736.24ms │   1752.77ms │    no change │
│ QQuery 17    │  1613.09ms │   1606.14ms │    no change │
│ QQuery 18    │  3084.41ms │   3051.47ms │    no change │
│ QQuery 19    │    83.51ms │     84.57ms │    no change │
│ QQuery 20    │  1125.18ms │   1120.50ms │    no change │
│ QQuery 21    │  1285.82ms │   1308.43ms │    no change │
│ QQuery 22    │  2139.79ms │   2162.32ms │    no change │
│ QQuery 23    │  8003.46ms │   7992.05ms │    no change │
│ QQuery 24    │   463.30ms │    450.76ms │    no change │
│ QQuery 25    │   384.45ms │    381.51ms │    no change │
│ QQuery 26    │   521.56ms │    522.91ms │    no change │
│ QQuery 27    │  1586.93ms │   1590.23ms │    no change │
│ QQuery 28    │ 12567.57ms │  12455.67ms │    no change │
│ QQuery 29    │   526.70ms │    534.84ms │    no change │
│ QQuery 30    │   807.81ms │    815.44ms │    no change │
│ QQuery 31    │   858.67ms │    848.84ms │    no change │
│ QQuery 32    │  2639.73ms │   2641.91ms │    no change │
│ QQuery 33    │  3342.77ms │   3316.54ms │    no change │
│ QQuery 34    │  3312.00ms │   3325.35ms │    no change │
│ QQuery 35    │  1312.86ms │   1309.17ms │    no change │
│ QQuery 36    │   117.39ms │    116.89ms │    no change │
│ QQuery 37    │    55.88ms │     56.08ms │    no change │
│ QQuery 38    │   121.82ms │    119.37ms │    no change │
│ QQuery 39    │   192.09ms │    195.42ms │    no change │
│ QQuery 40    │    44.75ms │     49.63ms │ 1.11x slower │
│ QQuery 41    │    42.75ms │     44.34ms │    no change │
│ QQuery 42    │    38.39ms │     38.13ms │    no change │
└──────────────┴────────────┴─────────────┴──────────────┘
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Benchmark Summary          ┃            ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ Total Time (HEAD)          │ 56218.38ms │
│ Total Time (issue_16193)   │ 56166.59ms │
│ Average Time (HEAD)        │  1307.40ms │
│ Average Time (issue_16193) │  1306.20ms │
│ Queries Faster             │          0 │
│ Queries Slower             │          2 │
│ Queries with No Change     │         41 │
└────────────────────────────┴────────────┘
--------------------
Benchmark tpch_mem_sf1.json
--------------------
┏━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Query        ┃     HEAD ┃ issue_16193 ┃        Change ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ QQuery 1     │ 122.53ms │    122.67ms │     no change │
│ QQuery 2     │  21.63ms │     21.72ms │     no change │
│ QQuery 3     │  35.49ms │     33.31ms │ +1.07x faster │
│ QQuery 4     │  19.72ms │     20.20ms │     no change │
│ QQuery 5     │  51.62ms │     52.88ms │     no change │
│ QQuery 6     │  11.84ms │     11.99ms │     no change │
│ QQuery 7     │  97.55ms │     94.51ms │     no change │
│ QQuery 8     │  25.70ms │     25.67ms │     no change │
│ QQuery 9     │  59.26ms │     59.82ms │     no change │
│ QQuery 10    │  56.45ms │     56.10ms │     no change │
│ QQuery 11    │  11.50ms │     11.77ms │     no change │
│ QQuery 12    │  41.66ms │     39.51ms │ +1.05x faster │
│ QQuery 13    │  27.43ms │     26.76ms │     no change │
│ QQuery 14    │   9.52ms │      9.56ms │     no change │
│ QQuery 15    │  23.59ms │     22.84ms │     no change │
│ QQuery 16    │  21.71ms │     21.73ms │     no change │
│ QQuery 17    │  94.44ms │     95.31ms │     no change │
│ QQuery 18    │ 210.48ms │    215.52ms │     no change │
│ QQuery 19    │  26.60ms │     26.03ms │     no change │
│ QQuery 20    │  35.72ms │     36.88ms │     no change │
│ QQuery 21    │ 160.00ms │    160.99ms │     no change │
│ QQuery 22    │  16.65ms │     17.19ms │     no change │
└──────────────┴──────────┴─────────────┴───────────────┘
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┓
┃ Benchmark Summary          ┃           ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━┩
│ Total Time (HEAD)          │ 1181.11ms │
│ Total Time (issue_16193)   │ 1182.95ms │
│ Average Time (HEAD)        │   53.69ms │
│ Average Time (issue_16193) │   53.77ms │
│ Queries Faster             │         2 │
│ Queries Slower             │         0 │
│ Queries with No Change     │        20 │
└────────────────────────────┴───────────┘

@alamb's comment was marked as outdated.

@alamb (Contributor) commented May 28, 2025

Running the benchmarks again to gather more details

@alamb's comment was marked as outdated.

@zhuqi-lucas (Contributor, Author):

🤖: Benchmark completed

Details

Comparing HEAD and issue_16193
--------------------
Benchmark clickbench_extended.json
--------------------
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━┓
┃ Query        ┃       HEAD ┃ issue_16193 ┃    Change ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━┩
│ QQuery 0     │  1906.70ms │   1850.03ms │ no change │
│ QQuery 1     │   691.18ms │    691.76ms │ no change │
│ QQuery 2     │  1411.18ms │   1436.19ms │ no change │
│ QQuery 3     │   696.17ms │    700.91ms │ no change │
│ QQuery 4     │  1438.68ms │   1458.59ms │ no change │
│ QQuery 5     │ 15100.87ms │  14874.14ms │ no change │
│ QQuery 6     │  1996.44ms │   1982.89ms │ no change │
│ QQuery 7     │  2089.84ms │   2063.71ms │ no change │
│ QQuery 8     │   830.88ms │    835.14ms │ no change │
└──────────────┴────────────┴─────────────┴───────────┘
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Benchmark Summary          ┃            ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ Total Time (HEAD)          │ 26161.93ms │
│ Total Time (issue_16193)   │ 25893.35ms │
│ Average Time (HEAD)        │  2906.88ms │
│ Average Time (issue_16193) │  2877.04ms │
│ Queries Faster             │          0 │
│ Queries Slower             │          0 │
│ Queries with No Change     │          9 │
└────────────────────────────┴────────────┘
--------------------
Benchmark clickbench_partitioned.json
--------------------
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Query        ┃       HEAD ┃ issue_16193 ┃        Change ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ QQuery 0     │    15.18ms │     15.80ms │     no change │
│ QQuery 1     │    33.16ms │     32.97ms │     no change │
│ QQuery 2     │    77.60ms │     80.92ms │     no change │
│ QQuery 3     │    97.69ms │     97.56ms │     no change │
│ QQuery 4     │   589.28ms │    594.05ms │     no change │
│ QQuery 5     │   847.85ms │    836.44ms │     no change │
│ QQuery 6     │    23.38ms │     22.84ms │     no change │
│ QQuery 7     │    39.02ms │     36.26ms │ +1.08x faster │
│ QQuery 8     │   923.80ms │    927.93ms │     no change │
│ QQuery 9     │  1194.31ms │   1194.76ms │     no change │
│ QQuery 10    │   261.49ms │    264.83ms │     no change │
│ QQuery 11    │   298.34ms │    300.14ms │     no change │
│ QQuery 12    │   889.86ms │    893.87ms │     no change │
│ QQuery 13    │  1343.97ms │   1319.24ms │     no change │
│ QQuery 14    │   850.37ms │    847.33ms │     no change │
│ QQuery 15    │   829.14ms │    827.15ms │     no change │
│ QQuery 16    │  1756.84ms │   1712.96ms │     no change │
│ QQuery 17    │  1607.84ms │   1612.75ms │     no change │
│ QQuery 18    │  3257.72ms │   3059.25ms │ +1.06x faster │
│ QQuery 19    │    84.43ms │     81.15ms │     no change │
│ QQuery 20    │  1143.94ms │   1098.25ms │     no change │
│ QQuery 21    │  1299.30ms │   1302.19ms │     no change │
│ QQuery 22    │  2139.01ms │   2154.26ms │     no change │
│ QQuery 23    │  7959.94ms │   7886.41ms │     no change │
│ QQuery 24    │   463.30ms │    458.37ms │     no change │
│ QQuery 25    │   386.27ms │    380.85ms │     no change │
│ QQuery 26    │   524.88ms │    519.71ms │     no change │
│ QQuery 27    │  1581.12ms │   1583.97ms │     no change │
│ QQuery 28    │ 12730.61ms │  12509.52ms │     no change │
│ QQuery 29    │   536.15ms │    528.59ms │     no change │
│ QQuery 30    │   793.18ms │    795.03ms │     no change │
│ QQuery 31    │   857.17ms │    852.83ms │     no change │
│ QQuery 32    │  2653.95ms │   2631.77ms │     no change │
│ QQuery 33    │  3315.12ms │   3322.64ms │     no change │
│ QQuery 34    │  3381.42ms │   3355.04ms │     no change │
│ QQuery 35    │  1317.57ms │   1278.45ms │     no change │
│ QQuery 36    │   129.98ms │    122.68ms │ +1.06x faster │
│ QQuery 37    │    56.20ms │     53.71ms │     no change │
│ QQuery 38    │   119.55ms │    121.04ms │     no change │
│ QQuery 39    │   193.87ms │    194.76ms │     no change │
│ QQuery 40    │    48.93ms │     49.63ms │     no change │
│ QQuery 41    │    44.38ms │     45.77ms │     no change │
│ QQuery 42    │    36.47ms │     37.46ms │     no change │
└──────────────┴────────────┴─────────────┴───────────────┘
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Benchmark Summary          ┃            ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ Total Time (HEAD)          │ 56733.55ms │
│ Total Time (issue_16193)   │ 56041.14ms │
│ Average Time (HEAD)        │  1319.38ms │
│ Average Time (issue_16193) │  1303.28ms │
│ Queries Faster             │          3 │
│ Queries Slower             │          0 │
│ Queries with No Change     │         40 │
└────────────────────────────┴────────────┘
--------------------
Benchmark tpch_mem_sf1.json
--------------------
┏━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━┓
┃ Query        ┃     HEAD ┃ issue_16193 ┃    Change ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━┩
│ QQuery 1     │ 120.69ms │    123.19ms │ no change │
│ QQuery 2     │  22.13ms │     21.87ms │ no change │
│ QQuery 3     │  34.29ms │     34.30ms │ no change │
│ QQuery 4     │  19.41ms │     19.60ms │ no change │
│ QQuery 5     │  52.72ms │     51.09ms │ no change │
│ QQuery 6     │  11.87ms │     12.05ms │ no change │
│ QQuery 7     │  96.12ms │     93.63ms │ no change │
│ QQuery 8     │  25.70ms │     25.62ms │ no change │
│ QQuery 9     │  58.58ms │     58.90ms │ no change │
│ QQuery 10    │  55.95ms │     56.01ms │ no change │
│ QQuery 11    │  11.46ms │     11.44ms │ no change │
│ QQuery 12    │  41.40ms │     40.05ms │ no change │
│ QQuery 13    │  27.95ms │     27.54ms │ no change │
│ QQuery 14    │   9.57ms │      9.64ms │ no change │
│ QQuery 15    │  23.71ms │     24.29ms │ no change │
│ QQuery 16    │  21.66ms │     22.40ms │ no change │
│ QQuery 17    │  96.13ms │     95.60ms │ no change │
│ QQuery 18    │ 209.28ms │    219.00ms │ no change │
│ QQuery 19    │  26.43ms │     25.81ms │ no change │
│ QQuery 20    │  37.60ms │     36.09ms │ no change │
│ QQuery 21    │ 161.77ms │    163.00ms │ no change │
│ QQuery 22    │  16.00ms │     16.74ms │ no change │
└──────────────┴──────────┴─────────────┴───────────┘
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┓
┃ Benchmark Summary          ┃           ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━┩
│ Total Time (HEAD)          │ 1180.44ms │
│ Total Time (issue_16193)   │ 1187.87ms │
│ Average Time (HEAD)        │   53.66ms │
│ Average Time (issue_16193) │   53.99ms │
│ Queries Faster             │         0 │
│ Queries Slower             │         0 │
│ Queries with No Change     │        22 │
└────────────────────────────┴───────────┘

Thank you @alamb, it's surprising that there is no performance regression, and clickbench_partitioned is even faster. It may be because we yield within each running partition, which makes partition execution more efficient.

@xudong963 changed the title from "feat: support inability to yield for loop when it's not using Tok…" to "feat: support inability to yield for loop when it's not using Tokio MPSC (RecordBatchReceiverStream)" May 29, 2025
@pepijnve (Contributor) commented Jun 6, 2025

One performance aspect I've been looking at is the cost of yielding. There's no magic as far as I can tell: returning Pending simply leads to a full unwind of the call stack, by virtue of the return bubbling all the way up to the tokio executor, followed by a full descent back to the point where you left off, using the same function calls that got you there in the first place.
That would suggest it's most interesting to do a cooperative yield from as shallow a point as possible in the call tree, rather than from the deepest possible point, so that you keep the round trip to the executor and back as short as possible.

Running with target_partitions = 1 shows that, for queries like the deeply nested window/sort query you linked to @ozankabak, the call stack can get pretty deep. It's essentially proportional to the depth of the plan tree.

To mitigate this, would it make sense for pipeline-breaking operators to run their pipeline-breaking portion in a SpawnedTask instead of as a child? I'm thinking of the sort phase of sort, the build phase of join, etc. Regardless of how or where you inject Pending, it seems beneficial to keep the call stack that needs to be unwound shallow.
Each of the Sort/Window pairs basically becomes an idle task that just sits there waiting for the pipeline breaker it depends on to start emitting data; it will yield once, since the JoinHandle is not ready, and only reactivate once the join handle wakes it.

Note that this same argument does suggest it could be more interesting to do the cooperative yield where the looping is happening rather than where the data is produced. The loop is the shallowest point (definitely if you spawn a task, since that gets you a new root), while the producer is the deepest point.

Cutting the call stack using spawned tasks may also mitigate the deeply nested query concern regarding checking for yielding at multiple levels. The yield is never going to go beyond the scope of a single task.

pepijnve@75f8800 illustrates the change I'm suggesting. If you think it's useful I can turn this into a separate PR.
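
A rough sketch of the idea in plain Tokio (assumed names; this is not the code in pepijnve@75f8800): the pipeline-breaking phase runs on its own spawned task, and the parent operator polls only a JoinHandle, so the stack that unwinds on each Pending is one task deep instead of proportional to plan depth.

use tokio::task::JoinHandle;

// Stand-in for a CPU-bound pipeline-breaking phase (the sort phase of sort,
// the build phase of join, ...).
async fn build_phase() -> Vec<u64> {
    let mut data: Vec<u64> = (0..1_000_000).rev().collect();
    data.sort_unstable();
    data
}

// The parent operator keeps only this handle. Polling a not-yet-finished
// JoinHandle returns Pending immediately, so the caller unwinds to the
// executor without descending into the child plan's call tree.
fn start_build_phase() -> JoinHandle<Vec<u64>> {
    tokio::spawn(build_phase())
}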

@ozankabak (Contributor) left a comment:

I found some time to work on this tonight and it looks good to me now.

To summarize where we are:

  • We add yields to all leaf nodes, but no yields to any intermediate node.
  • We added a bunch of tests to cover some corner cases and all of them pass.
  • There is a single new with_cooperative_yields API, which returns a cooperatively yielding version of a plan object (if it exists). If it doesn't exist for a leaf node, we add an auxiliary operator to handle yielding.

Future work:

  • We will study input-side pipelining behaviors and improve the pipelining API, so that we only trigger explicit yielding when it is necessary. Given the small number of leaf nodes, we are not that far off from optimality even as is, which is great. We have some ideas on what to try here, but the current state seems quite good -- so we can merge it to fix downstream issues as we make further progress.
  • We will think about supporting cases involving non-volcano (e.g. spill) data flow.

@zhuqi-lucas and @alamb, PTAL
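
For reference, the shape of the with_cooperative_yields API described above, sketched on a stand-in trait (the real method lives on DataFusion's ExecutionPlan trait and its details may differ):

use std::sync::Arc;

trait ExecutionPlan {
    // ... the rest of the trait elided ...

    /// Returns a cooperatively yielding version of this plan, if the operator
    /// supports one natively. The default `None` tells the optimizer rule to
    /// wrap the leaf in an auxiliary yielding operator instead.
    fn with_cooperative_yields(self: Arc<Self>) -> Option<Arc<dyn ExecutionPlan>> {
        None
    }
}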

@zhuqi-lucas (Contributor, Author) commented Jun 7, 2025

> I found some time to work on this tonight and it looks good to me now.
>
> To summarize where we are:
>
>   • We add yields to all leaf nodes, but no yields to any intermediate node.
>   • We added a bunch of tests to cover some corner cases and all of them pass.
>   • There is a single new with_cooperative_yields API, which returns a cooperatively yielding version of a plan object (if it exists). If it doesn't exist for a leaf node, we add an auxiliary operator to handle yielding.
>
> Future work:
>
>   • We will study input-side pipelining behaviors and improve the pipelining API, so that we only trigger explicit yielding when it is necessary. Given the small number of leaf nodes, we are not that far off from optimality even as is, which is great. We have some ideas on what to try here, but the current state seems quite good -- so we can merge it to fix downstream issues as we make further progress.
>   • We will think about supporting cases involving non-volcano (i.e. spill) data flow.
>
> @zhuqi-lucas and @alamb, PTAL

Thank you, I agree that we are in a good state, because:

  1. This PR helps both built-in DataFusion operators and custom-defined operators automatically.
  2. It does not affect performance.

I will also help investigate the following case, maybe as a follow-up ticket, thanks!

> We will think about supporting cases involving non-volcano (i.e. spill) data flow.

An additional benefit from this PR: I will investigate whether we can remove some internal yield logic, such as in repartition, etc.

@ozankabak (Contributor) commented Jun 7, 2025

> I will investigate whether we can remove some internal yield logic, such as in repartition, etc.

Good idea, I'm curious to see if you can. RepartitionExec is a little bit of an outlier because it also breaks the volcano flow by actively draining its input and pushing it into channels. As you experiment with it, let's make sure to test cases with nested repartition operators so that we don't miss any corner cases.

@ozankabak (Contributor):

I merged the latest from main; this is good to go.

(commit) Use "period" instead of "frequency" for config parameter terminology
@ozankabak (Contributor):

@zhuqi-lucas, I wanted to make a few final finishing touches while we give @alamb a chance to take a final look. I changed the config terminology from "frequency" to "period" because the former was kind of a misnomer. I also did some refactoring to remove some code repetition. Can you please double-check to make sure all looks good on your end? Thanks.

@zhuqi-lucas (Contributor, Author):

> @zhuqi-lucas, I wanted to make a few final finishing touches while we give @alamb a chance to take a final look. I changed the config terminology from "frequency" to "period" because the former was kind of a misnomer. I also did some refactoring to remove some code repetition. Can you please double-check to make sure all looks good on your end? Thanks.

Thank you @ozankabak, the final refactor and name change look good to me. Yeah, let's wait for @alamb to take a final look. Thanks!

@zhuqi-lucas (Contributor, Author):

> I will investigate whether we can remove some internal yield logic, such as in repartition, etc.

> Good idea, I'm curious to see if you can. RepartitionExec is a little bit of an outlier because it also breaks the volcano flow by actively draining its input and pushing it into channels. As you experiment with it, let's make sure to test cases with nested repartition operators so that we don't miss any corner cases.

Thank you, that makes sense, I will experiment with it as a follow-up.

@ozankabak (Contributor):

@alamb FYI I plan to merge this soon. It is OK if you don't have the bandwidth to take a look, it is the first step towards the design we discussed before.

@pepijnve (Contributor) commented Jun 8, 2025

> As time permits, we can explore alternate, more universal strategies for cancellation

> 100% agree with not merging this until we are in agreement

I can't help but feel that this is needlessly being rushed. Committing to a new public API on an extension point before it's clearly proven feels like a bad idea to me. If it were purely an implementation detail, it would be less of an issue. What's the hurry?

The more I've been digging into the code over the past few days the clearer it is that getting yielding just right while avoiding wasteful work is something you need to be careful about. See for instance #16196.
We started out with concerns that yielding needlessly would introduce performance overhead, but now this PR does so even when it's not necessary at all. Admittedly it's nowhere near as extreme as the above issue, but still waste is waste.

Wouldn't it be prudent to give this some more time to mature and maybe see if there are better strategies? It's not a universal solution, but just as an example, what I found in #16319 is that restructuring the operator code a little bit makes the operators behave much nicer from the caller's perspective. Unfortunately you do need to do this kind of thing at the operator implementation level. I do think there are implementation patterns here that could serve as building blocks for operators. 'Build a stream async and then emit from it', for instance, seems to be pretty common. Rather than having a bespoke implementation in each operator, it would be useful to have a combinator that operators can use. Perhaps there's a similar zero-cost solution to the 'drains input before first emit' pattern as well?
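
A sketch of what such a combinator could look like with the futures crate (the name build_then_emit is hypothetical, not an existing DataFusion API): the build future does the pipeline-breaking work, and the caller only sees a stream that stays Pending until the build completes.

use futures::{stream, Future, Stream, StreamExt};

// Lazily await a future that builds a stream, then forward the built
// stream's items: the 'build a stream async and then emit from it' pattern.
fn build_then_emit<F, S>(build: F) -> impl Stream<Item = S::Item>
where
    F: Future<Output = S>,
    S: Stream,
{
    stream::once(build).flatten()
}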

@alamb dismissed their stale review June 8, 2025 23:27

Have some questions about the new design

@alamb (Contributor) left a comment:

Thank you @zhuqi-lucas and @ozankabak and @pepijnve -- I think this PR is really nicely commented and structured. It is easy to read and review.

However, I am sorry but I am a bit confused by the implications of this PR now.

From what I can tell, it doesn't insert YieldExec or add yielding for AggregateExec, which is the operator we have real evidence doesn't yield. Instead it seems to add yielding for DataSourceExec, which will already yield when reading Parquet from a remote store, for example 🤔

fn wrap_leaves_of_pipeline_breakers(
    plan: Arc<dyn ExecutionPlan>,
) -> Result<Transformed<Arc<dyn ExecutionPlan>>> {
    let is_pipeline_breaker = plan.properties().emission_type == EmissionType::Final;
@alamb (Contributor):

I missed this code in the initial PR

/// with the runtime by yielding control back to the runtime every `frequency`
/// batches. This is useful for operators that do not natively support yielding
/// control, allowing them to be used in a runtime that requires yielding for
/// cancellation or other purposes.
@alamb (Contributor):

I think it would also help to explain here how to avoid this Exec in your plan:

Suggested change:

 /// cancellation or other purposes.
+///
+/// # Note
+/// If your ExecutionPlan periodically yields control back to the scheduler,
+/// implement [`ExecutionPlan::with_cooperative_yields`] to avoid the need for
+/// this node.

@@ -137,6 +138,7 @@ impl PhysicalOptimizer {
// are not present, the load of executors such as join or union will be
// reduced by narrowing their input tables.
Arc::new(ProjectionPushdown::new()),
Arc::new(WrapLeaves::new()),
@alamb (Contributor):

Can we please call this pass something related to Cancel or Yield? Like InsertYieldExec to make it clearer what it is doing?

}

#[derive(Debug)]
struct InfiniteExec {
@alamb (Contributor):

Instead of a new exec, perhaps we could use MemoryExec (with, say, 1000 batch.clone() calls) to show it is configured correctly.
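
A sketch of constructing such batches with the arrow APIs (only the batch construction is shown, since the exec that consumes them varies across DataFusion versions):

use std::sync::Arc;

use arrow::array::Int32Array;
use arrow::datatypes::{DataType, Field, Schema};
use arrow::record_batch::RecordBatch;

// 1000 clones of one batch: a long-running but finite in-memory source for
// exercising the yield logic without a bespoke InfiniteExec.
fn make_batches() -> Vec<RecordBatch> {
    let schema = Arc::new(Schema::new(vec![Field::new(
        "value",
        DataType::Int32,
        false,
    )]));
    let batch = RecordBatch::try_new(
        Arc::clone(&schema),
        vec![Arc::new(Int32Array::from_iter_values(0..8192))],
    )
    .expect("array length and type match the schema");
    std::iter::repeat(batch).take(1000).collect()
}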

@@ -242,6 +242,7 @@ physical_plan after OutputRequirements DataSourceExec: file_groups={1 group: [[W
physical_plan after LimitAggregation SAME TEXT AS ABOVE
@alamb (Contributor):

I am confused about why YieldStreamExec does not appear in more of the explain slt plans

@alamb (Contributor) commented Jun 8, 2025

I am happy to wait for a bit more testing on this PR -- we have now about a month before the next release so there is no pressure from there.

However, I do like a bias toward action, and if this PR fixes a real problem, I don't think we should bikeshed it indefinitely.

> Unfortunately you do need to do this kind of thing at the operator implementation level. I do think there are implementation patterns here that could serve as building blocks for operators. 'Build a stream async and then emit from it', for instance, seems to be pretty common. Rather than having a bespoke implementation in each operator, it would be useful to have a combinator that operators can use. Perhaps there's a similar zero-cost solution to the 'drains input before first emit' pattern as well?

I was thinking of YieldStream as such a combinator 🤔

> @alamb FYI I plan to merge this soon. It is OK if you don't have the bandwidth to take a look, it is the first step towards the design we discussed before.

@ozankabak -- what are the next steps? I may have lost track -- if this PR needs some follow-on work I think we should file tickets to explain what it is before merging (I can help to file such tickets).

@alamb (Contributor) commented Jun 9, 2025

🤖 ./gh_compare_branch.sh Benchmark Script Running
Linux aal-dev 6.11.0-1013-gcp #13~24.04.1-Ubuntu SMP Wed Apr 2 16:34:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
Comparing issue_16193 (56361a4) to 1daa5ed diff
Benchmarks: tpch_mem clickbench_partitioned clickbench_extended
Results will be posted here when complete

@alamb (Contributor) commented Jun 9, 2025

🤖: Benchmark completed

Details

Comparing HEAD and issue_16193
--------------------
Benchmark clickbench_extended.json
--------------------
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┓
┃ Query        ┃        HEAD ┃ issue_16193 ┃       Change ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━┩
│ QQuery 0     │  1774.52 ms │  1915.09 ms │ 1.08x slower │
│ QQuery 1     │   702.21 ms │   689.79 ms │    no change │
│ QQuery 2     │  1431.65 ms │  1393.68 ms │    no change │
│ QQuery 3     │   699.08 ms │   674.34 ms │    no change │
│ QQuery 4     │  1423.03 ms │  1483.81 ms │    no change │
│ QQuery 5     │ 15706.11 ms │ 16249.84 ms │    no change │
│ QQuery 6     │  2019.19 ms │  2017.67 ms │    no change │
│ QQuery 7     │  2010.38 ms │  2123.45 ms │ 1.06x slower │
│ QQuery 8     │   828.39 ms │   858.33 ms │    no change │
└──────────────┴─────────────┴─────────────┴──────────────┘
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Benchmark Summary          ┃            ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ Total Time (HEAD)          │ 26594.57ms │
│ Total Time (issue_16193)   │ 27405.98ms │
│ Average Time (HEAD)        │  2954.95ms │
│ Average Time (issue_16193) │  3045.11ms │
│ Queries Faster             │          0 │
│ Queries Slower             │          2 │
│ Queries with No Change     │          7 │
│ Queries with Failure       │          0 │
└────────────────────────────┴────────────┘
--------------------
Benchmark clickbench_partitioned.json
--------------------
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Query        ┃        HEAD ┃ issue_16193 ┃        Change ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ QQuery 0     │    15.00 ms │    15.22 ms │     no change │
│ QQuery 1     │    32.82 ms │    33.10 ms │     no change │
│ QQuery 2     │    80.12 ms │    80.05 ms │     no change │
│ QQuery 3     │    97.94 ms │    98.43 ms │     no change │
│ QQuery 4     │   578.12 ms │   585.68 ms │     no change │
│ QQuery 5     │   820.56 ms │   836.55 ms │     no change │
│ QQuery 6     │    23.39 ms │    23.11 ms │     no change │
│ QQuery 7     │    37.30 ms │    36.90 ms │     no change │
│ QQuery 8     │   891.45 ms │   902.35 ms │     no change │
│ QQuery 9     │  1166.10 ms │  1193.38 ms │     no change │
│ QQuery 10    │   264.85 ms │   263.41 ms │     no change │
│ QQuery 11    │   294.82 ms │   293.40 ms │     no change │
│ QQuery 12    │   896.36 ms │   900.32 ms │     no change │
│ QQuery 13    │  1199.00 ms │  1333.54 ms │  1.11x slower │
│ QQuery 14    │   828.80 ms │   832.24 ms │     no change │
│ QQuery 15    │   818.26 ms │   850.21 ms │     no change │
│ QQuery 16    │  1726.19 ms │  1726.91 ms │     no change │
│ QQuery 17    │  1621.59 ms │  1594.69 ms │     no change │
│ QQuery 18    │  3055.91 ms │  3032.30 ms │     no change │
│ QQuery 19    │    84.56 ms │    84.16 ms │     no change │
│ QQuery 20    │  1131.97 ms │  1126.00 ms │     no change │
│ QQuery 21    │  1316.89 ms │  1286.64 ms │     no change │
│ QQuery 22    │  2185.01 ms │  2140.56 ms │     no change │
│ QQuery 23    │  8040.54 ms │  7888.40 ms │     no change │
│ QQuery 24    │   465.48 ms │   464.48 ms │     no change │
│ QQuery 25    │   393.23 ms │   390.35 ms │     no change │
│ QQuery 26    │   533.73 ms │   525.41 ms │     no change │
│ QQuery 27    │  1596.30 ms │  1560.51 ms │     no change │
│ QQuery 28    │ 13568.05 ms │ 12429.02 ms │ +1.09x faster │
│ QQuery 29    │   531.82 ms │   517.36 ms │     no change │
│ QQuery 30    │   798.55 ms │   799.26 ms │     no change │
│ QQuery 31    │   857.27 ms │   884.07 ms │     no change │
│ QQuery 32    │  2623.40 ms │  2664.08 ms │     no change │
│ QQuery 33    │  3354.76 ms │  3339.54 ms │     no change │
│ QQuery 34    │  3338.46 ms │  3400.82 ms │     no change │
│ QQuery 35    │  1275.73 ms │  1308.99 ms │     no change │
│ QQuery 36    │   124.96 ms │   125.55 ms │     no change │
│ QQuery 37    │    54.57 ms │    57.31 ms │  1.05x slower │
│ QQuery 38    │   126.65 ms │   120.73 ms │     no change │
│ QQuery 39    │   193.09 ms │   189.66 ms │     no change │
│ QQuery 40    │    49.83 ms │    45.90 ms │ +1.09x faster │
│ QQuery 41    │    43.40 ms │    45.26 ms │     no change │
│ QQuery 42    │    38.46 ms │    38.37 ms │     no change │
└──────────────┴─────────────┴─────────────┴───────────────┘
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Benchmark Summary          ┃            ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ Total Time (HEAD)          │ 57175.29ms │
│ Total Time (issue_16193)   │ 56064.20ms │
│ Average Time (HEAD)        │  1329.66ms │
│ Average Time (issue_16193) │  1303.82ms │
│ Queries Faster             │          2 │
│ Queries Slower             │          2 │
│ Queries with No Change     │         39 │
│ Queries with Failure       │          0 │
└────────────────────────────┴────────────┘
--------------------
Benchmark tpch_mem_sf1.json
--------------------
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━┓
┃ Query        ┃      HEAD ┃ issue_16193 ┃    Change ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━┩
│ QQuery 1     │ 118.89 ms │   122.00 ms │ no change │
│ QQuery 2     │  22.49 ms │    21.99 ms │ no change │
│ QQuery 3     │  34.29 ms │    32.89 ms │ no change │
│ QQuery 4     │  19.84 ms │    19.56 ms │ no change │
│ QQuery 5     │  51.90 ms │    51.48 ms │ no change │
│ QQuery 6     │  12.07 ms │    12.04 ms │ no change │
│ QQuery 7     │  95.55 ms │    93.85 ms │ no change │
│ QQuery 8     │  25.06 ms │    25.59 ms │ no change │
│ QQuery 9     │  58.61 ms │    58.95 ms │ no change │
│ QQuery 10    │  48.46 ms │    47.45 ms │ no change │
│ QQuery 11    │  11.38 ms │    11.44 ms │ no change │
│ QQuery 12    │  41.56 ms │    41.35 ms │ no change │
│ QQuery 13    │  28.78 ms │    27.37 ms │ no change │
│ QQuery 14    │   9.82 ms │     9.63 ms │ no change │
│ QQuery 15    │  22.56 ms │    22.71 ms │ no change │
│ QQuery 16    │  21.92 ms │    21.26 ms │ no change │
│ QQuery 17    │  95.31 ms │    94.12 ms │ no change │
│ QQuery 18    │ 212.34 ms │   211.30 ms │ no change │
│ QQuery 19    │  25.25 ms │    26.07 ms │ no change │
│ QQuery 20    │  36.45 ms │    34.98 ms │ no change │
│ QQuery 21    │ 155.64 ms │   158.50 ms │ no change │
│ QQuery 22    │  16.07 ms │    16.54 ms │ no change │
└──────────────┴───────────┴─────────────┴───────────┘
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┓
┃ Benchmark Summary          ┃           ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━┩
│ Total Time (HEAD)          │ 1164.25ms │
│ Total Time (issue_16193)   │ 1161.06ms │
│ Average Time (HEAD)        │   52.92ms │
│ Average Time (issue_16193) │   52.78ms │
│ Queries Faster             │         0 │
│ Queries Slower             │         0 │
│ Queries with No Change     │        22 │
│ Queries with Failure       │         0 │
└────────────────────────────┴───────────┘

@alamb (Contributor) commented Jun 9, 2025

🤖 ./gh_compare_branch.sh Benchmark Script Running
Linux aal-dev 6.11.0-1013-gcp #13~24.04.1-Ubuntu SMP Wed Apr 2 16:34:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
Comparing issue_16193 (56361a4) to 1daa5ed diff
Benchmarks: cancellation
Results will be posted here when complete

@alamb (Contributor) commented Jun 9, 2025

🤖: Benchmark completed

Details

Comparing HEAD and issue_16193
--------------------
Benchmark cancellation.json
--------------------
┏━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━┓
┃ Query        ┃     HEAD ┃ issue_16193 ┃    Change ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━┩
│ QCancellati… │ 27.20 ms │    27.81 ms │ no change │
└──────────────┴──────────┴─────────────┴───────────┘
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━┓
┃ Benchmark Summary          ┃         ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━┩
│ Total Time (HEAD)          │ 27.20ms │
│ Total Time (issue_16193)   │ 27.81ms │
│ Average Time (HEAD)        │ 27.20ms │
│ Average Time (issue_16193) │ 27.81ms │
│ Queries Faster             │       0 │
│ Queries Slower             │       0 │
│ Queries with No Change     │       1 │
│ Queries with Failure       │       0 │
└────────────────────────────┴─────────┘

Labels: common, core, datasource, documentation, optimizer, physical-plan, proto, sqllogictest