Fix map_query_sql benchmark duplicate key error #18427
Open
+10
−2
Add this suggestion to a batch that can be applied as a single commit.
This suggestion is invalid because no changes were made to the code.
Suggestions cannot be applied while the pull request is closed.
Suggestions cannot be applied while viewing a subset of changes.
Only one suggestion per line can be applied in a batch.
Add this suggestion to a batch that can be applied as a single commit.
Applying suggestions on deleted lines is not supported.
You must change the existing code in this line in order to create a valid suggestion.
Outdated suggestions cannot be applied.
This suggestion has been applied or marked resolved.
Suggestions cannot be applied from pending reviews.
Suggestions cannot be applied on multi-line comments.
Suggestions cannot be applied while the pull request is queued to merge.
Suggestion cannot be applied right now. Please check back later.
Fix map_query_sql benchmark duplicate key error
Description
The build_keys() function was generating 1000 random keys from range 0..9999, which could result in duplicate keys due to the birthday paradox. The map() function requires unique keys, causing the benchmark to fail with: Execution("map key must be unique, duplicate key found: {key}")
This fix ensures all generated keys are unique by:
Using a HashSet to track seen keys
Only adding keys to the result if they haven't been seen before
Continuing to generate until exactly 1000 unique keys are produced
Fixes #18421
Which issue does this PR close?
Closes #18421
Rationale for this change
The benchmark was non-deterministic: it could pass or fail depending on random key generation. With 1000 keys from a range of 9999 values, collisions are likely (~50% chance), making the benchmark unreliable. This change ensures uniqueness so the benchmark consistently succeeds and accurately measures map function performance.
What changes are included in this PR?
Added use std::collections::HashSet; import
Modified build_keys() to:
Track generated keys using a HashSet
Only add keys if they are unique
Continue generating until exactly 1000 unique keys are produced
File changed: datafusion/core/benches/map_query_sql.rs
Code changes:
Added HashSet import at the top of the file
Replaced simple loop with uniqueness-checking logic in build_keys() function
Are these changes tested?
The fix was verified by:
Logic review: the HashSet approach guarantees uniqueness
Code review: changes follow Rust best practices
No linter errors
The benchmark itself serves as the test — running cargo bench -p datafusion --bench map_query_sql should now complete without errors. Before this fix, the benchmark would fail with duplicate key errors in a significant portion of runs.
Are there any user-facing changes?
No user-facing changes. This is an internal benchmark fix that ensures the map_query_sql benchmark runs reliably. It does not affect the public API or any runtime behavior of DataFusion.