MDEV-32732 Support DESC indexes in loose scan optimization #3872
base: main
Conversation
This is the last piece of input I have. It will be OK to push after the testcases are fixed.
if ($have_debug) {
--disable_query_log
set @old_debug=@@debug;
set debug="+d,force_group_by";
It doesn't look like a good idea to use this approach:
- We may not get test coverage in release builds.
- The .test file is made to produce the same result regardless of whether the optimization is used or not. If it gets disabled for some reason (in both release and debug builds), we will never know.
I might have said earlier that one can use DBUG-based code to force the optimizer to make a desired choice. I think it's still true when one or more of the following holds:
- it is impossible (or very difficult) to construct a testcase
- the testcase needs to be very large, too large for MTR.
Note that neither applies here:
- I have tried changing SELECTs to EXPLAINs and removing the SET DEBUG - it looks like all tests will pick loose scan anyway?
- If something doesn't pick loose scan, I bet one can easily get loose scan picked by adding several big GROUP BY groups to the table and adding them to the WHERE clause, so that the non-loose-scan plan becomes much more expensive while loose scan becomes only a bit more expensive.
Takeaway: like some other tests, please do

let $q=SELECT ....;
eval $q;
eval EXPLAIN $q;

and remove the set debug="+d,force_group_by";.
I don't expect it will cause a lot of issues.
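For illustration only, here is a minimal sketch of that pattern; the table, columns, and data are hypothetical rather than taken from the actual testcase, and whether loose scan is chosen still depends on the optimizer's cost estimates:

CREATE TABLE t1 (g INT, v INT, KEY k1 (g, v DESC));
INSERT INTO t1 VALUES (1,10),(1,20),(2,30),(2,40);

let $q= SELECT g, MIN(v) FROM t1 GROUP BY g;
# When loose scan is picked, EXPLAIN shows "Using index for group-by"
eval EXPLAIN $q;
eval $q;

DROP TABLE t1;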
AFAIU the tests are not using InnoDB. If they do, please use innodb_stable_estimates.inc, and maybe add innodb_max_purge_lag_wait=0 there...
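If InnoDB tables were used, the hinted setup could look roughly like this in the .test file (a sketch; the exact placement relative to the table creation is an assumption):

--source include/have_innodb.inc
--source include/innodb_stable_estimates.inc

# Waiting for purge keeps row estimates stable between runs
# (assumption about placement: run this before the EXPLAINs)
SET GLOBAL innodb_max_purge_lag_wait=0;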
Thanks, done.
Patch sent for testing.
Force-pushed from e57bfc1 to 3816db2
Force-pushed from 3816db2 to 0d710fb
Now, looks good.
Extend loose index scan to support descending indexes.
This is achieved by removing the block that skips creating a loose
index scan plan for a descending index, as well as by generalising the
execution of such plans.
The generalisation applies to all levels looking for min/max in loose
index scan. At the highest level (get_next), generalise min and max to
first and last, so that the scan still proceeds in the direction
agreeing with the index parity. At the lower levels, combine the
next_min and next_max methods into next_min_max, and combine
next_min_in_range and next_max_in_range into next_min_max_in_range.
This retains the existing logic of these functions and reduces code
duplication, while allowing handling of all four combinations:
(min, max) x (asc index, desc index).
Description
See the commit message.
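To make the four combinations from the commit message concrete, here is a hypothetical pair of tables and queries (names are illustrative, not from the patch): on an ascending index MIN reads the first key of each group and MAX the last, while on a descending index the roles are swapped, which is what the first/last generalisation in get_next captures.

CREATE TABLE t_asc (g INT, v INT, KEY k_asc (g, v));
CREATE TABLE t_desc (g INT, v INT, KEY k_desc (g, v DESC));

# ASC index: MIN(v) is the first key within each group, MAX(v) the last
SELECT g, MIN(v) FROM t_asc GROUP BY g;
SELECT g, MAX(v) FROM t_asc GROUP BY g;

# DESC index on v: MAX(v) is the first key within each group, MIN(v) the last
SELECT g, MAX(v) FROM t_desc GROUP BY g;
SELECT g, MIN(v) FROM t_desc GROUP BY g;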
Addressing comments in the first round of review:
- Applied.
- Done.
- Done. Moved the call out into next_min_max(). Also added some tests covering these index lookups before the call to skip_nulls().
- Fixed the handling of NULL_RANGEs, especially when looking for a MIN, carrying over the logic of next_min_in_range from before the patch. More specifically, in the case of MIN we consider ranges from left to right; if the leftmost range is a NULL_RANGE and a NULL is found, then that NULL needs to be saved:
NULL
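As a hypothetical illustration of that case (table, data, and query are mine, not from the patch): with a predicate like v IS NULL OR v > 3, the leftmost range on the MIN argument is the NULL range, and a group whose only matching rows have v = NULL should still be returned with NULL as its MIN, which is presumably why the found NULL has to be saved.

CREATE TABLE t_null (g INT, v INT, KEY k (g, v));
INSERT INTO t_null VALUES (1, NULL), (1, 7), (2, NULL);

# Group g=1 has a non-NULL minimum (7); group g=2 matches only through
# the NULL range, so the saved NULL becomes its MIN(v) result
SELECT g, MIN(v) FROM t_null WHERE v IS NULL OR v > 3 GROUP BY g;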
How can this PR be tested?
See the tests in the patch.
Basing the PR against the correct MariaDB version