
Prefer: count=exact optimization on HEAD request #3957

steve-chavez opened this issue Mar 19, 2025 · 0 comments
steve-chavez commented Mar 19, 2025

> Why is the page count slow? Shouldn't it be quick compared to the whole table count?

It's quicker if `db-max-rows` is set to a lower value. If it's not set, or if it's set high enough, the count takes longer.

> Maybe share the `EXPLAIN ANALYZE` output of these requests?

(These requests do not have `db-max-rows` set.)
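For context, `db-max-rows` caps how many rows any request can return; PostgREST applies it as a `LIMIT` on the generated source query, which is why a small cap speeds up the page count. A minimal config sketch (the value below is illustrative, not a recommendation):

```
# postgrest.conf — illustrative value; caps every response at 100 rows
db-max-rows = 100
```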

Without `limit=0`:

```shell
curl 'localhost:3030/big_table' -H 'Prefer: count=exact' -I
```

```sql
EXPLAIN (FORMAT TEXT, ANALYZE, VERBOSE)
WITH pgrst_source AS
  (SELECT "public"."big_table".* FROM "public"."big_table") ,
 pgrst_source_count AS
  (SELECT 1 FROM "public"."big_table")
SELECT
  (SELECT pg_catalog.count(*) FROM pgrst_source_count) AS total_result_set,
  pg_catalog.count(_postgrest_t) AS page_total,
  ''::text AS body,
  nullif(current_setting('response.headers', TRUE), '') AS response_headers,
  nullif(current_setting('response.status', TRUE), '') AS response_status,
  '' AS response_inserted
FROM
  (SELECT * FROM pgrst_source) _postgrest_t
```

```
Aggregate  (cost=4562.01..4562.03 rows=1 width=144) (actual time=33.566..33.566 rows=1 loops=1)
  Output: (InitPlan 1).col1, count(ROW(big_table.id, big_table.name, big_table.val, big_table.num)), ''::text, NULLIF(current_setting('response.headers'::text, true), ''::text), NULLIF(current_setting('response.status'::text, true), ''::text), ''::text
  InitPlan 1
    ->  Aggregate  (cost=2281.00..2281.01 rows=1 width=8) (actual time=4.900..4.900 rows=1 loops=1)
          Output: count(*)
          ->  Seq Scan on public.big_table big_table_1  (cost=0.00..2031.00 rows=100000 width=0) (actual time=0.009..2.627 rows=100000 loops=1)
                Output: big_table_1.id, big_table_1.name, big_table_1.val, big_table_1.num
  ->  Seq Scan on public.big_table  (cost=0.00..2031.00 rows=100000 width=50) (actual time=0.009..7.667 rows=100000 loops=1)
        Output: big_table.id, big_table.name, big_table.val, big_table.num
Planning Time: 0.151 ms
Execution Time: 33.598 ms
```

With `limit=0`:

```shell
curl 'localhost:3030/big_table?limit=0' -H 'Prefer: count=exact' -I
```

```sql
EXPLAIN (FORMAT TEXT, ANALYZE, VERBOSE)
WITH pgrst_source AS
  (SELECT "public"."big_table".* FROM "public"."big_table" LIMIT 0 OFFSET 0) ,
 pgrst_source_count AS
  (SELECT 1 FROM "public"."big_table")
SELECT
  (SELECT pg_catalog.count(*) FROM pgrst_source_count) AS total_result_set,
  pg_catalog.count(_postgrest_t) AS page_total,
  ''::text AS body,
  nullif(current_setting('response.headers', TRUE), '') AS response_headers,
  nullif(current_setting('response.status', TRUE), '') AS response_status,
  '' AS response_inserted
FROM
  (SELECT * FROM pgrst_source) _postgrest_t
```

```
Aggregate  (cost=2281.03..2281.05 rows=1 width=144) (actual time=17.579..17.580 rows=1 loops=1)
  Output: (InitPlan 1).col1, count(ROW(big_table.id, big_table.name, big_table.val, big_table.num)), ''::text, NULLIF(current_setting('response.headers'::text, true), ''::text), NULLIF(current_setting('response.status'::text, true), ''::text), ''::text
  InitPlan 1
    ->  Aggregate  (cost=2281.00..2281.01 rows=1 width=8) (actual time=17.566..17.566 rows=1 loops=1)
          Output: count(*)
          ->  Seq Scan on public.big_table big_table_1  (cost=0.00..2031.00 rows=100000 width=0) (actual time=0.015..9.224 rows=100000 loops=1)
                Output: big_table_1.id, big_table_1.name, big_table_1.val, big_table_1.num
  ->  Limit  (cost=0.00..0.02 rows=1 width=50) (actual time=0.001..0.002 rows=0 loops=1)
        Output: big_table.id, big_table.name, big_table.val, big_table.num
        ->  Seq Scan on public.big_table  (cost=0.00..2031.00 rows=100000 width=50) (never executed)
              Output: big_table.id, big_table.name, big_table.val, big_table.num
Planning Time: 0.289 ms
Execution Time: 17.657 ms
```
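The second plan shows why the `limit=0` variant is faster: the source scan is marked `(never executed)`, so only the count CTE touches the table. A possible server-side optimization (sketched here as an assumption about what PostgREST could generate; the actual proposal lives in the linked #2849 comment) would be to inject `LIMIT 0` into `pgrst_source` automatically whenever the request is a HEAD with `Prefer: count=exact`, since no body is ever returned:

```sql
-- Hypothetical query for HEAD /big_table with Prefer: count=exact.
-- The body is never sent, so the source CTE can be limited to zero
-- rows while the count CTE still scans the table once.
WITH pgrst_source AS
  (SELECT "public"."big_table".* FROM "public"."big_table" LIMIT 0),
 pgrst_source_count AS
  (SELECT 1 FROM "public"."big_table")
SELECT
  (SELECT pg_catalog.count(*) FROM pgrst_source_count) AS total_result_set,
  pg_catalog.count(_postgrest_t) AS page_total,
  ''::text AS body
FROM
  (SELECT * FROM pgrst_source) _postgrest_t;
```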

Originally posted by @laurenceisla in #2849

See a possible solution in #2849 (comment).

@steve-chavez steve-chavez changed the title Prefer: count=exact optimization on HEAD request when db-max-rows is set Prefer: count=exact optimization on HEAD request Mar 19, 2025