Conversation

@zilder zilder commented Dec 17, 2024

Not for merge. These are the scripts we currently plan to use to refresh CAggs that were materialized with timescaledb.enable_tiered_reads disabled (SDC). Based on #40, but adapted specifically to discover CAggs built on tiered data and to insert an invalidation record into _timescaledb_catalog.continuous_aggs_materialization_invalidation_log so the affected data can be re-materialized.
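For illustration only, a minimal sketch of the invalidation step (not the exact script from this PR; the schema/CAgg name and the time range are hypothetical placeholders, and it assumes the internal integer time for a timestamptz column is microseconds since the Unix epoch):

    -- Hedged sketch: mark a time range of a CAgg as invalid so the next refresh
    -- re-materializes it. 'public.my_cagg' and the 2020 window are placeholders.
    INSERT INTO _timescaledb_catalog.continuous_aggs_materialization_invalidation_log
        (materialization_id, lowest_modified_value, greatest_modified_value)
    SELECT
        ca.mat_hypertable_id,
        -- assumption: internal time for timestamptz = Unix epoch microseconds
        (extract(epoch FROM timestamptz '2020-01-01 00:00:00+00') * 1000000)::bigint,
        (extract(epoch FROM timestamptz '2021-01-01 00:00:00+00') * 1000000)::bigint
    FROM _timescaledb_catalog.continuous_agg AS ca
    WHERE ca.user_view_schema = 'public'
      AND ca.user_view_name   = 'my_cagg';

    -- Afterwards, refresh the invalidated window with tiered reads enabled:
    -- SET timescaledb.enable_tiered_reads = on;
    -- CALL refresh_continuous_aggregate('public.my_cagg', '2020-01-01', '2021-01-01');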

zilder and others added 3 commits December 17, 2024 17:55
Also added a function to calculate the time bucket using the
Continuous Aggregate bucket function configuration.

Changed the API to define the number of buckets used to split ranges. The
range size is based on the bucket width of the CAgg.
FROM
    ranges,
    -- Split ranges into steps of (bucket_width * nbuckets)
    LATERAL generate_series(ranges.global_start, ranges.global_end, (bucket_width * nbuckets)) AS start
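As a hedged illustration of how such a series splits a global range (the values are made up; with a 1-hour bucket width and nbuckets = 5, each sub-range covers 5 hours):

    -- Illustrative only: split one day into 5-hour sub-ranges, mirroring
    -- generate_series(global_start, global_end, bucket_width * nbuckets).
    SELECT start                       AS range_start,
           start + interval '5 hours'  AS range_end
    FROM generate_series(timestamptz '2024-01-01 00:00+00',
                         timestamptz '2024-01-02 00:00+00',
                         interval '5 hours') AS start;

The real script presumably also clips the last sub-range to global_end, which this sketch omits.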
@zilder zilder (Member, Author) commented Jan 2, 2025
I've been looking through the changes, and I think one thing is missing compared to my original code. The user has some weird data points around the year 1917 (probably inserted by mistake), and apparently they want to keep them. If we generate ranges with this query, it would create all the ranges between 1917 and 2024, while they only have a few data points before 2020. In my original code I had this join to generate only the ranges that intersect with existing tiered chunks:

        FROM timescaledb_osm.tiered_chunks ch
        JOIN _timescaledb_additional.generate_increments(start_t, end_t, increment_size) AS i
            ON tstzrange(i.incr_start, i.incr_end, '[)') && tstzrange(ch.range_start, ch.range_end, '[)')
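In other words, the && (range overlap) join prunes increments that fall in the empty gap between the stray 1917 points and the bulk of the data. A minimal illustration of the operator (dates are made up):

    -- An increment far away from any tiered chunk does not overlap and is skipped:
    SELECT tstzrange('2024-01-01', '2024-01-02', '[)')
        && tstzrange('1917-01-01', '1918-01-01', '[)');  -- returns false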

@philkra philkra changed the title from "Incremental refresh for Molex" to "Incremental refresh" on Jan 9, 2025
@zilder zilder closed this Jan 9, 2025