Background
We want to implement selective tiering for list data structures. Lists may grow large, and their oldest elements are often not actively accessed. Offloading old nodes to disk/storage improves memory efficiency while retaining fast access to the newest elements.
Describe the solution you'd like
- Support offloading (tiering) of list nodes (quicklist nodes) once the list exceeds a configurable length K.
- When node_count > K, offload the first M nodes (where M ≤ K) to tiered storage.
- Each quicklist node contains either a single listpack or a single huge value, making it a natural unit for offloading.
I/O Strategy
- Single-node operations (RPUSH, RPOP, LLEN, etc.):
- No I/O required for operations on in-memory tail nodes. This preserves the fast path for job-queue usage.
- Multi-node operations (LRANGE, LINDEX, etc.):
- Batch and parallelize reads for all required nodes to avoid sequential I/O round trips.
Open Questions
- Should K default to 1 (aggressive) or be dynamic based on memory pressure/self tuning?
- Should offload policy use a cooling or warmup period?
- When offloaded nodes are accessed, should they stay warm (in memory) or immediately become eligible for re-offloading?
Additional context
```mermaid
graph TB
    subgraph Queue ["List Structure"]
        N1["Node 1<br/>(oldest)<br/>TIERED"]
        N2["Node 2<br/>TIERED"]
        N3["..."]
        NM["Node M<br/>TIERED"]
        NM1["Node M+1<br/>IN-MEMORY"]
        NK["Node K<br/>(newest)<br/>IN-MEMORY"]
    end
    subgraph Disk ["Tiered Storage"]
        D1[Segment 1]
        D2[Segment 2]
        DM[Segment M]
    end
    N1 -.-> D1
    N2 -.-> D2
    NM -.-> DM
    style N1 fill:#e1f5ff,stroke:#0078d4
    style N2 fill:#e1f5ff,stroke:#0078d4
    style NM fill:#e1f5ff,stroke:#0078d4
    style NM1 fill:#d4f4dd,stroke:#107c10
    style NK fill:#d4f4dd,stroke:#107c10
```
This feature is a sub-task of #6447 (selective tiering for hmap and lists).