Currently compression is done per write. But there are a lot of duplicated bytes between raft log entries; if compression were enabled for a stream instead, the compression ratio could be improved further.
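As a rough illustration of why cross-entry redundancy matters, here is a minimal sketch comparing per-write compression with compression against a shared dictionary. It uses the `zstd` crate's dictionary API purely as a stand-in for the project's actual codec, and the workload is hypothetical:

```rust
use std::io;

// Per-write compression: each entry is compressed in isolation, so bytes
// shared between consecutive raft log entries are re-encoded every time.
fn compress_per_write(entries: &[Vec<u8>]) -> io::Result<usize> {
    let mut total = 0;
    for e in entries {
        total += zstd::bulk::compress(e, 3)?.len();
    }
    Ok(total)
}

// Dictionary-assisted compression: a shared dictionary carries the redundancy
// between entries, so each small entry compresses better on its own.
fn compress_with_dict(entries: &[Vec<u8>], dict: &[u8]) -> io::Result<usize> {
    let mut compressor = zstd::bulk::Compressor::with_dictionary(3, dict)?;
    let mut total = 0;
    for e in entries {
        total += compressor.compress(e)?.len();
    }
    Ok(total)
}

fn main() -> io::Result<()> {
    // Hypothetical workload: many small, similar entries (0~512 B each).
    let entries: Vec<Vec<u8>> = (0..10_000u32)
        .map(|i| format!("put region=42 key=row_{:08} ts={}", i, i).into_bytes())
        .collect();
    // Dictionary training needs a reasonable amount of sample data to succeed.
    let dict = zstd::dict::from_samples(&entries, 4096)?;
    println!("per-write: {} bytes", compress_per_write(&entries)?);
    println!("with dict: {} bytes", compress_with_dict(&entries, &dict)?);
    Ok(())
}
```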
One issue here is recovery speed. Right now only indexes are read during recovery; #129 shows a full scan (with a large enough read-block-size) can bring nearly a 3x regression.
Performance-wise, streaming compression seems to have lower latency and fewer allocations, but compression would then sit on the critical path inside the mutex.
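To make that trade-off concrete, here is a minimal sketch of the streaming variant, assuming a zstd stream encoder purely for illustration: the long-lived encoder keeps its context across writes (better ratio), but every append pays the compression cost while holding the writer lock.

```rust
use std::fs::File;
use std::io::{self, Write};
use std::sync::Mutex;

// One long-lived encoder wraps the log file, so the compression context sees
// all previously written bytes, but each append compresses under the same
// lock that serializes writes.
struct LogWriter {
    enc: Mutex<zstd::stream::Encoder<'static, File>>,
}

impl LogWriter {
    fn open(path: &str) -> io::Result<Self> {
        let file = File::create(path)?;
        Ok(LogWriter {
            enc: Mutex::new(zstd::stream::Encoder::new(file, 3)?),
        })
    }

    fn append(&self, entry: &[u8]) -> io::Result<()> {
        let mut enc = self.enc.lock().unwrap();
        enc.write_all(entry)?; // compression happens inside the mutex
        enc.flush()            // a real implementation would also call finish() on shutdown
    }
}
```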
I'm not sure if such an algorithm exists, but compression could collect a dictionary over the whole lifetime while flushing to disk block by block, perhaps every 64 MiB. This could take care of both recovery speed and runtime performance; a rough sketch follows below.
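A rough sketch of that block-by-block idea, where the names and the 64 MiB threshold come from the comment above rather than from any existing API: each block is sealed as an independently decodable frame compressed against the dictionary collected so far, so recovery can decode only the blocks it needs.

```rust
use std::io;

// Hypothetical block size from the comment above; not an engine constant.
const BLOCK_SIZE: usize = 64 * 1024 * 1024;

struct BlockCompressor {
    dict: Vec<u8>,        // dictionary collected over the life of the file
    buf: Vec<u8>,         // uncompressed entries for the current block
    blocks: Vec<Vec<u8>>, // sealed blocks, each independently decodable
}

impl BlockCompressor {
    // Buffer entries until a block fills up, then compress it as one frame
    // against the shared dictionary. Runtime cost is amortized per block, and
    // recovery can decompress a single block without scanning the rest.
    fn append(&mut self, entry: &[u8]) -> io::Result<()> {
        self.buf.extend_from_slice(entry);
        if self.buf.len() >= BLOCK_SIZE {
            self.seal_block()?;
        }
        Ok(())
    }

    fn seal_block(&mut self) -> io::Result<()> {
        // The dictionary itself could also be retrained periodically from
        // recent entries as more data is observed.
        let mut c = zstd::bulk::Compressor::with_dictionary(3, &self.dict)?;
        self.blocks.push(c.compress(&self.buf)?);
        self.buf.clear();
        Ok(())
    }
}
```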
A log entry is usually only 0~512B as observed in a typical online service.