When the CachingPageManager evicts a page, it does not actually free the memory; that is the GC's job. This means that when an evicted page is requested again, we always re-read it from disk, even though the evicted page object might still be in memory, not yet reclaimed by the GC: essentially a preventable read.
Instead of throwing evicted pages away, a secondary cache keeping weak references to them would thus reduce file I/O and improve performance.
I implemented a basic secondary cache, nothing more than a Dictionary<ulong, WeakReference<Page>> really, plus some cache hit/miss/rate counters, and let it loose on one of my data stores in a real-world program that read about 2M items and wrote about 300K new/updated items, using 16K pages, MaxCachePages = 10,000, and a non-concurrent workstation GC. The primary cache hit rate was 96.7%, so the current cache works for the most part (though I also have rudimentary code to prioritize metadata/index pages over data pages during eviction). But the secondary cache (the weak-reference one I added) was able to serve 62.1% of the fetch requests that missed the primary cache, essentially for free when you compare a Dictionary lookup and the dictionary's small memory footprint against hitting the disk. Of all fetch requests, only 1.25% were read from disk (compared to 3.3% without the secondary cache), and most of those were requests for pages never fetched before, which thus could not possibly have been cached already.
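For illustration, the secondary cache described above can be sketched roughly like this. This is a minimal, hypothetical version: the `Page` class, `OnEvicted`, and `TryGet` names are stand-ins I made up, not the actual CachingPageManager API.

```csharp
using System;
using System.Collections.Generic;

class Page
{
    public ulong Id;
    public byte[] Data = new byte[16 * 1024]; // 16K pages, as in the test above
}

class SecondaryPageCache
{
    // Evicted pages are tracked only through weak references, so the GC
    // remains free to reclaim them under memory pressure.
    private readonly Dictionary<ulong, WeakReference<Page>> _evicted = new();

    // Called by the primary cache when it evicts a page.
    public void OnEvicted(Page page) =>
        _evicted[page.Id] = new WeakReference<Page>(page);

    // Returns true and the page if the evicted object is still alive in
    // memory; otherwise the caller falls back to a disk read.
    public bool TryGet(ulong id, out Page? page)
    {
        page = null;
        if (_evicted.TryGetValue(id, out var weak) && weak.TryGetTarget(out page))
            return true;
        _evicted.Remove(id); // drop the stale entry if its target was collected
        return false;
    }
}
```

On a miss, the fetch path would first consult the primary cache, then `TryGet` here, and only then go to disk, which is the 96.7% / 62.1% / 1.25% split reported above.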