Blow away ramdisks #415
Hardcore-fs started this conversation in Ideas
Replies: 2 comments 4 replies
-
That's a pretty good idea indeed, except you will have to specify RAM size as a parameter.
-
I completely agree with the idea - especially as RAM access is sometimes 10x faster than accessing files on a RAM disk due to FS overhead. (It seems that some NVMe drives are even faster than a tmpfs RAM disk.)
-
Can we look at completely removing all the nonsense with 120 GB of RAM to build a ramdisk, then trying to jimmy the files through the OS filesystem API?
Start the program.
Take a look at available RAM.
Allocate an FBRB.
Write as many file blocks as possible into this buffer, using a simple multiplier as the index through the data.
Write any file data that won't fit out to secondary storage. (This then scales to the user's RAM without special cases.)
Each file varies from 600 MB to 1.3 GB depending on the bucket size.
On read-back, process what is in RAM first, then the files; do NOT copy files from secondary storage into this RAM buffer for "faster" processing.
As RAM is freed when each file is destroyed, write a new set of data into this buffer; when it is full, write the balance out to secondary storage.
In effect you build your own FS in RAM, without needing to store directory and file information, worry about mutex locking, or do all the other bit-allocation and block-storage calculations that go with a filesystem.
I think you will see a significant speed-up once you take the OS API out of it.
Plus it scales well for everyone who is NOT running servers with massive amounts of RAM. A rough sketch of what I mean follows.
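For what it's worth, here is a minimal C++ sketch of the idea, assuming FBRB means a fixed, block-indexed RAM buffer, a fixed block size, and a spill directory chosen by the caller. The class name and every identifier are illustrative, not taken from any existing code, and the capacity would really come from probing available RAM at startup.

```cpp
// Minimal FBRB sketch: blocks that fit go into one big RAM buffer, indexed by
// block number * block size; blocks that don't fit spill to secondary storage.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

class FBRB {
public:
    // block_size: size of one file block in bytes.
    // capacity_blocks: how many blocks fit in the RAM we decided to claim.
    // spill_path: directory on secondary storage for overflow blocks.
    FBRB(std::size_t block_size, std::size_t capacity_blocks, std::string spill_path)
        : block_size_(block_size),
          capacity_blocks_(capacity_blocks),
          buffer_(block_size * capacity_blocks),
          spill_path_(std::move(spill_path)) {}

    // Store block `index`: in-RAM offset is index * block_size (the "mult as
    // index"); anything past the capacity goes straight to secondary storage.
    void write_block(std::size_t index, const std::uint8_t* data) {
        if (index < capacity_blocks_) {
            std::copy(data, data + block_size_, buffer_.begin() + index * block_size_);
        } else {
            spill_to_disk(index, data);
        }
    }

    // Read block `index` back, preferring RAM; spilled blocks are read from
    // disk but never copied back into the RAM buffer.
    void read_block(std::size_t index, std::uint8_t* out) const {
        if (index < capacity_blocks_) {
            std::copy(buffer_.begin() + index * block_size_,
                      buffer_.begin() + (index + 1) * block_size_, out);
        } else {
            read_from_disk(index, out);
        }
    }

private:
    void spill_to_disk(std::size_t index, const std::uint8_t* data) {
        std::FILE* f = std::fopen(spill_file(index).c_str(), "wb");
        if (f) { std::fwrite(data, 1, block_size_, f); std::fclose(f); }
    }
    void read_from_disk(std::size_t index, std::uint8_t* out) const {
        std::FILE* f = std::fopen(spill_file(index).c_str(), "rb");
        if (f) { std::fread(out, 1, block_size_, f); std::fclose(f); }
    }
    std::string spill_file(std::size_t index) const {
        return spill_path_ + "/block_" + std::to_string(index) + ".bin";
    }

    std::size_t block_size_;
    std::size_t capacity_blocks_;
    std::vector<std::uint8_t> buffer_;
    std::string spill_path_;
};
```

The point of the sketch is that placement is pure arithmetic on the block index, so there is no directory metadata, no per-block allocation bookkeeping, and no filesystem call until you actually run out of RAM.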