Anyone dealing with large datasets knows that the stock tools in R are slow at loading (non-binary) data into R. This package started as an attempt to provide high-performance parsing tools that minimize copying and avoid the use of strings where possible (see mstrsplit, for example).
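As a minimal sketch of the idea, mstrsplit splits a character (or raw) vector into a matrix in a single pass; the sample data below is made up for illustration:

```r
library(iotools)

## parse delimited lines into a matrix in one pass,
## without allocating intermediate per-field strings
x <- c("1|foo|2.2",
       "2|bar|3.3")
m <- mstrsplit(x, sep = "|")
##      [,1] [,2]  [,3]
## [1,] "1"  "foo" "2.2"
## [2,] "2"  "bar" "3.3"
```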
To allow processing of arbitrarily large files we have added a way to process input chunk-wise, making it possible to compute on streaming input as well as on very large files (see chunk.reader and chunk.apply).
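A minimal sketch of the chunk-wise pattern, assuming a hypothetical pipe-separated file `big.txt` whose first column is numeric: chunk.apply feeds each chunk (complete lines) to the supplied function, so only one chunk needs to be held in memory at a time.

```r
library(iotools)

## compute one partial sum per chunk, then combine;
## "big.txt" is a hypothetical input file
totals <- chunk.apply("big.txt",
                      function(chunk) {
                          ## parse the raw chunk directly into a numeric matrix
                          m <- mstrsplit(chunk, sep = "|", type = "numeric")
                          sum(m[, 1])
                      },
                      CH.MERGE = c)  # collect partial sums into a vector
sum(totals)
```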
The next natural step was to wrap support for Hadoop streaming. The major goal was to make it possible to compute using Hadoop Map Reduce by writing code that is very natural, much like using lapply on data chunks, without the need to know anything about Hadoop. See the wiki page for the idea and the hmr function for the documentation.
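A hedged sketch of what such a job can look like, adapted from the counting pattern described on the wiki; the HDFS path is hypothetical, and a working Hadoop streaming setup is assumed:

```r
library(iotools)

## count occurrences of the values in the first column;
## "/user/me/input.csv" is a hypothetical HDFS path
r <- hmr(hinput("/user/me/input.csv"),
         ## map runs on each chunk, lapply-style: emit per-chunk counts
         map = function(m) table(m[, 1]),
         ## reduce sees values grouped by key: sum the partial counts
         reduce = function(m) ctapply(as.numeric(m), names(m), sum))
```

Note that neither the map nor the reduce function mentions Hadoop at all; they are ordinary R functions operating on parsed chunks, which is exactly the point.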