Snapshot: Adaptive chunk sizes #119

@tjwsch

Description

Chunking enables faster reading and writing of subsets of an HDF5 file, but to benefit from it the chunk size has to be chosen well. Since the size of a snapshot is unknown before the first snapshot in parameter space has been computed, the snapshot computation needs to determine the chunk size automatically from the first output. In a parallel run of the snapshot computation, chunks must not allocate large amounts of space that never hold data, and results from different processes must remain mergeable. Currently, the snapshot computation uses a fixed chunk size.
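
There is no code attached to this issue; the following is only a minimal sketch of the idea, assuming the snapshots are written with h5py in Python. The function name `create_snapshot_dataset`, the `max_chunk_bytes` cap, and the layout (snapshots stacked along the first axis) are illustrative assumptions, not the project's actual API.

```python
import h5py
import numpy as np


def create_snapshot_dataset(h5file, name, first_snapshot, max_chunk_bytes=1024**2):
    """Create a resizable dataset whose chunk shape is derived from the first snapshot.

    The chunk covers at most one snapshot along the appended axis and is capped
    at ``max_chunk_bytes`` so sparsely filled chunks do not waste space.
    (Hypothetical helper, not part of the existing snapshot computation.)
    """
    snapshot = np.asarray(first_snapshot)
    itemsize = snapshot.dtype.itemsize
    chunk_shape = list(snapshot.shape)
    # Halve the largest dimension until a single chunk fits the byte budget.
    while int(np.prod(chunk_shape)) * itemsize > max_chunk_bytes and max(chunk_shape) > 1:
        i = int(np.argmax(chunk_shape))
        chunk_shape[i] = max(1, chunk_shape[i] // 2)
    dset = h5file.create_dataset(
        name,
        shape=(1, *snapshot.shape),        # one snapshot stored so far
        maxshape=(None, *snapshot.shape),  # unlimited along the snapshot axis
        chunks=(1, *chunk_shape),          # chunk shape fixed from the first output
        dtype=snapshot.dtype,
    )
    dset[0] = snapshot
    return dset


# Illustrative usage: append later snapshots by resizing along axis 0.
with h5py.File("snapshots.h5", "w") as f:
    first = np.random.rand(64, 64)
    dset = create_snapshot_dataset(f, "snapshots", first)
    nxt = np.random.rand(64, 64)
    dset.resize(dset.shape[0] + 1, axis=0)
    dset[-1] = nxt
```

Limiting a chunk to (at most) one snapshot keeps a chunk from spanning outputs of different processes, which should make merging per-process results straightforward and avoids chunks that are mostly empty.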

Metadata

Assignees

No one assigned

    Labels

    enhancement (Enhance existing functionality)

    Type

    No type

    Projects

    No projects

    Relationships

    None yet

    Development

    No branches or pull requests