Contents
- Add a `requirements.txt` (a sketch follows)
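A sketch of what such a file might contain; the package list mirrors pyUSID's typical dependencies, but the exact names and version pins should be generated from `setup.py` rather than copied from here:

```
numpy>=1.13
h5py>=2.6
dask>=0.10
matplotlib>=2.0
psutil
joblib>=0.11
ipywidgets>=5.2
```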
- Simplify `write_main_dataset()`: accept the N-dimensional array itself, so that no flattening is required of the user (see the sketch below)
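For illustration, a sketch of the proposed call pattern, assuming a 3D dataset of shape (rows, cols, spectral); the surrounding arguments of `write_main_dataset()` are elided since only the shape handling would change:

```python
import numpy as np

raw = np.random.rand(64, 64, 256)   # (rows, cols, spectral) as acquired

# Today: the user must flatten positions into a 2D (positions x spectral) array
flat = raw.reshape(64 * 64, 256)
# h5_main = write_main_dataset(h5_group, flat, ..., pos_dims, spec_dims)

# Proposed: accept `raw` directly; write_main_dataset() would infer the
# flattening from the position / spectral dimension descriptors
# h5_main = write_main_dataset(h5_group, raw, ..., pos_dims, spec_dims)
```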
- Minimize required metadata such as quantity and units. Things will look ugly with a lot of unknowns, so is this relaxation worth it?
- Simplify the documentation, or add a new series that focuses on the important functions rather than going over the many developer functions that users will not care about
- New optional attribute attached to Main datasets, e.g. `type`: `AFM scan` (see the sketch below)
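A minimal sketch with plain `h5py`; the file and dataset paths are illustrative, and `type` is the proposed attribute name:

```python
import h5py

with h5py.File('data.h5', mode='r+') as h5_f:
    h5_main = h5_f['Measurement_000/Channel_000/Raw_Data']  # illustrative path
    # Proposed optional attribute describing the kind of measurement
    h5_main.attrs['type'] = 'AFM scan'
```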
- Simplify `USIDataset`:
  - This or a new class could extend `dask.array.Array` instead of `h5py.Dataset`. Users would not need to look at the 2D form of the data. A reference to the HDF5 dataset would still be maintained.
  - Alternatively, `USIDataset.data` could be the N-dimensional form as a Dask array. However, note that this array can be manipulated; would the changes be reflected in the HDF5 dataset? (See the sketch after this list.)
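A minimal sketch of the Dask-backed idea, assuming an illustrative file path and shape. Note that a Dask array built this way is lazy and read-only with respect to the HDF5 source; changes would have to be written back explicitly (e.g. via `dask.array.store`), which is exactly the open question above:

```python
import h5py
import dask.array as da

h5_f = h5py.File('data.h5', mode='r')
h5_main = h5_f['Measurement_000/Channel_000/Raw_Data']  # illustrative path

# Lazily wrap the flattened 2D dataset; nothing is read into memory yet
lazy_2d = da.from_array(h5_main, chunks='auto')

# Users could then work with the N-dimensional view directly
lazy_nd = lazy_2d.reshape(64, 64, 256)                  # illustrative shape
```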
- Extend `USIDataset` into scientific data types such as a `PFMDataset` (see the sketch after this list) and add domain-specific:
  - data access (e.g. in-field in BEPS)
  - metadata access
  - visualization that replaces / builds on the base interactive / static visualization
  - operations (e.g. functional fitting)
  - relationships / links to other source / child datasets
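A sketch of what such a subclass could look like; the method names, the `'Field'` dimension, and the in-field example are illustrative, not existing API:

```python
from pyUSID import USIDataset

class PFMDataset(USIDataset):
    """USIDataset specialized for piezoresponse force microscopy data."""

    def get_in_field(self):
        # e.g. return only the in-field portion of a BEPS measurement;
        # the slice dict and dimension name are illustrative here
        data, success = self.slice({'Field': 0})
        return data

    def plot_loops(self, **kwargs):
        # Domain-specific visualization replacing / building on the base one
        raise NotImplementedError('hysteresis loop visualization')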
- Consider a Dask-based backend for `Process` (see the sketch below)
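A minimal sketch of what a Dask-backed compute step might look like; the function name, chunk size, and dataset paths are assumptions for illustration, not the planned design:

```python
import h5py
import dask.array as da

def dask_compute(h5_path, main_path, results_path, unit_function):
    with h5py.File(h5_path, mode='r+') as h5_f:
        h5_main = h5_f[main_path]
        # Wrap the 2D main dataset lazily, chunking over positions (rows)
        lazy = da.from_array(h5_main, chunks=(1024, h5_main.shape[1]))
        # Apply the per-position computation to every row in parallel
        results = da.apply_along_axis(unit_function, 1, lazy)
        # Stream results into the pre-created results dataset
        da.store(results, h5_f[results_path])
```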
- Add instructions on the PBS script and Python script information from distUSID
- Fix problems with Travis CI
- Relax restrictions with regard to expecting region references
- Extend the `Process` class to work with multiple GPUs using `cupy` (see the sketch below)
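A minimal sketch of round-robin multi-GPU batching with `cupy`; the unit function and batching scheme are illustrative assumptions:

```python
import cupy as cp
import numpy as np

def multi_gpu_compute(data_2d, unit_function, n_gpus=None):
    n_gpus = n_gpus or cp.cuda.runtime.getDeviceCount()
    # Split the positions (rows) into one batch per GPU
    batches = np.array_split(data_2d, n_gpus)
    results = []
    for gpu_id, batch in enumerate(batches):
        with cp.cuda.Device(gpu_id):
            d_batch = cp.asarray(batch)            # host -> device
            d_result = unit_function(d_batch)      # compute on this GPU
            results.append(cp.asnumpy(d_result))   # device -> host
    return np.concatenate(results)
```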
- Examples within docstrings for popular functions where possible. Not a high priority due to the complexity of the functions, the need for data, and the availability of cookbooks
- File dialog for Jupyter not working on macOS
- Revisit and address as many pending TODOs as possible
- ITK for visualization - https://github.com/InsightSoftwareConsortium/itk-jupyter-widgets
- Look into versioneer
- A sister package with the base LabVIEW sub-VIs that enable writing h5USID files. The actual acquisition can be ignored.
- Intelligent method (using timing) to ensure that `Process` and `Fitter` compute over small chunks and write to file periodically. Alternatively, expose the number of positions per chunk to the user and provide an intelligent guess by default (see the sketch below)
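A minimal sketch of the timing idea: time a small probe batch, then size chunks so each compute-and-write cycle lasts roughly a target duration. All names and constants are illustrative:

```python
import time

def guess_positions_per_chunk(unit_function, data_2d,
                              target_seconds=60, probe_size=10):
    # Time a small probe batch to estimate the per-position cost
    t0 = time.time()
    for row in data_2d[:probe_size]:
        unit_function(row)
    per_position = max((time.time() - t0) / probe_size, 1e-9)
    # At least one position per chunk, at most the whole dataset
    return max(1, min(len(data_2d), int(target_seconds / per_position)))
```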
- Function for saving a sub-tree to a new HDF5 file (see the sketch below)
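h5py's built-in `Group.copy()` already handles the recursive copy, so a sketch could be as small as this; the function name and arguments are illustrative:

```python
import h5py

def save_subtree(src_file, group_path, dest_file):
    with h5py.File(src_file, mode='r') as src, \
            h5py.File(dest_file, mode='w') as dest:
        # Recursively copies the group, its datasets, and their attributes
        src.copy(group_path, dest)
```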
- Windows-compatible function for deleting a sub-tree
- Profile code to see where things are slow
- Cloud deployment:
  - Container installation - not all that challenging given that pyUSID is pure Python
  - Check out HDF5Cloud
  - AWS cloud cluster
  - `Pydap.client`: wrapper of `opendap` - accessing data remotely and remote execution of notebooks: https://github.com/caseyjlaw/jupyter-notebooks/blob/master/vlite_hdfits_opendap_demo.ipynb
- Alternate visualization packages - http://lightning-viz.org