A parallel framework for deep learning. Read the paper [here](https://arxiv.org/abs/1902.06714).
Features:
- Dense, fully connected neural layers
- Convolutional and max-pooling layers (experimental, forward propagation only)
- Flatten and reshape layers (forward and backward passes)
- Loading dense and convolutional models from Keras h5 files
- Stochastic and mini-batch gradient descent for back-propagation
- Data-based parallelism
- Several activation functions and their derivatives
| Layer type | Constructor name | Supported input layers | Rank of output array | Forward pass | Backward pass |
|------------|------------------|------------------------|----------------------|--------------|---------------|
| Input (1-d and 3-d) | `input` | n/a | 1, 3 | n/a | n/a |
| Dense (fully-connected) | `dense` | `input1d` | 1 | ✅ | ✅ |
| Convolutional (2-d) | `conv2d` | `input3d`, `conv2d`, `maxpool2d` | 3 | ✅ | ❌ |
| Max-pooling (2-d) | `maxpool2d` | `input3d`, `conv2d`, `maxpool2d` | 3 | ✅ | ❌ |
| Flatten | `flatten` | `input3d`, `conv2d`, `maxpool2d` | 1 | ✅ | ✅ |
| Reshape (1-d to 3-d) | `reshape` | `input1d`, `dense`, `flatten` | 3 | ✅ | ✅ |
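The constructors in the table compose into a network. Below is a minimal sketch along the lines of the bundled cnn example; the layer shapes, filter counts, string-valued `activation` argument, and the `print_info` method are assumptions taken from the bundled examples, not a spec:

```fortran
program cnn_sketch
  ! Minimal sketch based on the bundled cnn example; shapes and
  ! hyperparameters here are illustrative assumptions.
  use nf, only: network, input, conv2d, maxpool2d, flatten, dense
  implicit none
  type(network) :: net

  net = network([ &
    input(3, 32, 32), & ! 3-d input: channels, height, width
    conv2d(filters=16, kernel_size=3, activation='relu'), &
    maxpool2d(pool_size=2), &
    flatten(), &        ! rank-3 output -> rank-1, as in the table
    dense(10) &         ! fully-connected output layer
  ])

  call net % print_info()
end program cnn_sketch
```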
Get the code:
```
git clone https://github.com/modern-fortran/neural-fortran
cd neural-fortran
```
Required dependencies are:
- A Fortran compiler
- HDF5 (must be provided by the OS package manager or your own build from source)
- functional-fortran, h5fortran, json-fortran (all handled by neural-fortran's build systems, no need for a manual install)
- fpm or CMake for building the code
Optional dependencies are:
- OpenCoarrays (for parallel execution with GFortran)
- BLAS, MKL, or similar (for offloading `matmul` and `dot_product` calls)
- curl (for downloading testing and example datasets)
Compilers tested include:
- gfortran-9.4.0
- ifort-2021.4
- ifx-2021.4
With gfortran, the following will create an optimized build of neural-fortran:
```
fpm build \
  --profile release \
  --flag "-fno-frontend-optimize -I$HDF5INC -L$HDF5LIB"
```
HDF5 is now a required dependency, so you have to provide it to fpm.
The above command assumes that the `HDF5INC` and `HDF5LIB` environment
variables are set to the include and library paths, respectively, of your
HDF5 install.
The `-fno-frontend-optimize` flag disables some optimizations that may be
harmful when building neural-fortran.
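For example, with HDF5 installed via the OS package manager, the variables could be set like this (the paths below are hypothetical; substitute the ones from your own install):

```
# Hypothetical paths for illustration; adjust to your HDF5 install.
export HDF5INC=/usr/include/hdf5/serial
export HDF5LIB=/usr/lib/x86_64-linux-gnu/hdf5/serial
```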
If you use GFortran and want to run neural-fortran in parallel,
you must first install OpenCoarrays.
Once installed, use the compiler wrappers `caf` and `cafrun` to build and
execute in parallel, respectively:
```
fpm build \
  --compiler caf \
  --profile release \
  --flag "-fno-frontend-optimize -I$HDF5INC -L$HDF5LIB"
fpm test \
  --profile release \
  --flag "-fno-frontend-optimize -I$HDF5INC -L$HDF5LIB"
```
For the time being, you need to pass the same compiler flags to `fpm test`
as you did to `fpm build`, so that fpm knows it should use the same build
profile.
See Fortran Package Manager for more info on fpm.
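Assuming the build above succeeded, individual example programs (listed further below) can then be run through fpm's `--example` flag with the same profile and flags; `simple` here is just one of the bundled example names:

```
fpm run --example simple \
  --profile release \
  --flag "-fno-frontend-optimize -I$HDF5INC -L$HDF5LIB"
```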
Alternatively, you can build with CMake; passing `-DSERIAL=1` produces a serial build:

```
mkdir build
cd build
cmake .. -DSERIAL=1
make
```
Tests and examples will be built in the `bin/` directory.
If you use GFortran and want to run neural-fortran in parallel,
you must first install OpenCoarrays.
Once installed, use the compiler wrappers `caf` and `cafrun` to build and
execute in parallel, respectively:
```
FC=caf cmake ..
make
cafrun -n 4 bin/mnist # run MNIST example on 4 cores
```
If you want to build with a different compiler, such as Intel Fortran,
set the `HDF5_ROOT` environment variable to the root path of your
Intel HDF5 build, and specify `FC` when issuing `cmake`:

```
FC=ifort cmake ..
```

for a parallel build of neural-fortran, or

```
FC=ifort cmake .. -DSERIAL=1
```

for a serial build.
To use an external BLAS or MKL library for `matmul` calls, run cmake like this:

```
cmake .. -DBLAS=-lblas
```

where the value of `-DBLAS` should point to the desired BLAS implementation,
which has to be available in the linking path.
This option is currently available only with gfortran.
To build with debugging flags enabled, type:

```
cmake .. -DCMAKE_BUILD_TYPE=debug
```
Type

```
ctest
```

to run the tests.
The easiest way to get a sense of how to use neural-fortran is to look at examples, in increasing level of complexity:
- simple: Approximating a simple, constant data relationship
- sine: Approximating a sine function
- mnist: Hand-written digit recognition using the MNIST dataset
- cnn: Creating and running forward a simple CNN using `input`, `conv2d`, `maxpool2d`, `flatten`, and `dense` layers.
- dense_from_keras: Creating a pre-trained dense model from a Keras HDF5 file and running the inference.
- cnn_from_keras: Creating a pre-trained convolutional model from a Keras HDF5 file and running the inference.
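The two Keras examples suggest the shortest path to inference with a pre-trained model; here is a hedged sketch, assuming the `network` constructor accepts a Keras HDF5 filename as in the dense_from_keras example (the file name is hypothetical):

```fortran
program keras_sketch
  ! Hedged sketch: assumes network() accepts a Keras HDF5 filename,
  ! as in the dense_from_keras example. 'model.h5' is hypothetical.
  use nf, only: network
  implicit none
  type(network) :: net
  net = network('model.h5')
  call net % print_info()
end program keras_sketch
```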
The examples also show you the extent of the public API that's meant to be
used in applications, i.e. anything from the `nf` module.
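For a concrete taste of that API, here is a minimal sketch mirroring the simple example. The method names (`forward`, `backward`, `update`, `predict`) and their signatures, including the learning rate passed to `update`, are assumptions based on the bundled examples, so defer to the example code if they differ:

```fortran
program simple_sketch
  ! Minimal sketch mirroring the bundled simple example.
  ! Method names/signatures are assumptions based on that example.
  use nf, only: network, input, dense
  implicit none
  type(network) :: net
  real :: x(3), y(2)
  integer :: n

  net = network([input(3), dense(5), dense(2)])
  call net % print_info()

  x = [0.2, 0.4, 0.6]      ! illustrative input
  y = [0.123456, 0.246802] ! illustrative target

  do n = 1, 500
    call net % forward(x)  ! forward pass
    call net % backward(y) ! back-propagate the error
    call net % update(1.)  ! SGD step with learning rate 1
  end do

  print *, net % predict(x)
end program simple_sketch
```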
Examples 3-6 rely on curl to download the needed datasets, so make sure you have it installed on your system. Most Linux distributions include it out of the box. The dataset will be downloaded only the first time you run the example in any given directory.
If you're using Windows or don't have curl for any other reason, download mnist.tar.gz directly and unpack it in the directory in which you will run the example program.
API documentation can be generated with FORD. Assuming you have FORD installed on your system, run

```
ford ford.md
```

from the neural-fortran top-level directory to generate the API documentation in `doc/html`. Point your browser to `doc/html/index.html` to read it.
Thanks to all open-source contributors to neural-fortran: @awvwgk, @ivan-pi, @jacobwilliams, @jvdp1, @milancurcic, @pirpyn, @rouson, and @scivision.
Development of convolutional networks in neural-fortran was funded by a contract from NASA Goddard Space Flight Center to the University of Miami.