Update to tensorflow-aarch64 and pytorch-aarch64 Docker builds
- Adds a more verbose welcome message
- Enables support for v8a targets. The default TensorFlow build is now 'generic', which supports a broader range of targets, including A72 and Neoverse cores (illustrated below).
- Updates TF build to use Compute Library 21.11
- Updates PyTorch build to use Compute Library 21.11
- Disables caching of ACL softmax primitives in TensorFlow
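For context on the 'generic' target change: building for baseline Armv8-A compiles against the architecture rather than tuning for a single core, which is what lets one image cover both A72 and Neoverse parts. The GCC flags below are only a minimal illustration of that distinction; they are not taken from this PR's build scripts, and `kernel.c` is a placeholder file name (`-mcpu=neoverse-n1` needs GCC 9 or later).

```bash
# Generic build: baseline Armv8-A, runs on any v8-A core (A72, Neoverse, ...)
gcc -O3 -march=armv8-a -c kernel.c -o kernel.o

# Core-specific build: tuned for one CPU, narrower compatibility
gcc -O3 -mcpu=neoverse-n1 -c kernel.c -o kernel.o
```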
Co-authored-by: Luke Ireland <[email protected]>
Co-authored-by: Jonathan Louis Kaplan <[email protected]>
docker/pytorch-aarch64/examples/README.md: 2 additions & 3 deletions
@@ -185,11 +185,10 @@ In order to reduce the runtime, for the purposes of confirming that it runs as e
#### RNNT
-The speech recognition RNNT benchmark ([original paper available here](https://arxiv.org/pdf/1811.06621.pdf)) generates character transcriptions from raw audio samples. It is not built by default due to the time requirements for building its dependencies, and the size of the dataset (over 1000 hours of audio).
+The speech recognition RNNT benchmark ([original paper available here](https://arxiv.org/pdf/1811.06621.pdf)) generates character transcriptions from raw audio samples. The data and model parameters are not included in the Docker image by default due to their size (~1GB), but can be downloaded easily using the shell scripts described below.
-Three separate shell scripts for dependency build, model and data download, and running stages are generated for the built image using a patch file. These scripts can be found in `$HOME/examples/MLCommons/inference/speech_recognition/rnnt/` of the built image.
+Two shell scripts, one for model and data download and one for running the benchmark scenarios, are generated for the built image using a patch file. These scripts can be found in `$HOME/examples/MLCommons/inference/speech_recognition/rnnt/` of the built image.
-* `build-rnnt.sh` builds sox with flac support from source and installs the requisite Python packages
* `download_dataset_model.sh` downloads the model and test dataset
* `run.sh` runs the test, by default only the SingleStream scenario latency test
+# Build FLAC from source; CFLAGS is passed as an environment prefix so ./configure actually sees it
+(cd third_party; tar xf flac-1.3.2.tar.xz; cd flac-1.3.2; CFLAGS="-I/home/ubuntu/inference/speech_recognition/rnnt/third_party/pybind/include" ./configure --prefix=$install_dir && make && make install)
+# Build sox with FLAC support, linking against the FLAC just installed into $install_dir
+(cd third_party; tar zxf sox-14.4.2.tar.gz; cd sox-14.4.2; LDFLAGS="-L${install_dir}/lib" CFLAGS="-I${install_dir}/include -I/home/ubuntu/inference/speech_recognition/rnnt/third_party/pybind/include" ./configure --prefix=$install_dir --with-flac && make && make install)
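As a minimal usage sketch, based only on the script names, location, and default behaviour described in the README diff above (no additional flags are assumed):

```bash
# Inside the built pytorch-aarch64 image
cd $HOME/examples/MLCommons/inference/speech_recognition/rnnt/

# Fetch the model and test dataset (roughly 1GB of downloads)
./download_dataset_model.sh

# Run the benchmark; by default only the SingleStream scenario latency test
./run.sh
```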