Release 2.12.0 corresponding to NGC container 21.07
Triton Inference Server
The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.
What's New In 2.12.0
- Added support for CPU in the RAPIDS FIL backend.
- Inference requests using the C API are now allowed to provide multiple copies of an input tensor in different memories. Triton will choose the most performant copy to use depending on where the inference request is executed.
- For ONNX models using TensorRT acceleration, the tensorrt_accelerator option in the model configuration can now specify precision and workspace size (a hedged configuration sketch follows this list): https://github.com/triton-inference-server/server/blob/main/docs/optimization.md#onnx-with-tensorrt-optimization
- Model Analyzer added an offline mode, which prioritizes throughput over latency for offline inferencing scenarios. A new set of reports and graphs is created to better analyze the offline use case.
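The fragment below is a minimal config.pbtxt sketch of the new TensorRT accelerator options. The parameter names precision_mode and max_workspace_size_bytes and the values shown are assumptions based on the optimization documentation linked above; consult that page for the exact keys supported in this release.
optimization {
  execution_accelerators {
    gpu_execution_accelerator : [
      {
        name : "tensorrt"
        # Assumed parameter names and values; see the optimization docs linked above.
        parameters { key: "precision_mode" value: "FP16" }
        parameters { key: "max_workspace_size_bytes" value: "1073741824" }
      }
    ]
  }
}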
Known Issues
- The 21.07 release includes libsystemd and libudev versions that have a known vulnerability that was discovered late in our QA process. See CVE-2021-33910 for details. This will be fixed in the next release.
- ONNX Runtime TRT support was removed due to incompatibility with TensorRT 8.0.
- There is a known issue in TensorRT 8.0 regarding accuracy for a certain case of int8 inferencing on A40 and similar GPUs. The version of TF-TRT in TF2 21.07 includes a feature that works around this issue, but TF1 21.07 does not include that feature, so Triton users may experience the accuracy drop for a small subset of model/data type/batch size combinations on A40 when TF-TRT is used through the TF1 backend. This will be fixed in the next version of TensorRT.
- Running a PyTorch TorchScript model using the PyTorch backend with multiple instances of the model configured can lead to a slowdown in model execution due to the following PyTorch issue: pytorch/pytorch#27902
- There are backwards-incompatible changes in the example Python client shared-memory support library when that library is used for tensors of type BYTES. The utils.serialize_byte_tensor() and utils.deserialize_byte_tensor() functions now return np.object_ numpy arrays where previously they returned np.bytes_ numpy arrays. Code depending on np.bytes_ must be updated. This change was necessary because the np.bytes_ type removes all trailing zeros from each array element, so binary sequences ending in zero(s) could not be represented with the old behavior. Correct usage of the Python client shared-memory support library is shown in https://github.com/triton-inference-server/server/blob/r21.03/src/clients/python/examples/simple_http_shm_string_client.py. A short sketch of the new behavior follows this list.
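The snippet below is a minimal Python sketch of the new np.object_ behavior. It assumes the tritonclient pip package is installed; the element values are illustrative only.
import numpy as np
from tritonclient import utils

# A BYTES element that ends in a zero byte: np.object_ preserves it, whereas
# the old np.bytes_ representation stripped trailing zeros.
data = np.array([b"hello", b"world\x00"], dtype=np.object_)

serialized = utils.serialize_byte_tensor(data)
print(serialized.dtype)  # object -- code that expected np.bytes_ must be updated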
Client Libraries and Examples
Ubuntu 20.04 builds of the client libraries and examples are included in this release in the attached v2.12.0_ubuntu2004.clients.tar.gz file. The SDK is also available as an Ubuntu 20.04 based NGC container. The SDK container includes the client libraries and examples, the Performance Analyzer, and the Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.
For Windows, the client libraries and some examples are available in the attached tritonserver2.12.0-sdk-win.zip file.
Windows Support
An alpha release of Triton for Windows is provided in the attached file: tritonserver2.12.0-win.zip. Because this is an alpha release, functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically, in this release:
- TensorRT models are supported. The TensorRT version is 7.2.2.
- ONNX models are supported by the ONNX Runtime backend. The ONNX Runtime version is 1.8.0. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.
- OpenVINO models are supported. The OpenVINO version is 2021.2.
- Only the GRPC endpoint is supported; HTTP/REST is not supported.
- The Prometheus metrics endpoint is not supported.
- System and CUDA shared memory are not supported.
The following components are required for this release and must be installed on the Windows system:
- NVIDIA Driver release 455 or later.
- CUDA 11.1.1
- cuDNN 8.0.5
- TensorRT 7.2.2
Jetson JetPack Support
A release of Triton for JetPack 4.6 (https://developer.nvidia.com/embedded/jetpack) is provided in the attached file: tritonserver2.12.0-jetpack4.6.tgz. This release supports TensorFlow 2.5.0, TensorFlow 1.15.5, TensorRT 8.0.1, and ONNX Runtime 1.8.0, as well as ensembles. For the ONNX Runtime backend, the TensorRT and OpenVINO execution providers are not supported. System shared memory is supported on Jetson. GPU metrics, GCS storage, S3 storage, and Azure storage are not supported.
The tar file contains the Triton server executable and shared libraries and also the C++ and Python client libraries and examples.
Installation and Usage
The following dependencies must be installed before running Triton.
apt-get update && \
apt-get install -y --no-install-recommends \
software-properties-common \
autoconf \
automake \
build-essential \
cmake \
git \
libb64-dev \
libre2-dev \
libssl-dev \
libtool \
libboost-dev \
libcurl4-openssl-dev \
libopenblas-dev \
rapidjson-dev \
patchelf \
zlib1g-dev
Note: When building Triton on Jetson, you will need a newer version of cmake; we recommend cmake 3.18.4. The script below upgrades cmake to 3.18.4.
apt remove cmake
wget https://cmake.org/files/v3.18/cmake-3.18.4.tar.gz
tar -xf cmake-3.18.4.tar.gz
(cd cmake-3.18.4 && ./configure && sudo make install)
Note: Seeing a core dump when using numpy 1.19.5 on Jetson is a known issue. We recommend using numpy version 1.19.4 or earlier to work around this issue.
To run the clients the following dependencies must be installed.
apt-get install -y --no-install-recommends \
curl \
libopencv-dev=3.2.0+dfsg-4ubuntu0.1 \
libopencv-core-dev=3.2.0+dfsg-4ubuntu0.1 \
pkg-config \
python3 \
python3-pip \
python3-dev
pip3 install --upgrade wheel setuptools cython && \
pip3 install --upgrade grpcio-tools numpy==1.19.4 future attrdict
The Python wheel for the Python client library is included in the tar file and can be installed by running the following command:
python3 -m pip install --upgrade clients/python/tritonclient-2.12.0-py3-none-linux_aarch64.whl[all]
On Jetson, the backend directory must be explicitly set with the --backend-directory flag. Triton also defaults to TensorFlow 1.x, so a version string is required to select TensorFlow 2.x:
tritonserver --model-repository=/path/to/model_repo --backend-directory=/path/to/tritonserver/backends \
--backend-config=tensorflow,version=2
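Once the server is running, the Python client installed above can be used to verify that it is reachable. The following is a minimal sketch; the model name "mymodel" and the default GRPC port 8001 are assumptions and should be adjusted to your setup.
import tritonclient.grpc as grpcclient

# Assumes Triton is running locally with the default GRPC port (8001) and a
# model named "mymodel" (hypothetical) in the model repository.
client = grpcclient.InferenceServerClient(url="localhost:8001")
print("server live :", client.is_server_live())
print("server ready:", client.is_server_ready())
print("model ready :", client.is_model_ready("mymodel"))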