Release 2.14.0 corresponding to NGC container 21.09
Triton Inference Server
The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.
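As a minimal illustration of the HTTP endpoint, the sketch below checks server readiness and requests model metadata with curl; it assumes a server listening on Triton's default HTTP port 8000, and the model name densenet_onnx is a placeholder.
# Minimal sketch: query a running Triton server over HTTP.
# Assumes the default HTTP port 8000; "densenet_onnx" is a placeholder model name.
curl -v localhost:8000/v2/health/ready
curl localhost:8000/v2/models/densenet_onnx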
What's New In 2.14.0
- Full-featured beta version of Business Logic Scripting (BLS) released.
- Beta version of a basic Java client released. See https://github.com/triton-inference-server/client/tree/r21.09/src/java for a list of supported features.
- A stack trace is now printed when Triton crashes to aid in debugging.
- The Triton Client SDK wheel file is now available directly from PyPI for both Ubuntu and Windows (see the install command after this list).
- The TensorRT backend is now an optional part of Triton, just like all the other backends. The compose utility can be used to create a Triton container that does not contain the TensorRT backend (see the compose sketch after this list).
- Model Analyzer can profile with perf_analyzer's C-API.
- Model Analyzer can use the CUDA device index in addition to the GPU UUID in the --gpus flag (see the Model Analyzer example after this list).
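A minimal sketch of installing the client SDK from PyPI; the tritonclient package name and the [all] extra (which pulls in both the HTTP and GRPC client dependencies) are the same ones referenced in the Installation and Usage section below.
# Install the Triton client libraries and utilities from PyPI.
# The [all] extra installs both the HTTP and GRPC client dependencies.
pip3 install tritonclient[all]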
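A sketch of using the compose utility to build an image without the TensorRT backend, assuming the compose.py script in the r21.09 branch of the server repository and its --backend and --output-name flags; the backend list and image name are examples only.
# Hedged sketch: build a custom Triton image containing only the listed backends
# (TensorRT is simply omitted). Assumes compose.py and its --backend/--output-name flags.
git clone -b r21.09 https://github.com/triton-inference-server/server.git
cd server
python3 compose.py --backend onnxruntime --backend python --output-name tritonserver_custom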
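A sketch of a Model Analyzer invocation exercising both new capabilities, assuming the model-analyzer profile subcommand, its c_api launch mode, and the --gpus flag; resnet50_torch and the paths are placeholders, and the --gpus value may be either a CUDA device index or a GPU UUID.
# Hedged sketch: profile a model through perf_analyzer's C-API and select a GPU
# by CUDA device index (a GPU UUID is also accepted by --gpus).
model-analyzer profile --model-repository /path/to/model_repo \
    --profile-models resnet50_torch \
    --triton-launch-mode c_api \
    --gpus 0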
Known Issues
- Triton's TensorRT support depends on the input-consumed feature of TensorRT. In some rare cases using TensorRT 8.0 and earlier versions, the input-consumed event fires earlier than expected, causing Triton to overwrite input tensors while they are still in use and leading to corrupt input data being used for inference. This situation occurs when the inputs feed directly into a TensorRT layer that is optimized into a ForeignNode in the builder log. If you encounter accuracy issues with your TensorRT model, you can work around the issue by enabling the output_copy_stream option in your model's configuration (https://github.com/triton-inference-server/common/blob/main/protobuf/model_config.proto#L816); see the configuration sketch after this list.
- Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).
- Triton metrics may not work if the host machine is running a separate DCGM agent, either on bare metal or in a container.
- There is a known issue in TensorRT 8.0 regarding accuracy for a certain case of int8 inferencing on A40 and similar GPUs. The version of TF-TRT in TF2 21.09 includes a feature that works around this issue, but TF1 21.08 does not include that feature, and therefore Triton users may experience the accuracy drop for a small subset of model/data type/batch size combinations on A40 when TF-TRT is used through the TF1 backend. This will be fixed in the next version of TensorRT.
- Running a PyTorch TorchScript model using the PyTorch backend, where multiple instances of a model are configured, can lead to a slowdown in model execution due to the following PyTorch issue: pytorch/pytorch#27902
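A minimal sketch of the output_copy_stream workaround above, assuming the option sits under the optimization / cuda settings of the model configuration (per the model_config.proto link) and that the affected model's configuration file is config.pbtxt; the model name and path are placeholders.
# Hedged sketch: enable output_copy_stream for an affected TensorRT model by
# appending the optimization block below to its config.pbtxt (path is a placeholder).
cat >> /path/to/model_repo/my_trt_model/config.pbtxt <<'EOF'
optimization {
  cuda {
    output_copy_stream: true
  }
}
EOF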
Client Libraries and Examples
Ubuntu 20.04 builds of the client libraries and examples are included in this release in the attached v2.14.0_ubuntu2004.clients.tar.gz file. The SDK is also available as an Ubuntu 20.04 based NGC container. The SDK container includes the client libraries and examples, Performance Analyzer, and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.
For Windows, the client libraries and some examples are available in the attached tritonserver2.14.0-sdk-win.zip file.
Windows Support
An alpha release of Triton for Windows is provided in the attached file: tritonserver2.14.0-win.zip. This is an alpha release so functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically in this release:
- TensorRT models are supported. The TensorRT version is 8.0.1.6.
- ONNX models are supported by the ONNX Runtime backend. The ONNX Runtime version is 1.8.1. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.
- OpenVINO models are supported. The OpenVINO version is 2021.2.
- Only the GRPC endpoint is supported; HTTP/REST is not supported.
- Prometheus metrics endpoint is not supported.
- System and CUDA shared memory are not supported.
The following components are required for this release and must be installed on the Windows system:
- NVIDIA Driver release 470 or later.
- CUDA 11.4.1
- cuDNN 8.2.2.26
- TensorRT 8.0.1.6
Jetson Jetpack Support
A release of Triton for JetPack 4.6 (https://developer.nvidia.com/embedded/jetpack) is provided in the attached tar file: tritonserver2.14.0-jetpack4.6.tgz.
- This release supports TensorFlow 2.6.0, TensorFlow 1.15.5, TensorRT 8.0.1.6, and OnnxRuntime 1.8.1, as well as ensembles.
- For the OnnxRuntime backend, the OpenVINO execution provider is not supported, but the TensorRT execution provider is supported.
- System shared memory is supported on Jetson.
- GPU metrics, GCS storage, S3 storage and Azure storage are not supported.
The tar file contains the Triton server executable and shared libraries and also the C++ and Python client libraries and examples.
Installation and Usage
The following dependencies must be installed before building or running Triton.
apt-get update && \
apt-get install -y --no-install-recommends \
software-properties-common \
autoconf \
automake \
build-essential \
cmake \
git \
libb64-dev \
libre2-dev \
libssl-dev \
libtool \
libboost-dev \
libcurl4-openssl-dev \
libopenblas-dev \
rapidjson-dev \
patchelf \
zlib1g-dev
Note: Building Triton on Jetson requires a newer version of cmake; we recommend cmake 3.21.0. Below is a script to upgrade your cmake version to 3.21.0. cmake 3.18.4 is sufficient if you are not enabling OnnxRuntime support.
apt remove cmake
wget https://cmake.org/files/v3.21/cmake-3.21.0.tar.gz
tar -xf cmake-3.21.0.tar.gz
(cd cmake-3.21.0 && ./configure && make install)
Note: There is a known issue where numpy 1.19.5 causes a core dump on Jetson. We recommend using numpy 1.19.4 or earlier to work around this issue.
To run the clients, the following dependencies must be installed.
apt-get install -y --no-install-recommends \
curl \
libopencv-dev=3.2.0+dfsg-4ubuntu0.1 \
libopencv-core-dev=3.2.0+dfsg-4ubuntu0.1 \
pkg-config \
python3 \
python3-pip \
python3-dev
pip3 install --upgrade wheel setuptools cython && \
pip3 install --upgrade grpcio-tools numpy==1.19.4 future attrdict
The Python wheel for the Python client library is included in the tar file and can be installed by running the following command:
python3 -m pip install --upgrade clients/python/tritonclient-2.14.0-py3-none-linux_aarch64.whl[all]
On Jetson, the backend directory must be set explicitly with the --backend-directory flag. Triton also defaults to using TensorFlow 1.x; a version string is required to specify TensorFlow 2.x, for example:
tritonserver --model-repository=/path/to/model_repo --backend-directory=/path/to/tritonserver/backends \
--backend-config=tensorflow,version=2