Releases: triton-inference-server/server
Release 2.41.0 corresponding to NGC container 23.12
Triton Inference Server
The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.
New Features and Improvements
-
Added metrics support to TRTLLM backend when running within Triton.
-
Request ID will be included in opentelemetry tracing.
-
For Jetson devices which support Jetpack 6.0 and above, Triton now publishes containers on NGC, based on the latest version of Jetpack, with the suffix -igpu. These containers are:
  - XX.YY-py3-igpu - much like the XX.YY-py3 container, this contains tritonserver and all supported backends for Jetson devices.
  - XX.YY-py3-sdk-igpu - much like the XX.YY-py3-sdk container, this contains the Tritonclient and Triton Tools supported on Jetson devices.
-
Refer to the 23.12 column of the Frameworks Support Matrix for container image versions on which the 23.12 inference server container is based.
Known Issues
-
Reuse-grpc-port and reuse-http-port are now properly parsed as booleans. 0 and 1 will continue to work as values. Any other integers will throw an error.
-
The TensorRT-LLM backend provides limited support of Triton extensions and features.
-
The TensorRT-LLM backend may core dump on server shutdown. This impacts server teardown only and will not impact inferencing.
-
When using decoupled models, there is a possibility that response order as sent from the backend may not match with the order in which these responses are received by the streaming gRPC client. Note that this only applies to responses from different requests. Any responses corresponding to the same request will still be received in their expected order, relative to each other.
-
The FasterTransformer backend is only officially supported for 22.12, though it can be built for Triton container versions up to 23.07.
-
The Java CAPI is known to have intermittent segfaults; we're looking for a root cause.
-
Some systems which implement malloc() may not release memory back to the operating system right away, causing a false memory leak. This can be mitigated by using a different malloc implementation. Tcmalloc and jemalloc are installed in the Triton container and can be used by specifying the library in LD_PRELOAD. We recommend experimenting with both tcmalloc and jemalloc to determine which one works better for your use case.
-
Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.
-
Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names matches what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: pytorch/pytorch#38273
-
Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA. The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.
-
Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU. Refer to pytorch/pytorch#66930 for more information.
-
Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).
-
Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.
-
When cloud storage (AWS, GCS, AZURE) is used as a model repository and a model has multiple versions, Triton creates an extra local copy of the cloud model’s folder in the temporary directory, which is deleted upon server’s shutdown.
-
Model Analyzer is not able to analyze and optimize ensemble model configs due to a bug in the way composing models are loaded.
-
Model Analyzer does not work with SSL via gRPC.
Client Libraries and Examples
Ubuntu 22.04 builds of the client libraries and examples are included in this release in the attached v2.41.0_ubuntu22.04.clients.tar.gz
file. The SDK is also available as an Ubuntu 22.04 based NGC Container. The SDK container includes the client libraries and examples, Performance Analyzer and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.
For Windows, the client libraries and some examples are available in the attached tritonserver2.41.0-sdk-win.zip
file.
Windows Support
Note
There is no Windows release for 23.12, the latest release is 23.11.
Jetson iGPU Support
Important
For Jetpack v5.1.2 running Triton 23.06 or older, an update has been posted on the 23.06 release page, tritonserver2.35.0-jetpack5.1.2-update-1.tgz
, which fixes CVE-2023-31036. See our security bulletin for more details.
A release of Triton for IGX is provided in the attached tar file: tritonserver2.41.0-igpu.tgz
.
- This release supports TensorFlow 2.14.0, TensorRT 8.6.2.3, ONNX Runtime 1.16.3, PyTorch 2.2.0a0+81ea7a4, Python 3.10, as well as ensembles.
- The ONNXRuntime backend does not support the OpenVINO and TensorRT execution providers. The CUDA execution provider is in Beta.
- System shared memory is supported on Jetson. CUDA shared memory is not supported.
- GPU metrics, GCS storage, S3 storage and Azure storage are not supported.
The tar file contains the Triton server executable and shared libraries and also the C++ and Python client libraries and examples. For more information on how to install and use Triton on JetPack refer to jetson.md
.
The wheel for the Python client library is present in the tar file and can be installed by running the following command:
python3 -m pip install --upgrade clients/python/tritonclient-2.41.0-py3-none-manylinux2014_aarch64.whl[all]
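Once the wheel is installed, the client can be driven from Python. The following is a minimal sketch (not taken from the release), assuming a Triton server listening on localhost:8000 and a hypothetical model named "my_model" with a single FP32 input INPUT0 of shape [1, 16] and an output OUTPUT0:

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server running on the default HTTP port (assumed address).
client = httpclient.InferenceServerClient(url="localhost:8000")

# Prepare a single FP32 input tensor for a hypothetical model "my_model".
input_data = np.random.rand(1, 16).astype(np.float32)
infer_input = httpclient.InferInput("INPUT0", list(input_data.shape), "FP32")
infer_input.set_data_from_numpy(input_data)

# Run inference and read the output tensor back by name.
result = client.infer(model_name="my_model", inputs=[infer_input])
print(result.as_numpy("OUTPUT0"))
```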
Release 2.40.0 corresponding to NGC container 23.11
Triton Inference Server
The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.
New Features and Improvements
-
Starting with the 23.11 release, Triton containers supporting iGPU architectures are published, and run on Jetson devices. Please refer to the Frameworks Support Matrix for information regarding which iGPU hardware/software is supported by which container.
-
Implicit state management has been enhanced to support growing buffers and use a single buffer for both input and output states.
-
Sequence batcher has been enhanced to support iterative scheduling.
-
The backend API has been enhanced to support rescheduling a request. Currently, only Python backend and Custom C++ backends support request rescheduling.
-
TRT-LLM backend now supports request cancellation.
-
Configuration of a vLLM backend model can now be auto-completed by Triton. The user just needs to pass backend: "vllm" to leverage the auto-complete feature.
-
Python backend now supports parameters in BLS requests.
-
Python backend GPU tensor support has been improved to provide better performance.
-
A new tutorial demonstrating how to deploy LLaMa2 using TRT-LLM has been added.
-
The HTTP endpoint has been enhanced to support access restriction.
-
Secure Deployment Guide has been added to provide guidance on deploying Triton securely.
-
The client model loading API no longer allows uploading files outside the model repository.
-
DCGM version has been upgraded to 3.2.6.
-
The Kubernetes Deploy example now supports Kubernetes’ new StartupProbe to allow Triton pods time to finish startup before running health probes.
Known Issues
-
When using the generate streaming endpoint, Triton will segfault if the client closes the connection before all responses have been generated. The fix will be available in the next release.
-
Reuse-grpc-port and reuse-http-port are now properly parsed as booleans. 0 and 1 will continue to work as values. Any other integers will throw an error.
-
The TensorRT-LLM backend provides limited support of Triton extensions and features.
-
The TensorRT-LLM backend may core dump on server shutdown. This impacts server teardown only and will not impact inferencing.
-
When using decoupled models, there is a possibility that response order as sent from the backend may not match with the order in which these responses are received by the streaming gRPC client. Note that this only applies to responses from different requests. Any responses corresponding to the same request will still be received in their expected order, relative to each other.
-
The FasterTransformer backend is only officially supported for 22.12, though it can be built for Triton container versions up to 23.07.
-
The Java CAPI is known to have intermittent segfaults; we're looking for a root cause.
-
Some systems which implement malloc() may not release memory back to the operating system right away, causing a false memory leak. This can be mitigated by using a different malloc implementation. Tcmalloc and jemalloc are installed in the Triton container and can be used by specifying the library in LD_PRELOAD. We recommend experimenting with both tcmalloc and jemalloc to determine which one works better for your use case.
-
Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.
-
Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names matches what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: pytorch/pytorch#38273
-
Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA. The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.
-
Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU. Refer to pytorch/pytorch#66930 for more information.
-
Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).
-
Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.
-
When cloud storage (AWS, GCS, AZURE) is used as a model repository and a model has multiple versions, Triton creates an extra local copy of the cloud model’s folder in the temporary directory, which is deleted upon server’s shutdown.
Client Libraries and Examples
Ubuntu 22.04 builds of the client libraries and examples are included in this release in the attached v2.40.0_ubuntu22.04.clients.tar.gz
file. The SDK is also available as an Ubuntu 22.04 based NGC Container. The SDK container includes the client libraries and examples, Performance Analyzer and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.
For Windows, the client libraries and some examples are available in the attached tritonserver2.40.0-sdk-win.zip
file.
Windows Support
A beta release of Triton for Windows is provided in the attached file:tritonserver2.40.0-win.zip
. This is a beta release so functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically in this release:
-
HTTP/REST and GRPC endpoints are supported.
-
ONNX models are supported by the ONNXRuntime backend. The ONNXRuntime version is 1.16.3. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.
-
OpenVINO models are supported. The OpenVINO version is 2023.0.0.
-
Prometheus metrics endpoint is not supported.
-
System and CUDA shared memory are not supported.
To use the Windows version of Triton, you must install all the necessary dependencies on your Windows system. These dependencies are available in the Dockerfile.win10.min. The Dockerfile includes the following CUDA-related components:
-
CUDA 12.3.0
-
cuDNN 8.9.6.50
-
TensorRT 8.6.1.6
Jetson iGPU Support
A release of Triton for IGX is provided in the attached tar file: tritonserver2.40.0-igpu.tgz
.
- This release supports TensorFlow 2.14.0, TensorRT 8.6.2.3, ONNX Runtime 1.16.3, PyTorch 2.2.0a0+6a974be, Python 3.10, as well as ensembles.
- The ONNXRuntime backend does not support the OpenVINO and TensorRT execution providers. The CUDA execution provider is in Beta.
- System shared memory is supported on Jetson. CUDA shared memory is not supported.
- GPU metrics, GCS storage, S3 storage and Azure storage are not supported.
The tar file contains the Triton server executable and shared libraries and also the C++ and Python client libraries and examples. For more information on how to install and use Triton on JetPack refer to jetson.md
.
The wheel for the Python client library is present in the tar file and can be installed by running the following command:
python3 -m pip install --upgrade clients/python/tritonclient-2.40.0-py3-none-manylinux2014_aarch64.whl[all]
Release 2.39.0 corresponding to NGC container 23.10
Triton Inference Server
The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.
New Features and Improvements
-
Triton now supports the TensorRT-LLM backend. This backend uses NVIDIA TensorRT-LLM, which replaces the FasterTransformer backend. A new container with the TensorRT-LLM backend is available on NGC for 23.10.
-
Added support for handling client-side request cancellation in Triton server and backends. (server docs, client docs).
-
Triton can deploy supported models on the vLLM engine using the new vLLM backend. A new container with vLLM backend is available on NGC for 23.10.
-
Added Generate extension (beta) which provides better REST APIs for inference on Large Language Models. See the sketch after this list for an example request.
-
Added new tutorials to the tutorial repo covering how to run vLLM with the new REST API, how to run Llama2 with the TensorRT-LLM backend, and how to run HuggingFace models.
-
Added support for scalar I/O in the ONNXRuntime backend.
-
Added support for writing custom backends in Python, a.k.a. Python-based backends.
-
Refer to the 23.10 column of the Frameworks Support Matrix for container image versions on which the 23.10 inference server container is based.
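For the Generate extension noted above, a request is a simple JSON POST to a per-model route. The following is a minimal sketch, assuming a hypothetical LLM model named "my_llm" deployed on localhost and the text_input/text_output field names used by the beta Generate extension; consult the extension documentation for the exact schema:

```python
import requests

# Hypothetical model name; replace with an LLM actually deployed on the server.
url = "http://localhost:8000/v2/models/my_llm/generate"
payload = {"text_input": "What is the Triton Inference Server?"}

response = requests.post(url, json=payload)
response.raise_for_status()

# The generated text is expected under "text_output" (field name per the beta spec).
print(response.json().get("text_output"))
```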
Known Issues
-
For its initial release, the TensorRT-LLM backend provides limited support of Triton extensions and features.
-
The TensorRT-LLM backend may core dump on server shutdown. This impacts server teardown only and will not impact inferencing.
-
When a model uses a backend which is not found, Triton would reference the missing backend as `backend_name/model.py` in the error message. This is already fixed for future releases.
-
When using decoupled models, there is a possibility that response order as sent from the backend may not match with the order in which these responses are received by the streaming gRPC client. Note that this only applies to responses from different requests. Any responses corresponding to the same request will still be received in their expected order, relative to each other.
-
The FasterTransformer backend is only officially supported for 22.12, though it can be built for Triton container versions up to 23.07.
-
The Java CAPI is known to have intermittent segfaults; we're looking for a root cause.
-
Some systems which implement malloc() may not release memory back to the operating system right away causing a false memory leak. This can be mitigated by using a different malloc implementation. Tcmalloc and jemalloc are installed in the Triton container and can be used by specifying the library in LD_PRELOAD. We recommend experimenting with both tcmalloc and jemalloc to determine which one works better for your use case.
-
Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.
-
Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names matches what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: pytorch/pytorch#38273
-
Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA. The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.
-
Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU. Refer to pytorch/pytorch#66930 for more information.
-
Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).
-
Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.
-
When cloud storage (AWS, GCS, AZURE) is used as a model repository and a model has multiple versions, Triton creates an extra local copy of the cloud model’s folder in the temporary directory, which is deleted upon server’s shutdown.
Client Libraries and Examples
Ubuntu 22.04 builds of the client libraries and examples are included in this release in the attached v2.39.0_ubuntu22.04.clients.tar.gz
file. The SDK is also available as an Ubuntu 22.04 based NGC Container. The SDK container includes the client libraries and examples, Performance Analyzer and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.
For Windows, the client libraries and some examples are available in the attached tritonserver2.39.0-sdk-win.zip
file.
Windows Support
Note
There is no Windows release for 23.10, the latest release is 23.09.
Jetson Jetpack Support
Note
There is no Jetpack release for 23.10, the latest release is 23.06.
Release 2.38.0 corresponding to NGC container 23.09
Triton Inference Server
The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.
New Features and Improvements
-
Triton now has Python bindings for the C API. Please refer to this PR for usage.
-
Triton now forwards request parameters to each of the composing models of an ensemble model.
-
The Filesystem API now supports named temporary cache directories when downloading models using the repository agent.
-
Added the number of requests currently in the queue to the metrics API. Documentation can be found here. A polling sketch follows this list.
-
Python backend models can now respond with error codes in addition to error messages.
-
TensorRT backend now supports TensorRT version compatibility across models generated with the same major version of TensorRT. Use the --backend-config=tensorrt,version-compatible=true flag to enable this feature.
-
Triton’s backend API now supports accessing the inference response outputs by name or by index. See the new API here.
-
The Python backend now supports loading Pytorch models directly. This feature is experimental and should be treated as Beta.
-
Fixed an issue where, if the user didn't call SetResponseReleaseCallback, canceling a new request could cancel the old response factory as well. Now, when canceling a request which is being re-used, a new response factory is created for each inference.
-
Refer to the 23.09 column of the Frameworks Support Matrix for container image versions on which the 23.09 inference server container is based.
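The queue-depth addition mentioned above surfaces on the existing Prometheus endpoint, so it can be polled like any other Triton metric. A minimal sketch, assuming the default metrics port 8002 and that the new counter's name contains "pending_request_count" (check the metrics documentation for the exact name and labels):

```python
import requests

# Scrape Triton's Prometheus metrics endpoint (default port 8002).
metrics_text = requests.get("http://localhost:8002/metrics").text

# Print the per-model queue-depth entries; the exact metric name is an
# assumption here, see the metrics documentation for the published name.
for line in metrics_text.splitlines():
    if "pending_request_count" in line and not line.startswith("#"):
        print(line)
```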
Known Issues
-
When using decoupled models, there is a possibility that response order as sent from the backend may not match with the order in which these responses are received by the streaming gRPC client. Note that this only applies to responses from different requests. Any responses corresponding to the same request will still be received in their expected order, relative to each other.
-
The FasterTransformer backend is only officially supported for 22.12, though it can be built for Triton container versions up to 23.07.
-
The Java CAPI is known to have intermittent segfaults; we're looking for a root cause.
-
Some systems which implement malloc() may not release memory back to the operating system right away, causing a false memory leak. This can be mitigated by using a different malloc implementation. Tcmalloc and jemalloc are installed in the Triton container and can be used by specifying the library in LD_PRELOAD. We recommend experimenting with both tcmalloc and jemalloc to determine which one works better for your use case.
-
Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.
-
Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names matches what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: pytorch/pytorch#38273
-
Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA. The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.
-
Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU. Refer to pytorch/pytorch#66930 for more information.
-
Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).
-
Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.
-
When cloud storage (AWS, GCS, AZURE) is used as a model repository and a model has multiple versions, Triton creates an extra local copy of the cloud model’s folder in the temporary directory, which is deleted upon server’s shutdown.
Client Libraries and Examples
Ubuntu 22.04 builds of the client libraries and examples are included in this release in the attached v2.38.0_ubuntu2204.clients.tar.gz
file. The SDK is also available as an Ubuntu 22.04 based NGC Container. The SDK container includes the client libraries and examples, Performance Analyzer and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.
For Windows, the client libraries and some examples are available in the attached tritonserver2.38.0-sdk-win.zip
file.
Windows Support
A beta release of Triton for Windows is provided in the attached file: tritonserver2.38.0-win.zip
. This is a beta release so functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically in this release:
-
HTTP/REST and GRPC endpoints are supported.
-
ONNX models are supported by the ONNXRuntime backend. The ONNXRuntime version is 1.15.1. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.
-
OpenVINO models are supported. The OpenVINO version is 2023.0.0.
-
Prometheus metrics endpoint is not supported.
-
System and CUDA shared memory are not supported.
To use the Windows version of Triton, you must install all the necessary dependencies on your Windows system. These dependencies are available in the Dockerfile.win10.min. The Dockerfile includes the following CUDA-related components:
-
CUDA 12.2.0
-
cuDNN 8.9.4.25
-
TensorRT 8.6.1.6
Jetson Jetpack Support
Note
There is no Jetpack release for 23.09, the latest release is 23.06.
Release 2.37.0 corresponding to NGC container 23.08
Triton Inference Server
The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.
New Features and Improvements
-
Triton can load model instances in parallel for supporting backends. See TRITONBACKEND_BackendAttributeSetParallelModelInstanceLoading for more details. As of 23.08, only python and onnxruntime backends support loading model instances in parallel.
-
Python backend models can capture trace for composing child models when executing BLS requests.
-
Triton OpenTelemetry Tracing exposes resource settings which can be used to configure the service name and version.
-
Python backend supports directly loading and serving PyTorch models with torch.compile(). A model.py sketch follows this list.
-
Exposed the preserve_ordering field for the oldest strategy sequence batcher. The default behavior of the oldest strategy sequence batcher, which preserved response order across independent requests belonging to different sequences, has changed from True to False. Note: This setting does not impact the order of responses within a sequence.
-
Refer to the 23.08 column of the Frameworks Support Matrix for container image versions on which the 23.08 inference server container is based.
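For the torch.compile() support noted above, the model.py follows the usual Python backend structure. The following is a minimal sketch, assuming a hypothetical model configuration with one FP32 input INPUT0 of shape [-1, 16] and one FP32 output OUTPUT0; a real model.py would load trained weights in initialize():

```python
import torch
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # Hypothetical network; a real deployment would load trained weights here.
        net = torch.nn.Linear(16, 8)
        # Compile once at load time so execute() runs the optimized graph.
        self.model = torch.compile(net)

    def execute(self, requests):
        responses = []
        for request in requests:
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            tensor = torch.from_numpy(in0.as_numpy())
            with torch.no_grad():
                out = self.model(tensor)
            out_tensor = pb_utils.Tensor("OUTPUT0", out.numpy())
            responses.append(
                pb_utils.InferenceResponse(output_tensors=[out_tensor])
            )
        return responses
```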
Known Issues
-
Triton uses a version of the OpenTelemetry C++ library that can cause Triton to crash when OpenTelemetry's exporter times out.
-
When using decoupled models, there is a possibility that response order as sent from the backend may not match with the order in which these responses are received by the streaming gRPC client.
-
The "fastertransformer_backend" is only officially supported for 22.12, though it can be built for Triton container versions up to 23.07.
-
The Java CAPI is known to have intermittent segfaults; we're looking for a root cause.
-
Some systems which implement malloc() may not release memory back to the operating system right away, causing a false memory leak. This can be mitigated by using a different malloc implementation. tcmalloc and jemalloc are installed in the Triton container and can be used by specifying the library in LD_PRELOAD. We recommend experimenting with both tcmalloc and jemalloc to determine which one works better for your use case.
-
Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.
-
Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names matches what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: pytorch/pytorch#38273
-
Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA. The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.
-
Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU. Refer to pytorch/pytorch#66930 for more information.
-
Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).
-
Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.
Client Libraries and Examples
Ubuntu 22.04 builds of the client libraries and examples are included in this release in the attached v2.37.0_ubuntu2204.clients.tar.gz
file. The SDK is also available as an Ubuntu 22.04 based NGC Container. The SDK container includes the client libraries and examples, Performance Analyzer and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.
For Windows, the client libraries and some examples are available in the attached tritonserver2.37.0-sdk-win.zip
file.
Windows Support
A beta release of Triton for Windows is provided in the attached file:tritonserver2.37.0-win.zip
. This is a beta release so functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically in this release:
-
HTTP/REST and GRPC endpoints are supported.
-
ONNX models are supported by the ONNXRuntime backend. The ONNXRuntime version is 1.15.1. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.
-
OpenVINO models are supported. The OpenVINO version is 2023.0.0.
-
Prometheus metrics endpoint is not supported.
-
System and CUDA shared memory are not supported.
To use the Windows version of Triton, you must install all the necessary dependencies on your Windows system. These dependencies are available in the Dockerfile.win10.min. The Dockerfile includes the following CUDA-related components:
-
CUDA 12.2.0
-
cuDNN 8.9.4.25
-
TensorRT 8.6.1.6
Jetson Jetpack Support
Note
There is no Jetpack release for 23.08, the latest release is 23.06.
Release 2.36.0 corresponding to NGC container 23.07
Triton Inference Server
The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.
New Features and Improvements
-
"pytorch_backend" supports implicit state management.
-
"python_backend" supports direct serving of TensorFlow SavedModel.
-
"python_backend" supports unpacked Conda execution environment.
-
"python_backend" added the model loading APIs for BLS usage.
-
Triton OpenTelemetry trace mode supports ensemble model tracing.
-
Triton Python client supports DLPack tensors in CUDA shared memory utilities.
-
Triton now supports S3 model repositories that contain more than 1000 files.
-
Added Java binding of the Triton in-process C++ API.
-
Refer to the 23.07 column of the Frameworks Support Matrix for container image versions on which the 23.07 inference server container is based.
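For the BLS model loading APIs noted above, a Python backend model can load a composing model on demand and then call it with an ordinary BLS request. The following is a minimal sketch, assuming a hypothetical composing model named "addsub" and the load_model/unload_model helpers described in the Python backend documentation:

```python
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # Load the composing model on demand (helper assumed from the
        # Python backend model-loading API documentation).
        pb_utils.load_model(model_name="addsub")

    def execute(self, requests):
        responses = []
        for request in requests:
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")

            # Forward the input to the composing model via BLS.
            bls_request = pb_utils.InferenceRequest(
                model_name="addsub",
                requested_output_names=["OUTPUT0"],
                inputs=[in0],
            )
            bls_response = bls_request.exec()
            if bls_response.has_error():
                raise pb_utils.TritonModelException(bls_response.error().message())

            out = pb_utils.get_output_tensor_by_name(bls_response, "OUTPUT0")
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses

    def finalize(self):
        # Release the composing model when this model is unloaded.
        pb_utils.unload_model(model_name="addsub")
```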
Known Issues
-
The "fastertransformer_backend" build only works with Triton 23.04 and older releases.
-
Some systems which implement malloc() may not release memory back to the operating system right away, causing a false memory leak. This can be mitigated by using a different malloc implementation. tcmalloc and jemalloc are installed in the Triton container and can be used by specifying the library in LD_PRELOAD. We recommend experimenting with both tcmalloc and jemalloc to determine which one works better for your use case.
-
Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.
-
Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names matches what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: pytorch/pytorch#38273
-
Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA. The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.
-
Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU. Refer to pytorch/pytorch#66930 for more information.
-
Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).
-
Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.
Client Libraries and Examples
Ubuntu 22.04 builds of the client libraries and examples are included in this release in the attached v2.36.0_ubuntu2204.clients.tar.gz
file. The SDK is also available as an Ubuntu 22.04 based NGC Container. The SDK container includes the client libraries and examples, Performance Analyzer and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.
For Windows, the client libraries and some examples are available in the attached tritonserver2.36.0-sdk-win.zip
file.
Windows Support
A beta release of Triton for Windows is provided in the attached file:tritonserver2.36.0-win.zip
. This is a beta release so functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically in this release:
-
HTTP/REST and GRPC endpoints are supported.
-
ONNX models are supported by the ONNXRuntime backend. The ONNXRuntime version is 1.15.1. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.
-
OpenVINO models are supported. The OpenVINO version is 2023.0.0.
-
Prometheus metrics endpoint is not supported.
-
System and CUDA shared memory are not supported.
To use the Windows version of Triton, you must install all the necessary dependencies on your Windows system. These dependencies are available in the Dockerfile.win10.min. The Dockerfile includes the following CUDA-related components:
-
CUDA 12.1.1
-
cuDNN 8.9.3.28
-
TensorRT 8.6.1.6
Jetson Jetpack Support
Note
There is no Jetpack release for 23.07, the latest release is 23.06.
Release 2.35.0 corresponding to NGC container 23.06
Important
The tritonserver2.35.0-jetpack5.1.2-update-1.tgz release asset has been replaced with tritonserver2.35.0-jetpack5.1.2-update-2.tgz, which includes the fix for CVE-2023-31036. See our security bulletin for more details.
This new updated package also contains a Boost filesystem shared library that Triton depends on, located in the boost_filesystem folder. This shared library must be added to the dynamic loader path for proper operation.
This asset can be built from source using the r23.06-update-2-jp tag.
Triton Inference Server
The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.
New Features and Improvements
-
Support for KIND_MODEL instance type has been extended to the PyTorch backend.
-
The gRPC clients can now indicate whether they want to receive the flags associated with each response. This can help the clients to programmatically determine when all the responses for a given request have been received on the client side for decoupled models.
-
Added beta support for using Redis as a cache for inference requests.
-
The statistics extension now includes the memory usage of the loaded models. This statistic is currently implemented only for the TensorRT and ONNXRuntime backends. A query sketch follows this list.
-
Added support for batch inputs in ragged batching for PyTorch backend.
-
Added serial sequences mode for Perf Analyzer.
-
Refer to the 23.06 column of the Frameworks Support Matrix for container image versions on which the 23.06 inference server container is based.
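The memory usage reported by the statistics extension (see the bullet above) is returned through the existing per-model statistics endpoint. A minimal sketch, assuming a hypothetical TensorRT or ONNXRuntime model named "my_model" served on localhost; the exact field carrying memory usage is described in the statistics extension documentation:

```python
import json
import requests

# Query the statistics extension for a hypothetical model "my_model".
stats = requests.get("http://localhost:8000/v2/models/my_model/stats").json()

# Pretty-print the full statistics document; the new memory usage entries
# (TensorRT and ONNXRuntime backends only in this release) appear among the
# per-model statistics fields.
print(json.dumps(stats, indent=2))
```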
Known Issues
-
The FasterTransformer backend build only works with Triton 23.04 and older releases.
-
Tensorflow backend no longer supports TensorFlow version 1.
-
OpenVINO 2022.1 is used in the OpenVINO backend and the OpenVINO execution provider for the Onnxruntime Backend. OpenVINO 2022.1 is not officially supported on Ubuntu 22.04 and should be treated as beta.
-
Some systems which implement malloc() may not release memory back to the operating system right away, causing a false memory leak. This can be mitigated by using a different malloc implementation. Tcmalloc and jemalloc are installed in the Triton container and can be used by specifying the library in LD_PRELOAD. We recommend experimenting with both tcmalloc and jemalloc to determine which one works better for your use case.
-
Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.
-
Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names matches what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: pytorch/pytorch#38273
-
Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA. The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.
-
Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU. Refer to pytorch/pytorch#66930 for more information.
-
Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).
-
Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.
Client Libraries and Examples
Ubuntu 22.04 builds of the client libraries and examples are included in this release in the attached v2.35.0_ubuntu2204.clients.tar.gz
file. The SDK is also available as an Ubuntu 22.04 based NGC Container. The SDK container includes the client libraries and examples, Performance Analyzer and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.
For Windows, the client libraries and some examples are available in the attached tritonserver2.35.0-sdk-win.zip
file.
Windows Support
A beta release of Triton for Windows is provided in the attached file:tritonserver2.35.0-win.zip
. This is a beta release so functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically in this release:
-
HTTP/REST and GRPC endpoints are supported.
-
ONNX models are supported by the ONNXRuntime backend. The ONNXRuntime version is 1.15.0. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.
-
OpenVINO models are supported. The OpenVINO version is 2021.4.
-
Prometheus metrics endpoint is not supported.
-
System and CUDA shared memory are not supported.
To use the Windows version of Triton, you must install all the necessary dependencies on your Windows system. These dependencies are available in the Dockerfile.win10.min. The Dockerfile includes the following CUDA-related components:
-
CUDA 12.1.1
-
cuDNN 8.9.2.26
-
TensorRT 8.6.1.6
Jetson Jetpack Support
A release of Triton for JetPack is provided in the attached tar file: tritonserver2.35.0-jetpack5.1.2.tgz
.
- This release supports TensorFlow 2.12.0, TensorRT 8.5.2.2, ONNX Runtime 1.15.0, PyTorch 2.1.0a0+41361538, Python 3.8, as well as ensembles.
- The ONNXRuntime backend does not support the OpenVINO and TensorRT execution providers. The CUDA execution provider is in Beta.
- System shared memory is supported on Jetson. CUDA shared memory is not supported.
- GPU metrics, GCS storage, S3 storage and Azure storage are not supported.
The tar file contains the Triton server executable and shared libraries and also the C++ and Python client libraries and examples. For more information on how to install and use Triton on JetPack refer to jetson.md
.
The wheel for the Python client library is present in the tar file and can be installed by running the following command:
python3 -m pip install --upgrade clients/python/tritonclient-2.35.0-py3-none-manylinux2014_aarch64.whl[all]
Release 2.34.0 corresponding to NGC container 23.05
Triton Inference Server
The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.
What's New in 2.34.0
-
Python backend supports Custom Metrics allowing users to define and report counters and gauges similar to the C API. A sketch follows this list.
-
Python Triton Client defines the Triton Client Plugin API allowing users to register custom plugins to add or modify request headers. This feature is in beta and is subject to change in future releases.
-
Improved performance of model instance creation/removal. When the model instance group is the only model configuration change, Triton will update the model with the number of instances needed rather than reloading the model. This feature is limited to non-sequence models only. Read more about this feature here in bullet point four.
-
Added new command line option --metrics-address=<address> allowing the metrics server to bind to a different address than the default 0.0.0.0.
-
Reduced the default number of model load threads from 2*(number of CPU cores) to 4. This eliminates Triton hitting resource limits on systems with large CPU core counts. Use the --model-load-thread-count command line option to change this default.
-
Added support for DLPack Python specification in Python backend.
-
Refer to the 23.05 column of the Frameworks Support Matrix for container image versions on which the 23.05 inference server container is based.
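For the custom metrics support noted above, a metric family is declared once at model load and updated per request from model.py. A minimal sketch, assuming the MetricFamily/Metric API exposed by pb_utils as described in the Python backend custom-metrics documentation (a pass-through model with one input INPUT0 and one output OUTPUT0 is used for illustration):

```python
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # Declare a counter metric family once per model load (API assumed
        # from the Python backend custom-metrics documentation).
        self.request_family = pb_utils.MetricFamily(
            name="custom_requests_processed_total",
            description="Requests processed by this model",
            kind=pb_utils.MetricFamily.COUNTER,
        )
        self.request_counter = self.request_family.Metric(
            labels={"source": "python_backend_example"}
        )

    def execute(self, requests):
        responses = []
        for request in requests:
            # Count every handled request; the value is exported on /metrics.
            self.request_counter.increment(1)

            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            out = pb_utils.Tensor("OUTPUT0", in0.as_numpy())
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses
```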
Known Issues
-
Tensorflow backend no longer supports TensorFlow version 1.
-
OpenVINO 2022.1 is used in the OpenVINO backend and the OpenVINO execution provider for the Onnxruntime Backend. OpenVINO 2022.1 is not officially supported on Ubuntu 22.04 and should be treated as beta.
-
Some systems which implement malloc() may not release memory back to the operating system right away, causing a false memory leak. This can be mitigated by using a different malloc implementation. Tcmalloc and jemalloc are installed in the Triton container and can be used by specifying the library in LD_PRELOAD. We recommend experimenting with both tcmalloc and jemalloc to determine which one works better for your use case.
-
Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.
-
Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names matches what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: pytorch/pytorch#38273.
-
Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA.
The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.
-
Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU.
Refer to pytorch/pytorch#66930 for more information.
-
Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).
-
Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.
Client Libraries and Examples
Ubuntu 20.04 builds of the client libraries and examples are included in this release in the attached v2.34.0_ubuntu2004.clients.tar.gz
file. The SDK is also available as an Ubuntu 20.04 based NGC Container. The SDK container includes the client libraries and examples, Performance Analyzer and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.
For Windows, the client libraries and some examples are available in the attached tritonserver2.34.0-sdk-win.zip
file.
Windows Support
A beta release of Triton for Windows is provided in the attached file:tritonserver2.34.0-win.zip
. This is a beta release so functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically in this release:
-
HTTP/REST and GRPC endpoints are supported.
-
ONNX models are supported by the ONNX Runtime backend. The ONNX Runtime version is 1.15.0. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.
-
OpenVINO models are supported. The OpenVINO version is 2021.4.
-
Prometheus metrics endpoint is not supported.
-
System and CUDA shared memory are not supported.
To use the Windows version of Triton, you must install all the necessary dependencies on your Windows system. These dependencies are available in the Dockerfile.win10.min. The Dockerfile includes the following CUDA-related components:
-
CUDA 12.1.1
-
cuDNN 8.9.1.23
-
TensorRT 8.6.1.6
Jetson Jetpack Support
A release of Triton for JetPack is provided in the attached tar file: tritonserver2.34.0-jetpack5.1.tgz
.
- This release supports TensorFlow 2.12.0, TensorRT 8.5.2.2, ONNX Runtime 1.15.0, PyTorch 2.0.0a0+8aa34602, Python 3.8, as well as ensembles.
- The ONNX Runtime backend does not support the OpenVINO and TensorRT execution providers. The CUDA execution provider is in Beta.
- System shared memory is supported on Jetson. CUDA shared memory is not supported.
- GPU metrics, GCS storage, S3 storage and Azure storage are not supported.
The tar file contains the Triton server executable and shared libraries and also the C++ and Python client libraries and examples. For more information on how to install and use Triton on JetPack refer to jetson.md
.
The wheel for the Python client library is present in the tar file and can be installed by running the following command:
python3 -m pip install --upgrade clients/python/tritonclient-2.34.0-py3-none-manylinux2014_aarch64.whl[all]
Release 2.33.0 corresponding to NGC container 23.04
Triton Inference Server
The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.
What's New in 2.33.0
-
Triton can now load models concurrently, reducing server start-up time.
-
Sequence batcher with direct scheduling strategy now includes experimental support for schedule policy.
-
Triton’s ragged batching support has been extended to PyTorch backend.
-
Triton can now forward HTTP/GRPC headers as inference request parameters to the backend.
-
Triton python backend’s business logic scripting now allows developers to select a specific device to receive output tensors from a BLS call.
-
Triton latency metrics can now be obtained as configurable quantiles over a sliding time window using experimental metrics summary support.
-
Users can now restrict the access of the protocols on a given Triton endpoint.
-
Triton now provides limited support for tracing inference requests using OpenTelemetry Trace APIs.
-
Model Analyzer now supports BLS Models.
-
Refer to the 23.04 column of the Frameworks Support Matrix for container image versions on which the 23.04 inference server container is based.
Known Issues
-
Tensorflow backend no longer supports TensorFlow version 1.
-
Triton Inferentia guide is out of date. Some users have reported issues with running Triton on AWS Inferentia instances.
-
Some systems which implement malloc() may not release memory back to the operating system right away, causing a false memory leak. This can be mitigated by using a different malloc implementation. Tcmalloc is installed in the Triton container and can be used by specifying the library in LD_PRELOAD.
-
Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.
-
Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names matches what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: pytorch/pytorch#38273.
-
Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA.
The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.
-
Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU.
Refer to pytorch/pytorch#66930 for more information.
-
Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).
-
Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.
Client Libraries and Examples
Ubuntu 20.04 builds of the client libraries and examples are included in this release in the attached v2.33.0_ubuntu2004.clients.tar.gz
file. The SDK is also available as an Ubuntu 20.04 based NGC Container. The SDK container includes the client libraries and examples, Performance Analyzer and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.
For Windows, the client libraries and some examples are available in the attached tritonserver2.33.0-sdk-win.zip
file.
Windows Support
A beta release of Triton for Windows is provided in the attached file:tritonserver2.33.0-win.zip
. This is a beta release so functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically in this release:
-
HTTP/REST and GRPC endpoints are supported.
-
ONNX models are supported by the ONNX Runtime backend. The ONNX Runtime version is 1.14.1. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.
-
OpenVINO models are supported. The OpenVINO version is 2021.4.
-
Prometheus metrics endpoint is not supported.
-
System and CUDA shared memory are not supported.
To use the Windows version of Triton, you must install all the necessary dependencies on your Windows system. These dependencies are available in the Dockerfile.win10.min. The Dockerfile includes the following CUDA-related components:
-
CUDA 11.8.0
-
cuDNN 8.8.1.3
-
TensorRT 8.5.3.1
Jetson Jetpack Support
Note
In order to build the Jetson target from source code, please refer to the "r23.04-jetson" branch of "python_backend".
A release of Triton for JetPack is provided in the attached tar file: tritonserver2.33.0-jetpack5.1.tgz
.
- This release supports TensorFlow 2.12.0, TensorRT 8.5.2.2, ONNX Runtime 1.14.1, PyTorch 2.0.0a0+8aa34602, Python 3.8, as well as ensembles.
- The ONNX Runtime backend does not support the OpenVINO and TensorRT execution providers. The CUDA execution provider is in Beta.
- System shared memory is supported on Jetson. CUDA shared memory is not supported.
- GPU metrics, GCS storage, S3 storage and Azure storage are not supported.
The tar file contains the Triton server executable and shared libraries and also the C++ and Python client libraries and examples. For more information on how to install and use Triton on JetPack refer to jetson.md
.
The wheel for the Python client library is present in the tar file and can be installed by running the following command:
python3 -m pip install --upgrade clients/python/tritonclient-2.33.0-py3-none-manylinux2014_aarch64.whl[all]
Release 2.32.0 corresponding to NGC container 23.03
Triton Inference Server
The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.
What's New in 2.32.0
-
Added the Parameters Extension which allows an inference request to provide custom parameters that cannot be provided as inputs. These parameters can be used in the Python backend as described here. A sketch follows this list.
-
Added support for models that use decoupled API for Business Scripting Logic (BLS) in Python backend. Examples can be found here.
-
The same model name can be used across different repositories if the --model-namespacing flag is set.
-
Triton’s Response Cache feature has been converted internally to a shared library implementation of the new TRITONCACHE APIs, similar to how backends and repo agents are used today. The default cache implementation is local_cache, which is equivalent to the fixed-size in-memory buffer implementation used before. The --response-cache-byte-size flag will continue to function in the same way, but the --cache-config flag will be the preferred method of cache configuration moving forward. For more information, see the cache documentation here.
-
Triton’s trace tool now supports tracing for request_id.
-
Refer to the 23.03 column of the Frameworks Support Matrix for container image versions on which the 23.03 inference server container is based.
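On the server side, a Python backend model can read the custom parameters carried by a request through the Parameters Extension noted above. A minimal sketch, assuming request.parameters() returns the parameters as a JSON string as described in the Python backend documentation, and using a hypothetical "scale" parameter:

```python
import json

import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            # Parameters arrive as a JSON string (assumption noted above);
            # an empty string means the request carried no parameters.
            raw = request.parameters()
            params = json.loads(raw) if raw else {}
            scale = float(params.get("scale", 1.0))  # hypothetical parameter

            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            out = pb_utils.Tensor("OUTPUT0", in0.as_numpy() * scale)
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses
```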
Known Issues
-
Support for TensorFlow1 will be removed starting from 23.04.
-
Triton Inferentia guide is out of date. Some users have reported issues with running Triton on AWS Inferentia instances.
-
Some systems which implement malloc() may not release memory back to the operating system right away, causing a false memory leak. This can be mitigated by using a different malloc implementation. Tcmalloc is installed in the Triton container and can be used by specifying the library in LD_PRELOAD.
-
Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.
-
Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names matches what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: pytorch/pytorch#38273
-
Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA.
The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.
-
Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU. Refer to pytorch/pytorch#66930 for more information.
-
Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).
-
Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.
Client Libraries and Examples
Ubuntu 20.04 builds of the client libraries and examples are included in this release in the attached v2.32.0_ubuntu2004.clients.tar.gz
file. The SDK is also available as an Ubuntu 20.04 based NGC Container. The SDK container includes the client libraries and examples, Performance Analyzer and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.
For Windows, the client libraries and some examples are available in the attached tritonserver2.32.0-sdk-win.zip
file.
Windows Support
A beta release of Triton for Windows is provided in the attached file:tritonserver2.32.0-win.zip
. This is a beta release so functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically in this release:
-
HTTP/REST and GRPC endpoints are supported.
-
ONNX models are supported by the ONNX Runtime backend. The ONNX Runtime version is 1.14.1. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.
-
OpenVINO models are supported. The OpenVINO version is 2021.4.
-
Prometheus metrics endpoint is not supported.
-
System and CUDA shared memory are not supported.
To use the Windows version of Triton, you must install all the necessary dependencies on your Windows system. These dependencies are available in the Dockerfile.win10.min. The Dockerfile includes the following CUDA-related components:
-
CUDA 11.8.0
-
cuDNN 8.8.1.3
-
TensorRT 8.5.3.1
Jetson Jetpack Support
A release of Triton for JetPack is provided in the attached tar file: tritonserver2.32.0-jetpack5.1.tgz.
- This release supports TensorFlow 2.11.0, TensorFlow 1.15.5, TensorRT 8.5.2.2, ONNX Runtime 1.14.1, PyTorch 2.0.0, Python 3.8, as well as ensembles.
- Onnx Runtime backend does not support the OpenVino and TensorRT execution providers. The CUDA execution provider is in Beta.
- System shared memory is supported on Jetson. CUDA shared memory is not supported.
- GPU metrics, GCS storage, S3 storage and Azure storage are not supported.
The tar file contains the Triton server executable and shared libraries and also the C++ and Python client libraries and examples. For more information on how to install and use Triton on JetPack refer to jetson.md.
The wheel for the Python client library is present in the tar file and can be installed by running the following command:
python3 -m pip install --upgrade clients/python/tritonclient-2.32.0-py3-none-manylinux2014_aarch64.whl[all]