I can load the ONNX model on the Jetson AGX Orin host, but it fails inside the Docker container. The container keeps throwing errors while loading the ONNX model and building the TensorRT engine, even though I've tried several combinations of CUDA, cuDNN, and TensorRT versions. What could be the root cause? Thanks!
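For reference, the version checks run at the end of the container session below can also be gathered on the host side to compare the two stacks. A minimal sketch, assuming a standard JetPack install where /etc/nv_tegra_release records the L4T release:

# On the Jetson host, outside the container
cat /etc/nv_tegra_release
nvcc --version
dpkg -l | grep -E 'cuda-toolkit|libcudnn|nvinfer'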
nvidia@EAORA07AXFI:~/Desktop$ sudo docker run -it --net=host --runtime=nvidia --gpus all --privileged -e LOCAL_UID=0 -e LOCAL_GID=0 -e LOCAL_USER=root -e LOCAL_GROUP=root -e DISPLAY=:1 -v /tmp/.X11-unix/:/tmp/.X11-unix -e XAUTHORITY= -e XDG_RUNTIME_DIR= -e NVIDIA_DRIVER_CAPABILITIES=all -e TZ=Asia/Shanghai -v /media/nvidia/3cba374a-2b26-4700-92cc-42311800c957/new/autoware:/workspace -v /media/nvidia/mydisk/new/autoware_map:/autoware_map:ro -v /media/nvidia/mydisk/new/autoware_data:/autoware_data:rw -v /dev:/dev ghcr.io/autowarefoundation/autoware:universe-devel-cuda /bin/bash
Starting with user: root >> UID 0, GID: 0
groupadd: group 'root' already exists
useradd: user 'root' already exists
ln: failed to create symbolic link '/home/root/autoware_data': No such file or directory
Linked /autoware_data to /home/root/autoware_data
root@EAORA07AXFI:/autoware#
root@EAORA07AXFI:/autoware#
root@EAORA07AXFI:/autoware#
root@EAORA07AXFI:/autoware# source /opt/ros/${ROS_DISTRO}/setup.bash
source /workspace/install/setup.bash
cd /workspace
root@EAORA07AXFI:/workspace# ros2 launch self_sensor_kit_launch camera.launch.xml
[INFO] [launch]: All log files can be found below /root/.ros/log/2025-09-28-21-45-51-681288-EAORA07AXFI-90
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [component_container_mt-1]: process started with pid [102]
[component_container_mt-1] [INFO] [1759067152.294379454] [perception.object_detection.front_camera_container]: Load Library: /workspace/install/my_camera_pkg/lib/libself_camera_component.so
[component_container_mt-1] [INFO] [1759067152.643411768] [perception.object_detection.front_camera_container]: Found class: rclcpp_components::NodeFactoryTemplate<my_camera_pkg::SelfCameraNode>
[component_container_mt-1] [INFO] [1759067152.643555577] [perception.object_detection.front_camera_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<my_camera_pkg::SelfCameraNode>
[component_container_mt-1] [INFO] [1759067152.646831541] [camera.self_camera_node]: Show window: false
[component_container_mt-1] (Argus) Error FileOperationFailed: Connecting to nvargus-daemon failed: No such file or directory (in src/rpc/socket/client/SocketClientDispatch.cpp, function openSocketConnection(), line 205)
[component_container_mt-1] (Argus) Error FileOperationFailed: Cannot create camera provider (in src/rpc/socket/client/SocketClientDispatch.cpp, function createCameraProvider(), line 107)
[component_container_mt-1] [INFO] [1759067153.135170717] [camera.self_camera_node]: GStreamer pipeline: v4l2src device=/dev/video3 ! video/x-raw,width=1920,height=1080,framerate=30/1 ! videoconvert ! appsink name=sink emit-signals=true
[component_container_mt-1] [ERROR] [1759067153.148678801] [perception.object_detection.front_camera_container]: Component constructor threw an exception: Failed to create GStreamer pipeline
[ERROR] [launch_ros.actions.load_composable_nodes]: Failed to load node '' of type 'my_camera_pkg::SelfCameraNode' in container '/perception/object_detection/front_camera_container': Component constructor threw an exception: Failed to create GStreamer pipeline
[component_container_mt-1] [INFO] [1759067153.158850281] [perception.object_detection.front_camera_container]: Load Library: /workspace/install/autoware_tensorrt_yolox/lib/libautoware_tensorrt_yolox_node.so
[component_container_mt-1] [INFO] [1759067153.300863470] [perception.object_detection.front_camera_container]: Found class: rclcpp_components::NodeFactoryTemplate<autoware::tensorrt_yolox::TrtYoloXNode>
[component_container_mt-1] [INFO] [1759067153.301009807] [perception.object_detection.front_camera_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<autoware::tensorrt_yolox::TrtYoloXNode>
[component_container_mt-1] [TrtYoloX] CUDA Driver API version: 12.6
[component_container_mt-1] [TrtYoloX] CUDA Runtime version : 12.4
[component_container_mt-1] [TrtYoloX] Total visible GPUs: 1
[component_container_mt-1] GPU 0: Orin
[component_container_mt-1] [TrtYoloX] GPU 0 selected successfully
[component_container_mt-1] [I] [TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 25, GPU 3673 (MiB)
[component_container_mt-1] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +1, GPU -8, now: CPU 181, GPU 3839 (MiB)
[component_container_mt-1] [I] [TRT] ----------------------------------------------------------------
[component_container_mt-1] [I] [TRT] Input filename: /autoware_data/tensorrt_yolox/yolox-tiny.onnx
[component_container_mt-1] [I] [TRT] ONNX IR version: 0.0.8
[component_container_mt-1] [I] [TRT] Opset version: 11
[component_container_mt-1] [I] [TRT] Producer name: pytorch
[component_container_mt-1] [I] [TRT] Producer version: 1.12.0
[component_container_mt-1] [I] [TRT] Domain:
[component_container_mt-1] [I] [TRT] Model version: 0
[component_container_mt-1] [I] [TRT] Doc string:
[component_container_mt-1] [I] [TRT] ----------------------------------------------------------------
[component_container_mt-1] [I] [TRT] No checker registered for op: EfficientNMS_TRT. Attempting to check as plugin.
[component_container_mt-1] [I] [TRT] No importer registered for op: EfficientNMS_TRT. Attempting to import as plugin.
[component_container_mt-1] [I] [TRT] Searching for plugin: EfficientNMS_TRT, plugin_version: 1, plugin_namespace:
[component_container_mt-1] [I] [TRT] Successfully created plugin: EfficientNMS_TRT
[component_container_mt-1] [W] [TRT] onnxOpImporters.cpp:6119: Attribute class_agnostic not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[component_container_mt-1] [W] [TRT] Engine is not initialized. Retrieving data from network
[component_container_mt-1] [W] [TRT] Engine is not initialized. Retrieving data from network
[component_container_mt-1] [I] [TRT] Setting optimization profile for tensor: images {min [1, 3, 608, 960], opt [1, 3, 608, 960], max [1, 3, 608, 960]}
[component_container_mt-1] [I] [TRT] Loading engine
[component_container_mt-1] [I] [TRT] Loaded engine size: 12 MiB
[component_container_mt-1] [E] [TRT] IRuntime::deserializeCudaEngine: Error Code 6: API Usage Error (The engine plan file is not compatible with this version of TensorRT, expecting library version 10.3.0.26 got
[component_container_mt-1] ..)
[component_container_mt-1] [E] [TRT] Fail to create engine
[ERROR] [launch_ros.actions.load_composable_nodes]: Failed to load node 'tensorrt_yolox' of type 'autoware::tensorrt_yolox::TrtYoloXNode' in container '/perception/object_detection/front_camera_container': Component constructor threw an exception: Failed to setup TensorRT engine
[component_container_mt-1] [ERROR] [1759067153.928869959] [perception.object_detection.front_camera_container]: Component constructor threw an exception: Failed to setup TensorRT engine
[component_container_mt-1] [INFO] [1759067461.648516421] [perception.object_detection.front_camera_container]: Found class: rclcpp_components::NodeFactoryTemplate<my_camera_pkg::SelfCameraNode>
[component_container_mt-1] [INFO] [1759067461.648664902] [perception.object_detection.front_camera_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<my_camera_pkg::SelfCameraNode>
[component_container_mt-1] [INFO] [1759067461.652447111] [camera.self_camera_node]: Show window: false
[component_container_mt-1] [INFO] [1759067461.652745033] [camera.self_camera_node]: GStreamer pipeline: v4l2src device=/dev/video3 ! video/x-raw,width=1920,height=1080,framerate=30/1 ! videoconvert ! appsink name=sink emit-signals=true
[component_container_mt-1] [ERROR] [1759067461.665883162] [perception.object_detection.front_camera_container]: Component constructor threw an exception: Failed to create GStreamer pipeline
[component_container_mt-1] [INFO] [1759067461.673567101] [perception.object_detection.front_camera_container]: Found class: rclcpp_components::NodeFactoryTemplate<autoware::tensorrt_yolox::TrtYoloXNode>
[component_container_mt-1] [INFO] [1759067461.673690654] [perception.object_detection.front_camera_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<autoware::tensorrt_yolox::TrtYoloXNode>
[component_container_mt-1] [TrtYoloX] CUDA Driver API version: 12.6
[component_container_mt-1] [TrtYoloX] CUDA Runtime version : 12.4
[component_container_mt-1] [TrtYoloX] Total visible GPUs: 1
[component_container_mt-1] GPU 0: Orin
[component_container_mt-1] [TrtYoloX] GPU 0 selected successfully
[component_container_mt-1] [I] [TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 182, GPU 3960 (MiB)
[component_container_mt-1] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +0, GPU -19, now: CPU 182, GPU 3941 (MiB)
[component_container_mt-1] [I] [TRT] ----------------------------------------------------------------
[component_container_mt-1] [I] [TRT] Input filename: /autoware_data/tensorrt_yolox/yolox-tiny.onnx
[component_container_mt-1] [I] [TRT] ONNX IR version: 0.0.8
[component_container_mt-1] [I] [TRT] Opset version: 11
[component_container_mt-1] [I] [TRT] Producer name: pytorch
[component_container_mt-1] [I] [TRT] Producer version: 1.12.0
[component_container_mt-1] [I] [TRT] Domain:
[component_container_mt-1] [I] [TRT] Model version: 0
[component_container_mt-1] [I] [TRT] Doc string:
[component_container_mt-1] [I] [TRT] ----------------------------------------------------------------
[component_container_mt-1] [I] [TRT] No checker registered for op: EfficientNMS_TRT. Attempting to check as plugin.
[component_container_mt-1] [I] [TRT] No importer registered for op: EfficientNMS_TRT. Attempting to import as plugin.
[component_container_mt-1] [I] [TRT] Searching for plugin: EfficientNMS_TRT, plugin_version: 1, plugin_namespace:
[component_container_mt-1] [I] [TRT] Successfully created plugin: EfficientNMS_TRT
[component_container_mt-1] [W] [TRT] onnxOpImporters.cpp:6119: Attribute class_agnostic not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[component_container_mt-1] [W] [TRT] Engine is not initialized. Retrieving data from network
[component_container_mt-1] [W] [TRT] Engine is not initialized. Retrieving data from network
[component_container_mt-1] [I] [TRT] Setting optimization profile for tensor: images {min [1, 3, 608, 960], opt [1, 3, 608, 960], max [1, 3, 608, 960]}
[component_container_mt-1] [I] [TRT] Loading engine
[component_container_mt-1] [I] [TRT] Loaded engine size: 12 MiB
[component_container_mt-1] [E] [TRT] IRuntime::deserializeCudaEngine: Error Code 6: API Usage Error (The engine plan file is not compatible with this version of TensorRT, expecting library version 10.3.0.26 got
[component_container_mt-1] ..)
[component_container_mt-1] [E] [TRT] Fail to create engine
[component_container_mt-1] [ERROR] [1759067461.885111544] [perception.object_detection.front_camera_container]: Component constructor threw an exception: Failed to setup TensorRT engine
[component_container_mt-1] [INFO] [1759067508.243727735] [perception.object_detection.front_camera_container]: Found class: rclcpp_components::NodeFactoryTemplate<my_camera_pkg::SelfCameraNode>
[component_container_mt-1] [INFO] [1759067508.243899545] [perception.object_detection.front_camera_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<my_camera_pkg::SelfCameraNode>
[component_container_mt-1] [INFO] [1759067508.247505592] [camera.self_camera_node]: Show window: false
[component_container_mt-1] [INFO] [1759067508.247840378] [camera.self_camera_node]: GStreamer pipeline: v4l2src device=/dev/video3 ! video/x-raw,width=1920,height=1080,framerate=30/1 ! videoconvert ! appsink name=sink emit-signals=true
[component_container_mt-1] [ERROR] [1759067508.260105572] [perception.object_detection.front_camera_container]: Component constructor threw an exception: Failed to create GStreamer pipeline
[component_container_mt-1] [INFO] [1759067508.267237985] [perception.object_detection.front_camera_container]: Found class: rclcpp_components::NodeFactoryTemplate<autoware::tensorrt_yolox::TrtYoloXNode>
[component_container_mt-1] [INFO] [1759067508.267380867] [perception.object_detection.front_camera_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<autoware::tensorrt_yolox::TrtYoloXNode>
[component_container_mt-1] [TrtYoloX] CUDA Driver API version: 12.6
[component_container_mt-1] [TrtYoloX] CUDA Runtime version : 12.4
[component_container_mt-1] [TrtYoloX] Total visible GPUs: 1
[component_container_mt-1] GPU 0: Orin
[component_container_mt-1] [TrtYoloX] GPU 0 selected successfully
[component_container_mt-1] [I] [TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 182, GPU 4002 (MiB)
[component_container_mt-1] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +0, GPU +13, now: CPU 182, GPU 4015 (MiB)
[component_container_mt-1] [I] [TRT] ----------------------------------------------------------------
[component_container_mt-1] [I] [TRT] Input filename: /autoware_data/tensorrt_yolox/yolox-tiny.onnx
[component_container_mt-1] [I] [TRT] ONNX IR version: 0.0.8
[component_container_mt-1] [I] [TRT] Opset version: 11
[component_container_mt-1] [I] [TRT] Producer name: pytorch
[component_container_mt-1] [I] [TRT] Producer version: 1.12.0
[component_container_mt-1] [I] [TRT] Domain:
[component_container_mt-1] [I] [TRT] Model version: 0
[component_container_mt-1] [I] [TRT] Doc string:
[component_container_mt-1] [I] [TRT] ----------------------------------------------------------------
[component_container_mt-1] [I] [TRT] No checker registered for op: EfficientNMS_TRT. Attempting to check as plugin.
[component_container_mt-1] [I] [TRT] No importer registered for op: EfficientNMS_TRT. Attempting to import as plugin.
[component_container_mt-1] [I] [TRT] Searching for plugin: EfficientNMS_TRT, plugin_version: 1, plugin_namespace:
[component_container_mt-1] [I] [TRT] Successfully created plugin: EfficientNMS_TRT
[component_container_mt-1] [W] [TRT] onnxOpImporters.cpp:6119: Attribute class_agnostic not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[component_container_mt-1] [W] [TRT] Engine is not initialized. Retrieving data from network
[component_container_mt-1] [W] [TRT] Engine is not initialized. Retrieving data from network
[component_container_mt-1] [I] [TRT] Setting optimization profile for tensor: images {min [1, 3, 608, 960], opt [1, 3, 608, 960], max [1, 3, 608, 960]}
[component_container_mt-1] [I] [TRT] Loading engine
[component_container_mt-1] [I] [TRT] Loaded engine size: 12 MiB
[component_container_mt-1] [E] [TRT] IRuntime::deserializeCudaEngine: Error Code 6: API Usage Error (The engine plan file is not compatible with this version of TensorRT, expecting library version 10.3.0.26 got
[component_container_mt-1] ..)
[component_container_mt-1] [E] [TRT] Fail to create engine
[component_container_mt-1] [ERROR] [1759067508.485644120] [perception.object_detection.front_camera_container]: Component constructor threw an exception: Failed to setup TensorRT engine
^C[WARNING] [launch]: user interrupted with ctrl-c (SIGINT)
[component_container_mt-1] [INFO] [1759067616.384708965] [rclcpp]: signal_handler(signum=2)
[INFO] [component_container_mt-1]: process has finished cleanly [pid 102]
root@EAORA07AXFI:/workspace#
root@EAORA07AXFI:/workspace# ^C
root@EAORA07AXFI:/workspace# ^C
root@EAORA07AXFI:/workspace#
root@EAORA07AXFI:/workspace#
root@EAORA07AXFI:/workspace#
root@EAORA07AXFI:/workspace#
root@EAORA07AXFI:/workspace#
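The deserializeCudaEngine failure above (Error Code 6) means the cached engine plan was serialized by a different TensorRT release than the 10.3.0.26 shipped in this container, so deserialization is rejected. One way to force a clean rebuild from the ONNX before relaunching is to remove the cached plan. A sketch, assuming the cache is written alongside the ONNX with an .engine suffix (the exact file name is an assumption):

ls /autoware_data/tensorrt_yolox/            # look for a cached *.engine / *.plan next to yolox-tiny.onnx
rm /autoware_data/tensorrt_yolox/*.engine    # remove the stale plan so the node rebuilds it

The relaunch below does go into the build path ("Starting to build engine"), which then surfaces the next error.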
root@EAORA07AXFI:/workspace# ros2 launch self_sensor_kit_launch camera.launch.xml
[INFO] [launch]: All log files can be found below /root/.ros/log/2025-09-28-21-53-41-475122-EAORA07AXFI-131
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [component_container_mt-1]: process started with pid [143]
[component_container_mt-1] [INFO] [1759067622.060557078] [perception.object_detection.front_camera_container]: Load Library: /workspace/install/my_camera_pkg/lib/libself_camera_component.so
[component_container_mt-1] [INFO] [1759067622.362815390] [perception.object_detection.front_camera_container]: Found class: rclcpp_components::NodeFactoryTemplate<my_camera_pkg::SelfCameraNode>
[component_container_mt-1] [INFO] [1759067622.362961791] [perception.object_detection.front_camera_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<my_camera_pkg::SelfCameraNode>
[component_container_mt-1] [INFO] [1759067622.366322460] [camera.self_camera_node]: Show window: false
[component_container_mt-1] [INFO] [1759067622.375634284] [camera.self_camera_node]: GStreamer pipeline: v4l2src device=/dev/video3 ! video/x-raw,width=1920,height=1080,framerate=30/1 ! videoconvert ! appsink name=sink emit-signals=true
[component_container_mt-1] [ERROR] [1759067622.388093879] [perception.object_detection.front_camera_container]: Component constructor threw an exception: Failed to create GStreamer pipeline
[ERROR] [launch_ros.actions.load_composable_nodes]: Failed to load node '' of type 'my_camera_pkg::SelfCameraNode' in container '/perception/object_detection/front_camera_container': Component constructor threw an exception: Failed to create GStreamer pipeline
[component_container_mt-1] [INFO] [1759067622.396090172] [perception.object_detection.front_camera_container]: Load Library: /workspace/install/autoware_tensorrt_yolox/lib/libautoware_tensorrt_yolox_node.so
[component_container_mt-1] [INFO] [1759067622.515160284] [perception.object_detection.front_camera_container]: Found class: rclcpp_components::NodeFactoryTemplate<autoware::tensorrt_yolox::TrtYoloXNode>
[component_container_mt-1] [INFO] [1759067622.515302621] [perception.object_detection.front_camera_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<autoware::tensorrt_yolox::TrtYoloXNode>
[component_container_mt-1] [TrtYoloX] CUDA Driver API version: 12.6
[component_container_mt-1] [TrtYoloX] CUDA Runtime version : 12.4
[component_container_mt-1] [TrtYoloX] Total visible GPUs: 1
[component_container_mt-1] GPU 0: Orin
[component_container_mt-1] [TrtYoloX] GPU 0 selected successfully
[component_container_mt-1] [I] [TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 25, GPU 3682 (MiB)
[component_container_mt-1] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +1, GPU +2, now: CPU 181, GPU 3871 (MiB)
[component_container_mt-1] [I] [TRT] ----------------------------------------------------------------
[component_container_mt-1] [I] [TRT] Input filename: /autoware_data/tensorrt_yolox/yolox-tiny.onnx
[component_container_mt-1] [I] [TRT] ONNX IR version: 0.0.8
[component_container_mt-1] [I] [TRT] Opset version: 11
[component_container_mt-1] [I] [TRT] Producer name: pytorch
[component_container_mt-1] [I] [TRT] Producer version: 1.12.0
[component_container_mt-1] [I] [TRT] Domain:
[component_container_mt-1] [I] [TRT] Model version: 0
[component_container_mt-1] [I] [TRT] Doc string:
[component_container_mt-1] [I] [TRT] ----------------------------------------------------------------
[component_container_mt-1] [I] [TRT] No checker registered for op: EfficientNMS_TRT. Attempting to check as plugin.
[component_container_mt-1] [I] [TRT] No importer registered for op: EfficientNMS_TRT. Attempting to import as plugin.
[component_container_mt-1] [I] [TRT] Searching for plugin: EfficientNMS_TRT, plugin_version: 1, plugin_namespace:
[component_container_mt-1] [I] [TRT] Successfully created plugin: EfficientNMS_TRT
[component_container_mt-1] [W] [TRT] onnxOpImporters.cpp:6119: Attribute class_agnostic not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[component_container_mt-1] [W] [TRT] Engine is not initialized. Retrieving data from network
[component_container_mt-1] [W] [TRT] Engine is not initialized. Retrieving data from network
[component_container_mt-1] [I] [TRT] Setting optimization profile for tensor: images {min [1, 3, 608, 960], opt [1, 3, 608, 960], max [1, 3, 608, 960]}
[component_container_mt-1] [I] [TRT] Starting to build engine
[component_container_mt-1] [I] [TRT] Applying optimizations and building TensorRT CUDA engine. Please wait for a few minutes...
[component_container_mt-1] [I] [TRT] Local timing cache in use. Profiling results in this builder pass will not be stored.
[component_container_mt-1] [E] [TRT] Error Code: 9: Skipping tactic 0x3f243c490d502deb due to exception initDeviceReservedSpace
[component_container_mt-1] [E] [TRT] Error Code: 9: Skipping tactic 0xf067e6205da31c2e due to exception initDeviceReservedSpace
[component_container_mt-1] [E] [TRT] Error Code: 9: Skipping tactic 0xf64396b97c889179 due to exception initDeviceReservedSpace
[component_container_mt-1] [E] [TRT] Error Code: 9: Skipping tactic 0x503619c69ae500ff due to exception initDeviceReservedSpace
[component_container_mt-1] [E] [TRT] Error Code: 9: Skipping tactic 0x94119b4c514b211a due to exception Cask convolution execution
[component_container_mt-1] [E] [TRT] Error Code: 9: Skipping tactic 0xa8609adc4e0ceb90 due to exception initDeviceReservedSpace
[component_container_mt-1] [E] [TRT] Error Code: 9: Skipping tactic 0x5deb29b7a8e275f7 due to exception initDeviceReservedSpace
[component_container_mt-1] [E] [TRT] Error Code: 9: Skipping tactic 0xf90060ce8193b811 due to exception initDeviceReservedSpace
[component_container_mt-1] [E] [TRT] Error Code: 9: Skipping tactic 0x7bc32c782b800c48 due to exception initDeviceReservedSpace
[component_container_mt-1] [E] [TRT] Error Code: 9: Skipping tactic 0xbdfdef6b84f7ccc9 due to exception initDeviceReservedSpace
[component_container_mt-1] [E] [TRT] Error Code: 9: Skipping tactic 0x3e2b881168d9689d due to exception initDeviceReservedSpace
[component_container_mt-1] [E] [TRT] IBuilder::buildSerializedNetwork: Error Code 10: Internal Error (Could not find any implementation for node Conv_48 + PWN(PWN(Sigmoid_49), PWN(Mul_50)).)
[component_container_mt-1] [E] [TRT] [checkMacros.cpp::catchCudaError::212] Error Code 1: Cuda Runtime (no kernel image is available for execution on the device)
[component_container_mt-1] [E] [TRT] Fail to create host memory
[component_container_mt-1] [I] [TRT] Engine build completed
[component_container_mt-1] [ERROR] [1759067628.091976217] [perception.object_detection.front_camera_container]: Component constructor threw an exception: Failed to setup TensorRT engine
[ERROR] [launch_ros.actions.load_composable_nodes]: Failed to load node 'tensorrt_yolox' of type 'autoware::tensorrt_yolox::TrtYoloXNode' in container '/perception/object_detection/front_camera_container': Component constructor threw an exception: Failed to setup TensorRT engine
^C[WARNING] [launch]: user interrupted with ctrl-c (SIGINT)
[component_container_mt-1] [INFO] [1759067718.221890186] [rclcpp]: signal_handler(signum=2)
[INFO] [component_container_mt-1]: process has finished cleanly [pid 143]
root@EAORA07AXFI:/workspace# ^C
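In the rebuild attempt above, every tactic is skipped and the build finally fails with "no kernel image is available for execution on the device". That message usually means the CUDA kernels in the installed libraries were not compiled for the Orin integrated GPU (compute capability 8.7), i.e. the container carries a CUDA/TensorRT stack built for a different target, rather than there being a problem with the ONNX file itself. To take the ROS node out of the picture, the same engine build can be reproduced directly with trtexec; a sketch, assuming trtexec is present at the usual Jetson location in this image:

/usr/src/tensorrt/bin/trtexec --onnx=/autoware_data/tensorrt_yolox/yolox-tiny.onnx --saveEngine=/tmp/yolox-tiny.engine

If trtexec fails with the same "no kernel image" error, the problem is in the container's CUDA/TensorRT libraries, not in the Autoware node.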
root@EAORA07AXFI:/workspace# nvidia-smi
Sun Sep 28 21:55:26 2025
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 540.4.0 Driver Version: 540.4.0 CUDA Version: 12.6 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 Orin (nvgpu) N/A | N/A N/A | N/A |
| N/A N/A N/A N/A / N/A | Not Supported | N/A N/A |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
root@EAORA07AXFI:/workspace# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Mar_28_02:24:28_PDT_2024
Cuda compilation tools, release 12.4, V12.4.131
Build cuda_12.4.r12.4/compiler.34097967_0
root@EAORA07AXFI:/workspace# cat /usr/include/cudnn_version.h | grep CUDNN_MAJOR -A 2
#define CUDNN_MAJOR 8
#define CUDNN_MINOR 9
#define CUDNN_PATCHLEVEL 7
#define CUDNN_VERSION (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)
/* cannot use constexpr here since this is a C-only file */
root@EAORA07AXFI:/workspace# dpkg -l | grep libcudnn
hi libcudnn8 8.9.7.29-1+cuda12.2 arm64 cuDNN runtime libraries
hi libcudnn8-dev 8.9.7.29-1+cuda12.2 arm64 cuDNN development libraries and headers
root@EAORA07AXFI:/workspace# dpkg -l | grep TensorRT
hi libnvinfer-dev 10.3.0.26-1+cuda12.5 arm64 TensorRT development libraries
hi libnvinfer-headers-dev 10.3.0.26-1+cuda12.5 arm64 TensorRT development headers
hi libnvinfer-headers-plugin-dev 10.3.0.26-1+cuda12.5 arm64 TensorRT plugin headers
hi libnvinfer-plugin-dev 10.3.0.26-1+cuda12.5 arm64 TensorRT plugin libraries
hi libnvinfer-plugin10 10.3.0.26-1+cuda12.5 arm64 TensorRT plugin libraries
hi libnvinfer10 10.3.0.26-1+cuda12.5 arm64 TensorRT runtime libraries
hi libnvonnxparsers-dev 10.3.0.26-1+cuda12.5 arm64 TensorRT ONNX libraries
hi libnvonnxparsers10 10.3.0.26-1+cuda12.5 arm64 TensorRT ONNX libraries
ii ros-humble-tensorrt-cmake-module 0.0.4-1jammy.20250719.002719 arm64 Exports a CMake module to find TensorRT.
root@EAORA07AXFI:/workspace#
root@EAORA07AXFI:/workspace#
root@EAORA07AXFI:/workspace# ^C
root@EAORA07AXFI:/workspace#
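One more check that may help, given the mixed versions above (driver CUDA 12.6, runtime CUDA 12.4, TensorRT 10.3 packaged against CUDA 12.5, cuDNN 8.9 against CUDA 12.2): confirm which libraries the node actually resolves at runtime inside the container, since the NVIDIA container runtime on Jetson can mount host libraries over the ones baked into the image. A sketch, using the library path from the log above:

ldconfig -p | grep -E 'libnvinfer|libcudart'
ldd /workspace/install/autoware_tensorrt_yolox/lib/libautoware_tensorrt_yolox_node.so | grep -E 'nvinfer|cudart|cudnn'

If the resolved libnvinfer or libcudart comes from a host mount rather than the image, the versions inside the container are not the ones the image was built and tested with.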