When I run a self-built environment, the code reports the following error after running for a while 😵
/buildAgent/work/99bede84aa0a52c2/source/physx/src/NpScene.cpp (3509) : internal error : PhysX Internal CUDA error. Simulation can not continue!
[Error] [carb.gym.plugin] Gym cuda error: an illegal memory access was encountered: ../../../source/plugins/carb/gym/impl/Gym/GymPhysX.cpp: 3480
[Error] [carb.gym.plugin] Gym cuda error: an illegal memory access was encountered: ../../../source/plugins/carb/gym/impl/Gym/GymPhysX.cpp: 3535
Traceback (most recent call last):
File "test/test_gym.py", line 42, in <module>
envs.step(random_actions)
File "/home/aaa/Codes/IsaacGymEnvs/isaacgymenvs/tasks/base/ma_vec_task.py", line 208, in step
self.timeout_buf = torch.where(self.progress_buf >= self.max_episode_length - 1,
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Segmentation fault (core dumped)
I've found that the time from the start of the run until the error appears is inversely proportional to num_envs. Watching the GPU's memory usage, I see it creep up slowly until the error is thrown. I can't pinpoint exactly where the error comes from. 🤔
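Following the hint at the bottom of the traceback, a minimal way to localize the faulting call looks roughly like this (step_and_check is just an illustrative helper name; envs and random_actions are whatever test/test_gym.py already builds):

import os

# Must be set before torch / isaacgym create a CUDA context, so do it first.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch


def step_and_check(envs, actions):
    """Step the env, then synchronize so the asynchronous illegal-memory-access
    error surfaces at the offending step instead of at a later, unrelated line."""
    envs.step(actions)
    torch.cuda.synchronize()

With launches forced synchronous and an explicit synchronize after each step, the stack trace should land on the call that actually performs the illegal access.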
Hi. I had the same error. Check the collision filter setting when you create the actor handle. Allowing self-collisions seems to cause a memory shortage. I referred to other issues and changed the batch_size, but that didn't solve the problem. However, turning off self-collisions or setting the collision filter to 1 did seem to fix it. If you want to allow self-collisions, you may need to adjust batch_size or num_envs; see the sketch below for where the filter argument goes.
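To be concrete about where that filter lives: in Isaac Gym it is the collision_filter argument of gym.create_actor. A rough sketch of the kind of call I mean (the asset paths, actor name, and env_index variable are placeholders, not your actual code):

from isaacgym import gymapi


def create_robot_actor(gym, sim, env, env_index, asset_root, asset_file):
    # Load the asset; in real code this is usually done once, outside the per-env loop.
    asset_options = gymapi.AssetOptions()
    robot_asset = gym.load_asset(sim, asset_root, asset_file, asset_options)
    pose = gymapi.Transform()
    # create_actor(env, asset, pose, name, collision_group, collision_filter, seg_id):
    # collision_group = env_index keeps actors in different envs from colliding,
    # and a non-zero collision_filter (1 here) masks off self-collisions inside
    # the actor; a filter of 0 allows them, which is the memory-hungry case.
    return gym.create_actor(env, robot_asset, pose, "robot", env_index, 1, 0)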
Thanks for your reply! But my collision filter is already set to a number greater than 0, so to me it doesn't look like that is what's causing the problem. 🤦‍♂️
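If it really isn't the filter, another memory-related knob I'd look at next (just a guess on my side, based on the general "memory shortage" idea above, not something confirmed for this error) is the size of the PhysX GPU buffers in the sim params, set before create_sim:

from isaacgym import gymapi

sim_params = gymapi.SimParams()
sim_params.use_gpu_pipeline = True
sim_params.physx.use_gpu = True
# Illustrative values only, not tuned numbers: enlarge the GPU contact-pair
# buffer and the general PhysX GPU buffers so they are not overrun as num_envs grows.
sim_params.physx.max_gpu_contact_pairs = 8 * 1024 * 1024
sim_params.physx.default_buffer_size_multiplier = 8.0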