Pull requests: NVIDIA/TensorRT-LLM

[https://nvbugs/5740377][fix] Prevent out-of-bounds read
#10868 opened Jan 21, 2026 by HuiGao-NV

[None][feat] AutoDeploy: Flashinfer kernels bringup
#10867 opened Jan 21, 2026 by nvchenghaoz

[https://nvbugs/5821433][fix] fix test_auto_scaling for 2 GPUs
#10866 opened Jan 21, 2026 by reasonsolo

[None][fix] Fix PD disaggregation for VLMs that use mrope
#10865 opened Jan 21, 2026 by 2ez4bz

[None][fix] Enable offline mode for HF models
#10863 opened Jan 20, 2026 by FrankD412

[https://nvbugs/5688721][fix] unwaive NemotronH accuracy test
#10852 opened Jan 20, 2026 by lucaslie

[None][chore] added AutoDeploy nano_v3_scale.yaml
#10845 opened Jan 20, 2026 by MrGeva

[None][fix] Update RMSNorm custom op plumbing
#10843 opened Jan 20, 2026 by JintaoPengCS

[https://nvbugs/5800646][fix] Fix hang issue by avoid exposing UB buf…
#10842 opened Jan 20, 2026 by liji-nv