Issues: Lightning-AI/lightning-thunder
- nvFuser has a faster RMSNorm fusion definition than thunder's RMSNorm decomposition (#1582, opened Dec 23, 2024 by mruberry). Labels: operators, performance.
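For context, RMSNorm normalizes by the root mean square of the input rather than by mean and variance. A minimal pure-Python sketch of the computation (an illustration of the math, not thunder's actual decomposition or nvFuser's fusion) is:

```python
import math

def rms_norm(x, weight, eps=1e-6):
    """Reference RMSNorm over a 1-D list: y_i = w_i * x_i / sqrt(mean(x^2) + eps)."""
    mean_sq = sum(v * v for v in x) / len(x)
    inv_rms = 1.0 / math.sqrt(mean_sq + eps)
    return [w * v * inv_rms for w, v in zip(weight, x)]
```

A fused kernel can compute the mean of squares, the reciprocal square root, and the final scaling in a single pass over the data, whereas a naive decomposition materializes intermediates between those steps, which is the kind of gap the issue title points at.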
- Get dynamic shapes to work with Phi-3-mini-128k-instruct (#1579, opened Dec 20, 2024 by tfogal). Labels: enhancement, nemo.
- Consider adding is_leaf attribute to TensorProxies (#1577, opened Dec 20, 2024 by beverlylytle). Labels: enhancement.
- thunderfx: detecting parameters and buffers on the thunderfx path (#1575, opened Dec 19, 2024 by kshitij12345). Labels: jit, thunderfx.
- Strides of 2D column-major Tensor seem to be unexpectedly changed (#1572, opened Dec 19, 2024 by crcrpar).
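For reference, a column-major (Fortran-order) 2D tensor is contiguous along its first axis, which fixes what its strides should be. A small pure-Python helper (an illustration of the layout convention, not thunder code) computes the expected element strides, matching PyTorch's element-based stride convention:

```python
def column_major_strides(shape):
    """Element strides for a column-major layout: the first axis is contiguous."""
    strides = []
    step = 1
    for dim in shape:
        strides.append(step)
        step *= dim
    return tuple(strides)

# A (3, 4) column-major tensor should report strides (1, 3);
# the issue reports cases where they come back changed.
```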
- "requires_grad" attribute on intermediate TensorProxies is unused and misleading (#1570, opened Dec 18, 2024 by IvanYashchuk). Labels: autograd, developer efficiency.
- Re-enable rematerialization by default once a memory-aware approach is found (#1562, opened Dec 17, 2024 by t-vi). Labels: fusion logic, rematerialization.
- Add custom logsigmoid grad for the PyTorch executor (#1555, opened Dec 13, 2024 by mruberry). Labels: autograd, operators, thunderfx.
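For reference, logsigmoid and its gradient have simple closed forms, which is what makes a custom grad rule attractive. A scalar sketch (an illustration of the math, not the proposed executor code) is:

```python
import math

def logsigmoid(x):
    # Numerically stable form: log(sigmoid(x)) = min(x, 0) - log1p(exp(-|x|))
    return min(x, 0.0) - math.log1p(math.exp(-abs(x)))

def logsigmoid_grad(x):
    # d/dx log(sigmoid(x)) = sigmoid(-x) = 1 / (1 + exp(x))
    return 1.0 / (1.0 + math.exp(x))
```

A hand-written gradient like this avoids tracing through the generic decomposition of the forward pass, typically yielding a shorter and cheaper backward trace.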
- Investigate memory and performance difference between the nvfuser and torch.compile executors on Qwen2 (#1552, opened Dec 13, 2024 by kshitij12345). Labels: high priority, memory use, nemo, performance, thunderfx.
- Feature: Provide a mechanism for practitioners to select different executors per FX graph when using ThunderFX (#1550, opened Dec 12, 2024 by mruberry). Labels: thunderfx.
- UX: Don't validate tensor metadata for parameter tensors by default (#1542, opened Dec 11, 2024 by mruberry). Labels: performance, thunderfx, ux.
- ThunderFX's splitter looks a tad conservative for custom torch.autograd.Functions, pushing them to the fallback path (#1539, opened Dec 11, 2024 by crcrpar). Labels: thunderfx.
- Decomposition for torch.minimum, torch.maximum (#1537, opened Dec 10, 2024 by t-vi). Labels: nvfuser, operators, program-coverage, thunderfx.
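For context, any decomposition of torch.minimum/torch.maximum has to preserve their NaN-propagating semantics, which a bare where(a < b, a, b) does not. A scalar sketch of the required behavior (assuming the documented PyTorch semantics; not thunder's actual decomposition) is:

```python
def minimum(a, b):
    # torch.minimum propagates NaN: if either input is NaN, the result is NaN.
    if a != a or b != b:  # NaN is the only float unequal to itself
        return float("nan")
    return a if a < b else b

def maximum(a, b):
    if a != a or b != b:
        return float("nan")
    return a if a > b else b
```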
- [Regressions] ThunderFX is slower than 2 weeks ago for 3 models (#1534, opened Dec 10, 2024 by wprazuch).
- High peak memory with CUDAGraphTransform (#1533, opened Dec 9, 2024 by kshitij12345). Labels: cudagraphs, transforms.
- Documentation: Review docs for ThunderFX and thunder.jit entrypoints (#1532, opened Dec 9, 2024 by mruberry). Labels: documentation, ux.
- ExtractionOnlyPrologueTransform for dropping checks from the prologue (#1531, opened Dec 9, 2024 by t-vi). Labels: enhancement, good first issue, jit, thunderfx.
- HF Llama 3.2 1B slowness (training) (#1506, opened Dec 2, 2024 by t-vi). Labels: high priority, huggingface, performance.
- Remove testing litgpt falcon 40b/7b (#1504, opened Dec 2, 2024 by t-vi). Labels: mixology.
- HF Transformers ViT slower than torch.compile and raw PyTorch (#1502, opened Dec 2, 2024 by 2catycm). Labels: performance.
- [question]: Can I use thunder with PyTorch Lightning modules to make them lightning fast? (#1491, opened Nov 30, 2024 by 2catycm). Labels: documentation.