Issues: pytorch/xla
GPU test failed again: AtenXlaTensorTest.TestDivInPlaceWithRoundingMode
#8956 · opened Apr 10, 2025 by tengyifei

Python tracing 1.5x slower in docker for Stable Diffusion
Labels: performance
#8947 · opened Apr 7, 2025 by bhavya01

Standardize XLA loop APIs
Labels: enhancement (New feature or request)
#8918 · opened Apr 1, 2025 by rpsilva-aws

Sliced add returns wrong output
Labels: pytorch divergence (XLA behavior doesn't match PyTorch eager frontend)
#8917 · opened Apr 1, 2025 by vealocia

[Deprecation Tracking] API deprecation timeline summary
Labels: usability (Bugs/features related to improving the usability of PyTorch/XLA)
#8915 · opened Apr 1, 2025 by zpcore

Large number of graph breaks with flash_attention on dynamo openxla backend
Labels: dynamo, performance
#8913 · opened Mar 31, 2025 by bhavya01

Output shape from flash attention is not as expected
Labels: bug (Something isn't working), pallas
#8910 · opened Mar 31, 2025 by lsy323

Profiler and SPMD and other distributed things. use_spmd() order.
Labels: documentation, distributed
#8906 · opened Mar 31, 2025 by ysiraichi

Torch-XLA gets stuck with large max_new_tokens when running HF CausalLM inference
Labels: performance
#8901 · opened Mar 28, 2025 by Zantares

The Stable Diffusion notebook is broken.
Labels: bug, documentation
#8899 · opened Mar 27, 2025 by zhanyong-wan

2D linear upsample with align_corners=False doesn't match PyTorch
Labels: xla:gpu, pytorch divergence
#8897 · opened Mar 27, 2025 by ysiraichi

BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending. Error
Labels: bug, needs reproduction
#8884 · opened Mar 25, 2025 by oayk23

Check "Autograd" code generation custom operation edge case is covered by tests.
Labels: lowering (ATen Operation lowering), tech debt (Technical Debt Is Evil), testing (Testing and coverage related issues)
#8880 · opened Mar 24, 2025 by pgmoka

Create a nightly torch_xla wheel without version name
Labels: enhancement
#8877 · opened Mar 24, 2025 by bhavya01

torch_xla.experimental.custom_kernel.flash_attention output does not match F.scaled_dot_product_attention on TPU
Labels: pallas, pytorch divergence
#8869 · opened Mar 21, 2025 by NickLucche