
Find a better way to get a buffer representation of a torch.Tensor (support bfloat and complex dtypes when faced with lift_fresh_copy) #39

Closed
123epsilon opened this issue Aug 29, 2023 · 4 comments

@123epsilon (Contributor)
Currently, we address issues stemming from lift_fresh_copy by creating a tensor literal op (#37). This is problematic because doing so requires a buffer representation of the torch.Tensor. Unfortunately, torch.Tensor does not fully implement the Python array interface, which precludes us from directly grabbing the tensor's in-memory representation; instead, we are forced to take an indirect route through numpy to obtain a Python buffer that MLIR can parse into a tensor literal. This has the unfortunate side effect that we cannot support bfloat16 and complex<*> datatypes with this operation, because 1) numpy has no bfloat16 datatype and hence no buffer representation for it, and 2) numpy's buffer format for complex<*> datatypes is incompatible with the buffer format that MLIR's DenseElementsAttr expects.
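The indirect route described above can be sketched as follows. This is a minimal illustration, not Turbine's actual code: the helper name `buffer_via_numpy` is hypothetical, and it operates on a numpy array standing in for the result of `torch.Tensor.numpy()` so the snippet stays self-contained.

```python
import numpy as np

def buffer_via_numpy(arr: np.ndarray) -> bytes:
    # Hypothetical helper mirroring the workaround: go through numpy
    # to obtain a Python buffer (memoryview) of the raw element data,
    # which MLIR can then parse into a tensor literal.
    return memoryview(np.ascontiguousarray(arr)).tobytes()

# Works for dtypes numpy supports:
buf = buffer_via_numpy(np.arange(4, dtype=np.float32))
assert len(buf) == 4 * 4  # four float32 elements, four bytes each

# Fails for bfloat16, because numpy has no such dtype at all:
try:
    np.dtype("bfloat16")
except TypeError:
    print("numpy cannot represent bfloat16")
```

The complex<*> case fails differently: numpy does expose complex64/complex128 buffers, but their layout does not match what DenseElementsAttr expects, so the bytes cannot be handed over as-is.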

The best solution would be to have a first-class mechanism for getting a memoryview of a torch.Tensor by implementing the python array interface fully for this class. This is an issue tracking this shortcoming in pytorch: pytorch/pytorch#54138

Tracking the implementation of this interface: pytorch/pytorch#58743

@123epsilon (Contributor, Author)
Actually, the immediately relevant interface is the Python buffer interface: pytorch/pytorch#19143

@vivekkhandelwal1 (Contributor)
Hi @123epsilon, I'm working on clearing up the stale issues in Turbine and Torch-MLIR. Just wanted to confirm whether this issue is still relevant.

@vivekkhandelwal1 (Contributor)
As reported to me by @123epsilon via DM, the issue is still valid but should be moved to Torch-MLIR. His GitHub access is blocked for now; once it's restored, he will move the issue to Torch-MLIR.

@123epsilon (Contributor, Author)
@vivekkhandelwal1 Just moved this issue; we can close or archive this one now if we wish: llvm/torch-mlir#3653
