Currently, we address issues stemming from `lift_fresh_copy` by creating a tensor literal op (#37), but this is problematic because doing so requires a buffer representation of a `torch.Tensor`. Unfortunately, `torch.Tensor` does not fully implement the Python array interface, which precludes us from directly grabbing the tensor's in-memory representation; instead we are forced to take an indirect route through numpy to obtain a Python buffer that MLIR can parse into a tensor literal. This has the unfortunate side effect that we cannot support `bfloat` and `complex<*>` datatypes with this operation, because 1) numpy has no `bfloat` dtype and hence no representation for such a buffer, and 2) numpy's buffer format for `complex<*>` datatypes is incompatible with the buffer format that MLIR's `DenseElementsAttr` expects.
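For concreteness, here is a minimal sketch of the indirect numpy route described above and where it breaks down. This is not the project's actual lowering code; the helper name `tensor_to_buffer` is hypothetical, and exact error messages may vary across PyTorch versions.

```python
# Sketch of the indirect route: torch.Tensor -> numpy -> Python buffer.
# Hypothetical helper, for illustration only.
import torch

def tensor_to_buffer(t: torch.Tensor) -> memoryview:
    """Round-trip a tensor through numpy to get a buffer that MLIR can parse."""
    arr = t.detach().cpu().contiguous().numpy()
    return memoryview(arr)

# Works for common dtypes:
buf = tensor_to_buffer(torch.ones(2, 2, dtype=torch.float32))
print(buf.format)  # 'f'

# Fails for bfloat16: numpy has no bfloat16 dtype, so .numpy() raises.
try:
    tensor_to_buffer(torch.ones(2, 2, dtype=torch.bfloat16))
except (TypeError, RuntimeError) as e:
    print("bfloat16 unsupported:", e)

# complex64 round-trips, but the PEP 3118 buffer format ('Zf') is not what
# DenseElementsAttr expects, so it still cannot be consumed directly.
print(tensor_to_buffer(torch.ones(2, dtype=torch.complex64)).format)  # 'Zf'
```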
The best solution would be a first-class mechanism for getting a memoryview of a `torch.Tensor` by fully implementing the Python array interface for this class. The shortcoming is tracked in PyTorch at pytorch/pytorch#54138, and the implementation of the interface is tracked at pytorch/pytorch#58743.
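As an illustration, the desired first-class mechanism would let us take a memoryview of a tensor directly, for any dtype. This is a hypothetical usage sketch assuming `torch.Tensor` eventually exposes the buffer protocol (per the issues above); as of today the call raises `TypeError`.

```python
# Hypothetical usage once torch.Tensor supports memoryview directly
# (tracked in pytorch/pytorch#54138 / pytorch/pytorch#58743).
import torch

t = torch.ones(4, dtype=torch.bfloat16)
try:
    mv = memoryview(t)  # desired: zero-copy view of the tensor's memory, any dtype
    print(mv.nbytes)
except TypeError:
    print("torch.Tensor does not yet implement the Python buffer protocol")
```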
As @123epsilon reported to me via DM, the issue is still valid but should be moved to Torch-MLIR. His GitHub access is currently blocked; once it's restored, he will move the issue to Torch-MLIR.