
Conversation

@christopherzimmerman
Contributor

Currently, reshaping respects the existing memory layout, so two tensors that compare equal but have different contiguity can produce different results after reshaping:

import sequtils, arraymancer

let
  a = toSeq(1..12).toTensor().reshape(3, 2, 2).asContiguous(rowMajor, force=true)
  b = toSeq(1..12).toTensor().reshape(3, 2, 2).asContiguous(colMajor, force=true)

echo a == b
echo a.reshape(2, 6)
echo b.reshape(2, 6)

# true
# Tensor[system.int] of shape "[2, 6]" on backend "Cpu"
# |1	2	3	4	5	6|
# |7	8	9	10	11	12|
#
# Tensor[system.int] of shape "[2, 6]" on backend "Cpu"
# |1	9	7	2	10	8|
# |5	3	11	6	4	12|
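The second printout follows from b's flat memory order: with a col-major layout the first axis varies fastest, so the buffer behind b does not hold 1..12 in sequence. A minimal plain-Nim sketch (no arraymancer needed) that reproduces b's buffer:

# Flat (memory) order of the col-major [3, 2, 2] tensor b above.
# Col-major strides for shape [3, 2, 2] are [1, 3, 6], so the first
# axis is iterated innermost.
var flat: seq[int]
for k in 0 ..< 2:        # slowest axis in col-major
  for j in 0 ..< 2:
    for i in 0 ..< 3:    # fastest axis in col-major
      # value at logical index [i, j, k] of 1..12 stored row-major
      flat.add 1 + i*4 + j*2 + k

echo flat
# @[1, 5, 9, 3, 7, 11, 2, 6, 10, 4, 8, 12]

Viewing that buffer col-major as a [2, 6] tensor reads element [r, c] at offset r + 2*c, which is exactly the second printout above.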

This PR adds another reshape overload which allows reshaping to be done with respect to a provided memory layout.

echo a.reshape(2, 6, colMajor) == b.reshape(2, 6, colMajor)
# true
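One way to think about the semantics (a hypothetical helper sketching a mental model, not necessarily the PR's implementation): reshaping w.r.t. a layout behaves like forcing the tensor into that layout first and then reshaping as usual.

import sequtils, arraymancer

# Hypothetical sketch of the new overload's semantics: force the
# requested layout, then fall back to the plain reshape. A mental
# model only, not the PR's actual code.
proc reshapeWithLayout[T](t: Tensor[T], layout: OrderType,
                          shape: varargs[int]): Tensor[T] =
  t.asContiguous(layout, force = true).reshape(shape)

# Using the a and b from the first snippet:
echo a.reshapeWithLayout(colMajor, 2, 6) == b.reshapeWithLayout(colMajor, 2, 6)
# true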

It also adds the ability to transpose a tensor using a provided permutation of axes, rather than simply reversing axes and strides.

let
  a = toSeq(1..12).toTensor().reshape(3, 2, 2)

echo a.transpose()
echo a.transpose(@[2, 0, 1])

# Tensor[system.int] of shape "[2, 2, 3]" on backend "Cpu"
# | | 	1	5	9 | 	2	6	10|
# | | 	3	7	11 | 	4	8	12|
# 
# Tensor[system.int] of shape "[2, 3, 2]" on backend "Cpu"
# | | 	1	3 | 	2	4|
# | | 	5	7 | 	6	8|
# | | 	9	11 | 	10	12|
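For context, a permuted transpose can be a metadata-only view: the buffer is shared, and shape and strides are simply reordered by the permutation. A plain-Nim sketch of the index bookkeeping (assumed; the PR may differ in details):

import sequtils

# Axis permutation only reorders metadata: for a permutation p,
# out.shape[i] = shape[p[i]] and out.strides[i] = strides[p[i]];
# the data buffer is shared untouched.
let
  shape   = @[3, 2, 2]
  strides = @[4, 2, 1]     # row-major strides of a [3, 2, 2] tensor
  p       = @[2, 0, 1]     # the permutation from the example

echo p.mapIt(shape[it])    # @[2, 3, 2] -- the shape printed above
echo p.mapIt(strides[it])  # @[1, 4, 2]
# Element [i, j, k] of the view reads buffer[i*1 + j*4 + k*2].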

@mratsim
Owner

mratsim commented Nov 23, 2020

Sorry, I was off coding for a long while. Thanks for your PR!

@Clonkk
Contributor

Clonkk commented Sep 21, 2022

@mratsim This seems like a useful addition. Should it be merged?

