Tensor size mismatch in Robin BC #1971

@StuvX

Description

Hi Prof. Lu, I seem to be hitting an issue with my code. I am using Python 3.11 with the PyTorch backend (torch 2.6.0+cu124). When calling the Robin boundary condition, I get a shape mismatch between the dydx tensor and the normal tensor n: if dydx has shape [a, 3], then n has shape [2*a, 3].

As a sanity check, if I use a Dirichlet BC instead, training proceeds without issue (of course the normal is not used in that case), so I think the problem is in how the normals are retrieved. If I call geom.boundary_normals[0:1], the tensor shape is (1, 3) as expected, but if I run the following

X = data.train_points()
bc.boundary_normal(X, 0, 1, None)

it returns a tensor of shape (2, 3) with the same values repeated.

Any help is greatly appreciated.

Edit to add: if I modify the DeepXDE Robin boundary condition definition as follows, it works, but I expect this would be a breaking change for other backends:

def normal_derivative(self, X, inputs, outputs, beg, end):
    dydx = grad.jacobian(outputs, inputs, i=self.component, j=None)[beg:end]
    n = self.boundary_normal(X, beg, end, None)
    # Drop the duplicated rows so n matches the shape of dydx
    n = n[::2]
    return bkd.sum(dydx * n, 1, keepdims=True)
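For illustration, here is a minimal NumPy sketch (with an arbitrary a = 4, independent of DeepXDE) of how the [::2] slice collapses a pairwise-duplicated normals tensor of shape (2*a, 3) back to the (a, 3) shape that dydx has:

```python
import numpy as np

a = 4
expected = np.arange(a * 3, dtype=float).reshape(a, 3)

# Simulate the observed bug: each boundary normal appears twice,
# giving shape (2*a, 3) instead of the expected (a, 3).
normals = np.repeat(expected, 2, axis=0)
print(normals.shape)  # (8, 3)

# Taking every second row keeps one copy of each normal,
# restoring the (a, 3) shape that matches dydx.
deduped = normals[::2]
print(deduped.shape)  # (4, 3)
```

This only works because the duplicates are interleaved pairwise; if a backend returned normals without duplication, the slice would silently drop half the rows, which is why it would be a breaking change there.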
