Replies: 4 comments 1 reply
-
It seems PyTorch would work really well as an alternative backend. Several math helpers would be required for input data with leading/trailing axes, but that's quite straightforward. Here's a minimal code snippet with the Neo-Hookean material formulation:

```python
import torch
from torch.autograd import grad
from torch import zeros, rand

torch.set_default_dtype(torch.float64)

# material parameters
mu = 1
bulk = 200

# math helpers which operate on the leading axes of the input data
dot = lambda A, B: torch.einsum("ik...,kj...->ij...", A, B)
trace = lambda A: torch.einsum("ii...->...", A)
transpose = lambda A: torch.einsum("ij...->ji...", A)
det = lambda A: torch.linalg.det(A.T).T

# sample deformation gradients with trailing (quadrature-point, cell) axes
F = (rand(3, 3, 8, 200) - 1 / 2) / 10
for a in range(3):
    F[a, a] += 1
F.requires_grad_(True)

# Neo-Hookean strain energy density
C = dot(transpose(F), F)
J = det(F)
W = mu / 2 * (J ** (-2 / 3) * trace(C) - 3) + bulk * (J - 1) ** 2 / 2

# first Piola-Kirchhoff stress
P = grad(W.sum((-1, -2)), F, create_graph=True)[0]

# fourth-order elasticity tensor, assembled component by component
A = zeros((3, 3, 3, 3, *F.shape[-2:]))
for i in range(3):
    for j in range(3):
        A[i, j] = grad(P[i, j].sum((-1, -2)), F, retain_graph=True)[0]
```

The only disadvantage is the loop that is necessary to build the hessian. The torch-hessian method can't be used because we have to reduce the gradient by a sum first in order to get cell-wise hessians. To sum up:
A Variable would be a torch-Tensor with …
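A possible way around the component loop, as a minimal sketch: assuming `torch.func` (PyTorch 2.x) is available and that the cell axes may be flattened into a leading batch dimension, the per-cell hessian could be vectorized with `vmap` instead of the explicit loop.

```python
import torch
from torch.func import jacrev, vmap

torch.set_default_dtype(torch.float64)

# material parameters
mu = 1
bulk = 200


def W_single(F):
    "Strain energy density for a single 3x3 deformation gradient."
    C = F.mT @ F
    J = torch.linalg.det(F)
    return mu / 2 * (J ** (-2 / 3) * torch.einsum("ii->", C) - 3) + bulk * (J - 1) ** 2 / 2


# cells flattened into a leading batch axis: (ncells, 3, 3)
F = torch.eye(3) + (torch.rand(8 * 200, 3, 3) - 1 / 2) / 10

# per-cell fourth-order elasticity tensors, shape (ncells, 3, 3, 3, 3)
A = vmap(jacrev(jacrev(W_single)))(F)
```

Whether this is actually faster than the explicit loop would still have to be benchmarked.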
-
This one is for mixed-field problems:

```python
import numpy as np
import torch
from itertools import product
from torch.autograd import grad
from torch import zeros, Tensor

torch.set_default_dtype(torch.float64)

# material parameters
mu = 1
bulk = 200

# math helpers which operate on the leading axes of the input data
dot = lambda A, B: torch.einsum("ik...,kj...->ij...", A, B)
trace = lambda A: torch.einsum("ii...->...", A)
transpose = lambda A: torch.einsum("ij...->ji...", A)
det = lambda A: torch.linalg.det(A.T).T.reshape(1, 1, *A.shape[-2:])

# sample deformation gradients and pressure fields
F = Tensor((np.random.rand(3, 3, 8, 2000) - 1 / 2) / 10)
p = Tensor(np.random.rand(1, 1, 8, 2000))
for a in range(3):
    F[a, a] += 1
F.requires_grad_(True)
p.requires_grad_(True)

# Neo-Hookean strain energy density of the mixed (u, p) formulation
C = dot(transpose(F), F)
J = det(F)
W = mu / 2 * (J ** (-2 / 3) * trace(C) - 3) + p * (J - 1) + p ** 2 / 2 / bulk


def gradient(fun, x):
    "Cell-wise gradients of `fun` w.r.t. the list of variables `x`."
    return grad(fun.sum((-1, -2)), x, create_graph=True)


def hessian(fun, x):
    "Upper-triangle blocks of the cell-wise hessian of `fun` w.r.t. `x`."
    P = gradient(fun, x)
    aa, bb = torch.triu_indices(len(P), len(P))
    B = []
    for a, b in zip(aa, bb):
        A = zeros((*P[a.item()].shape[:-2], *x[b.item()].shape[:-2], *x[0].shape[2:]))
        for i, j in product(range(len(P[a.item()])), range(len(P[a.item()]))):
            A[i, j] = grad(
                P[a.item()][i, j].sum((-1, -2)),
                x[b.item()],
                retain_graph=True,
            )[0]
        A.requires_grad_(False)
        B.append(A)
    return B


P = gradient(W, [F, p])
A = hessian(W, [F, p])
```
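For reference, a quick shape check which follows from the snippet above: the gradient returns one tensor per variable and the hessian returns the upper-triangle blocks in the order FF, Fp, pp.

```python
# gradients of W w.r.t. F and p (one tensor per variable)
print(P[0].shape)  # torch.Size([3, 3, 8, 2000])
print(P[1].shape)  # torch.Size([1, 1, 8, 2000])

# upper-triangle hessian blocks: d2W/dFdF, d2W/dFdp and d2W/dpdp
print(A[0].shape)  # torch.Size([3, 3, 3, 3, 8, 2000])
print(A[1].shape)  # torch.Size([3, 3, 1, 1, 8, 2000])
print(A[2].shape)  # torch.Size([1, 1, 1, 1, 8, 2000])
```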
-
This is a simple demonstration using Autograd. First, some math-helpers are defined.

```python
from autograd import numpy as np, jacobian as jac

det = lambda A: np.linalg.det(A.T).T
dot = lambda A, B: np.einsum("ij...,jk...->ik...", A, B)
transpose = lambda A: np.einsum("ij...->ji...", A)
```

Next, a function decorator and a new jacobian function enable the evaluation of jacobians on input data with trailing axes.

```python
def reduce(fun, argnum=0):
    "Function decorator for jacobians on inputs with trailing axes."

    def inner(*args, **kwargs):
        "Sum the result over the trailing axes."
        axes = tuple(-np.arange(1, len(args[argnum].shape) - 1))
        return np.sum(fun(*args, **kwargs), axes)

    return inner


def jacobian(fun, argnum=0):
    "Apply `jacobian` from autograd on a `reduce`-decorated function."
    return jac(reduce(fun, argnum=argnum), argnum=argnum)
```

The Neo-Hookean material formulation is defined by its strain energy function.

```python
def W(F, mu=1.0, bulk=2.0):
    """Strain energy density function of the Neo-Hookean material formulation
    based on the Deformation Gradient ``F``.
    """

    # volume ratio
    J = det(F)

    # right Cauchy-Green deformation tensor
    C = dot(transpose(F), F)

    return mu * (J ** (-2 / 3) * np.trace(C) - 3) + bulk * (J - 1) ** 2 / 2
```

Test the implementation:

```python
# first Piola-Kirchhoff stress
P = jacobian(W)

# corresponding fourth-order elasticity tensor
A = jacobian(jacobian(W))

# sample Deformation Gradients (input data)
np.random.seed(2354)
F = (np.random.rand(3, 3, 8, 50 ** 3) - 0.5) / 5
for i in range(3):
    F[i, i] += 1

print(P(F).shape, A(F).shape)
```
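If the helpers behave as intended, this should print `(3, 3, 8, 125000) (3, 3, 3, 3, 8, 125000)`, i.e. the first Piola-Kirchhoff stress and the fourth-order elasticity tensor evaluated for all quadrature points and cells at once.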
-
Using hyper dual numbers, hyperjet could also be a reasonable choice (too complicated and slow at the time of testing).

```python
import numpy as np
from joblib import Parallel, delayed
from hyperjet import DDScalar


def det(A):
    "Determinant of a symmetric 3x3 tensor in Voigt notation."
    return (
        A[0] * A[1] * A[2] + 2 * A[3] * A[4] * A[5]
        - A[1] * A[5] ** 2 - A[0] * A[4] ** 2 - A[2] * A[3] ** 2
    )


def trace(A):
    "Trace of a symmetric 3x3 tensor in Voigt notation."
    return A[:3].sum()


def neo_hooke(C):
    "Isochoric Neo-Hookean strain energy based on the right Cauchy-Green tensor."
    return det(C) ** (-1 / 3) * trace(C) - 3


def tensor(x):
    "Convert an array into hyperjet ``DDScalar`` variables."
    return np.array(DDScalar.variables(x.ravel())).reshape(*x.shape)


# init 100,000 random right Cauchy-Green deformation tensors (Voigt notation)
rCG = (np.random.rand(100000, 6) - 0.5) / 5
rCG[:, :3] += 1

# evaluate the hessian of the strain energy for each tensor in parallel
hessian = lambda fun, x: fun(tensor(x)).hm()
A = Parallel(n_jobs=16, verbose=1, batch_size=1000)(
    delayed(hessian)(neo_hooke, C) for C in rCG
)
```
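Assuming `hm()` returns the full 6x6 hessian of the scalar strain energy with respect to the six Voigt components, the per-tensor results could simply be stacked afterwards:

```python
# a sketch: stack the per-tensor hessians (assumed 6x6 each) into one array
A = np.array(A)
print(A.shape)  # expected: (100000, 6, 6)
```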
-
Besides casADi, there are some interesting packages / modules which could be used as matADi backends in the future: