
fea linear elements refactor #552

Open · jibril-b-coulibaly wants to merge 8 commits into main

Conversation

@jibril-b-coulibaly (Contributor) commented Feb 27, 2025

This PR refactors the Spring, Bar, and some Beam elements.

This is ongoing work; we should decide at what point it is good enough to merge.

Springs

Modifications

  • Precompute and store the rest length as a class attribute to avoid recomputing it at each step (this length does not change during the simulation); see the sketch after this list.
  • Refactor the force calculation to limit the number of length calculations and divisions.
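
A minimal sketch of the precompute (the member name length and the SetupInitial hook are taken from the snippets quoted later in this thread; treat the exact signature as an assumption, not the actual patch):

// Cache the rest length once at setup: the undeformed node positions X0
// never change during the simulation, so this distance need not be
// recomputed at every force evaluation.
void ChElementSpring::SetupInitial(ChSystem* system) {
    this->length = (nodes[1]->GetX0() - nodes[0]->GetX0()).Length();
}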

possible issues / bugs to be discussed

  • The stiffness calculation for the damping does not take the timestep into account: if F = A * v, should dF/dx = A/dt? (Sketched below.)
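
A sketch of the reasoning behind this question (my reading, assuming an implicit update in which the velocity increment is tied to the position increment by the timestep; not a settled conclusion):

    % damping force F = A v, with a backward-Euler-style relation \Delta v \approx \Delta x / \Delta t
    F = A\,v \quad\Rightarrow\quad
    \frac{\partial F}{\partial x} = A\,\frac{\partial v}{\partial x} \approx \frac{A}{\Delta t}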

Bars

Modifications

  • Precompute and store the axial stiffness k = EA/L as a class attribute to avoid recomputing it at each step (this value does not change during the simulation); see the sketch after this list.
  • Refactor the force calculation to limit the number of length calculations and divisions.
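
A minimal sketch of the same idea for the bar (the members kstiff, area, and young are my placeholders; the actual attribute names in ChElementBar may differ):

// Cache k = EA/L once at setup: E, A, and the rest length are all constant
// during the simulation, so the division is done only once.
void ChElementBar::SetupInitial(ChSystem* system) {
    double L0 = (nodes[1]->GetX0() - nodes[0]->GetX0()).Length();
    this->kstiff = this->young * this->area / L0;
}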

possible issues / bugs to be discussed

  • Same question as for the springs: the stiffness calculation for the damping does not take the timestep into account. If F = A * v, should dF/dx = A/dt?

Euler-Bernoulli beams

Modifications

  • Define the exact beam reference rotation using the end points and YDir vectors in the Builder, rather than using the average Y direction of the end nodes after the fact. This is more practical, but I also believe the old behavior was a bug: after Gram-Schmidt, the average Y direction of the nodes is not guaranteed to be colinear with the user-provided Y direction of the beam when the nodes pre-exist the beam.
  • Refactor the update-rotation function: do nothing when corotation is disabled, and replace rotation matrix operations with faster quaternion operations (see the sketch after this list).
  • Compute and store the expensive rotations of the nodes with respect to the reference configuration once, since these never change but were previously recomputed at every step.
  • Refactor GetStateBlock with lambdas to avoid code duplication.
  • Rename the relative rotation quaternions for clarity.
  • No need to normalize before sending vectors into Gram-Schmidt, since normalization occurs there.
  • Other things TBD.
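
To illustrate the quaternion point above, a minimal sketch (my illustration, not the actual patch; I am assuming ChQuaternion composition via operator* and the Rotate method behave as in my reading of the Chrono API):

// Composing two rotations and applying the result to a vector.
// Matrix route: build two 3x3 rotation matrices, multiply them, then
// multiply by the vector, constructing 9-entry matrices along the way.
// Quaternion route: one quaternion product plus one quaternion-vector
// rotation, with no matrix construction at all.
ChQuaternion<> q_total = q_element * q_node;   // composed rotation
ChVector3d v_world = q_total.Rotate(v_local);  // rotate a local vector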

possible issues / bugs to be discussed

  • The Gram-Schmidt code uses a char as an index; why not an int or size_t?
  • That index is incremented inside a while loop and used to access a 3-vector, so there is a risk of out-of-bounds access.
  • If for any reason the suggested vector is colinear to the first Gram-Schmidt axis, this process essentially makes the basis random, which seems wrong. A possible safer variant is sketched after the snippet below.
    template <class Real>
    inline void ChVector3<Real>::GetDirectionAxesAsX(ChVector3<Real>& Vx,
                                                     ChVector3<Real>& Vy,
                                                     ChVector3<Real>& Vz,
                                                     ChVector3<Real> y_sugg) const {
        Vx = *this;
        bool success = Vx.Normalize();
        if (!success)
            Vx = ChVector3<Real>(1, 0, 0);
        Vz.Cross(Vx, y_sugg);
        success = Vz.Normalize();
        if (!success) {
            char idx = 0;
            while (!success) {
                y_sugg[idx] += 1.0;
                Vz.Cross(Vx, y_sugg);
                success = Vz.Normalize();
                ++idx;
            }
        }
        Vy.Cross(Vz, Vx);
    }
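
For discussion, a possible safer variant (a sketch under my assumptions, not a proposed patch): a bounded int index replaces the char, and a deterministic fallback axis replaces the unbounded perturbation loop, so the colinear case yields a well-defined basis.

    // Sketch only; requires <cmath> for std::abs.
    template <class Real>
    inline void ChVector3<Real>::GetDirectionAxesAsX(ChVector3<Real>& Vx,
                                                     ChVector3<Real>& Vy,
                                                     ChVector3<Real>& Vz,
                                                     ChVector3<Real> y_sugg) const {
        Vx = *this;
        if (!Vx.Normalize())
            Vx = ChVector3<Real>(1, 0, 0);
        Vz.Cross(Vx, y_sugg);
        if (!Vz.Normalize()) {
            // y_sugg is (nearly) colinear with Vx: fall back to the world
            // axis least aligned with Vx, whose cross product with the unit
            // vector Vx can never vanish, instead of perturbing y_sugg.
            int imin = 0;
            for (int i = 1; i < 3; ++i)
                if (std::abs(Vx[i]) < std::abs(Vx[imin]))
                    imin = i;
            ChVector3<Real> fallback(0, 0, 0);
            fallback[imin] = 1;
            Vz.Cross(Vx, fallback);
            Vz.Normalize();
        }
        Vy.Cross(Vz, Vx);
    }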

…ulation to limit number of expensive length calculations
This is done for simplicity, but I also think there might be a bug in SetupInitial().
When the beam is created from 2 nodes, or from 1 node and 1 position, the end nodes may
have different Y axes, and there is no reason why this Y axis should correspond to
that of the beam element, since those are determined by whoever created these nodes.
Do nothing if corotation is disabled.
Use the cheaper quaternion representation of rotation rather than matrix multiplication.
@tasora (Member) commented Feb 27, 2025

Hi - question: in ChElementSpring, you changed
ChVector3d int_forceA = dir * internal_force_local;
ChVector3d int_forceB = -dir * internal_force_local;
into
ChVector3d int_forceA = dPos * internal_force_local;
ChVector3d int_forceB = -dPos * internal_force_local;
but I think that the correct form is the original one, where dir is a unit length vector representing direction, whereas dPos is not unit length.
Or have I misunderstood?

@tasora (Member) commented Feb 27, 2025

In ChElementBar, same question... you compute the two forces via
ChVector3d int_forceA = dPos * internal_force_local;
ChVector3d int_forceB = -dPos * internal_force_local;
but it is different from the original
ChVector3d int_forceA = dir * internal_force_local;
ChVector3d int_forceB = -dir * internal_force_local;
Right?
A.

@jibril-b-coulibaly (Contributor, Author) commented Feb 27, 2025

Hi @tasora !

The changes in the force calculation were made to avoid computing the length and scaling multiple times. In the original code below

ChVector3d dir = (nodes[1]->GetPos() - nodes[0]->GetPos()).GetNormalized();
double L_ref = (nodes[1]->GetX0() - nodes[0]->GetX0()).Length();
double L = (nodes[1]->GetPos() - nodes[0]->GetPos()).Length();

GetNormalized internally computes the length, and then line 114 recomputes the same length. Inside GetNormalized the vector gets scaled by 1/length, i.e., each of the 3 components is multiplied by 1/length.

In the refactor below

ChVector3d dPos = nodes[1]->GetPos() - nodes[0]->GetPos();
double L_ref = length;
double L = dPos.Length();
double Linv = 1.0 / L;
ChVector3d dV = nodes[1]->GetPosDt() - nodes[0]->GetPosDt();
double internal_Kforce_local = this->kstiff * (L - L_ref);
double internal_Rforce_local = (this->rdamping * this->kstiff) * Vdot(dV, dPos) * Linv;
double internal_force_local = (internal_Kforce_local + internal_Rforce_local) * Linv;
ChVector3d int_forceA = dPos * internal_force_local;
ChVector3d int_forceB = -dPos * internal_force_local;

With the non-normalized node-to-node vector dPos, we only compute the length once, on 102.
The internal_force_local on 107 is scaled by Linv, so "force" here is a bit of an abuse of language, but the math ends up being the same as the original.
The difference is that instead of rescaling every component of the vector, we rescale the magnitude, doing 1 rescaling instead of 3.

We actually do 2 rescalings, with the damping on 106, so this only saves 1 multiplication by Linv compared to the original and is insignificant in terms of performance.
However, once we decide to compute the length only once by not calling GetNormalized (which does save compute), this is the more natural way to write the rest of the code, in my opinion.
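
In equation form (my notation: f is the scalar internal force, L = |dPos|, and the unit direction is dPos/L), the two factorings are identical:

    \mathbf{F}_A = f\,\hat{\mathbf{d}} = f\,\frac{\Delta\mathbf{x}}{L}
                 = \underbrace{(f/L)}_{\texttt{internal\_force\_local}}\,\Delta\mathbf{x},
    \qquad \mathbf{F}_B = -\mathbf{F}_A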

I ran all CORE, COLLISION, PHYSICS, Chrono joints, SMC contact in CORE module, FEA, MODAL, VEHICLE, MULTICORE, SMC contact in MULTICORE, SYNCHRONO unit tests and they all pass.

@rserban (Member) commented Feb 27, 2025

@jibril-b-coulibaly Please note that it is not enough to run the FEA unit tests to ensure the changes you are making are correct and do not break things. Indeed, most FEA unit tests are for the ANCF formulation. Instead, you should compare results on all appropriate FEA demos.

@jibril-b-coulibaly (Contributor, Author)

@rserban thank you for the information.

I additionally ran demo_FEA_beamsEuler, bin/demo_FEA_beams_constr, bin/demo_FEA_beams_rotor, bin/demo_FEA_beams_static, and bin/demo_MOD_slewing_beam; they all look okay.

bin/demo_FEA_beams_extrude doesn't look like it is doing much, but that behavior is the same on main, so it does not seem to be affected by these changes. Looking at the code, when the extruder is updated, a new beam element is created but no orientation (reference quaternion) is provided. Is that a bug? It looks like this would only work if the beams are aligned with the "World" X, Y, Z coordinate system.

void ChExtruderBeamEuler::Update() {
    auto node1 = beam_nodes.back();
    ChVector3d P1 = node1->GetPos();
    double d1 = (outlet.TransformPointParentToLocal(P1)).z();
    // std::cout << " d1=" << d1 << std::endl;
    if (d1 >= 0) {
        double d0 = d1 - this->h;
        ChCoordsys<> C0;
        C0.rot = outlet.rot;
        C0.pos = outlet.pos + outlet.TransformPointLocalToParent(VECT_Z * d0);
        ChCoordsys<> C0_ref;
        C0_ref.rot = node1->GetX0().GetRot();
        C0_ref.pos = node1->GetX0().GetPos() - VECT_Z * this->h;
        auto node0 = chrono_types::make_shared<ChNodeFEAxyzrot>(ChFrame<>(C0));
        node0->SetPosDt(outlet.TransformDirectionLocalToParent(VECT_Z * this->speed));
        node0->SetX0(ChFrame<>(C0_ref));
        mesh->AddNode(node0);
        beam_nodes.push_back(node0);
        actuator->Initialize(node0, ground, false, ChFrame<>(C0), ChFrame<>(C0));
        actuator->SetSpeedFunction(chrono_types::make_shared<ChFunctionConst>(this->speed));
        actuator->SetMotionOffset(actuator->GetMotionOffset() - this->h);
        /*
        actuator->Initialize(node0, ground, false, ChFrame<>(C0), ChFrame<>(C0));
        actuator->SetMotionFunction(chrono_types::make_shared<ChFunctionRamp>(0,this->speed));
        actuator->SetMotionOffset(actuator->GetMotionOffset() - this->h);
        */
        auto element = chrono_types::make_shared<ChElementBeamEuler>();
        mesh->AddElement(element);
        beam_elems.push_back(element);
        element->SetNodes(node0, node1);
        element->SetSection(this->beam_section);
        element->SetupInitial(mysystem);
        // add collision model to node
        if (this->contactcloud) {
            contactcloud->AddNode(node0, this->contact_radius);
        }
    }
    mytime = mysystem->GetChTime();
}

Looking forward, I think the purpose of unit tests is to check that functions operate the way they are supposed to, and for a refactor like this one, I believe we should be able to rely on them. Does that mean the unit test coverage is insufficient to test such changes and should be expanded for Euler-Bernoulli beams?

If the refactored functions behave identically to the originals under unit testing, there should be no issue. There is always the possibility of function side effects, and I do agree that regression tests are also necessary to see how the changes propagate to the rest of the code.
I found that the demos I ran above mostly or only provide qualitative results. And for those that do provide an output, there is no automated comparison against an analytical or "gold" solution, which makes it impractical to get good comparisons on a large number of demos (run and pipe the output for different versions, and diff the files?).
Should we move toward quantified regression tests (or name them as such where current unit tests and demos are actually regression tests), in addition to unit tests and demos? Something that doesn't look off to the naked eye might still be wrong.

Thank you

@tasora (Member) commented Mar 7, 2025 via email
