
[OptApp / SIApp] Updating the Expressions to TensorAdaptors #14214

Open
sunethwarna wants to merge 115 commits into master from optapp_siapp_tensor_adaptor_update

Conversation

@sunethwarna
Member

📝 Description
This PR removes all the use cases of Expression in OptimizationApplication and SystemIdentificationApplication, and replaces them with new TensorAdaptors.

This requires the following PRs to be merged first.

🆕 Changelog

  • Replaces the uses of Expression with TensorAdaptor

@sunethwarna marked this pull request as ready for review March 4, 2026 06:23
Contributor

@talhah-ansari left a comment


Very impressive. @sunethwarna Thanks

for i in range(20):
control_field *= 1.2
simp_control.Update(control_field)
control_field.data += 1.2
Contributor


Is this `*` -> `+` change intentional? Also line 257.

Member Author


Yeah, this is because `control_field` was all zeros before.
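A minimal sketch (plain Python, not the Kratos API) of why the test update was changed: multiplying an all-zero control field leaves it at zero forever, while an additive perturbation actually moves the field.

```python
# Conceptual sketch: the control field starts as all zeros.
control_field = [0.0] * 4

# Old update: scaling zeros stays zeros on every iteration.
scaled = [v * 1.2 for v in control_field]
assert scaled == [0.0, 0.0, 0.0, 0.0]

# New update: an additive perturbation moves the field off zero.
perturbed = [v + 1.2 for v in control_field]
assert perturbed == [1.2, 1.2, 1.2, 1.2]
```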

2, 2.6811e+03, 1.5885e-03
3, 2.6677e+03, 1.5436e-03
4, 2.6191e+03, 1.5046e-03
4, 2.6841e+03, 1.5475e-03
Contributor


Is this value change intentional?

@sunethwarna enabled auto-merge March 5, 2026 12:02
talhah-ansari previously approved these changes Mar 5, 2026
Contributor

@talhah-ansari left a comment


Thanks very much @sunethwarna

@matekelemen disabled auto-merge March 5, 2026 13:41
def ComputeControlUpdate(self, alpha: Kratos.TensorAdaptors.DoubleCombinedTensorAdaptor) -> None:
    update: Kratos.TensorAdaptors.DoubleCombinedTensorAdaptor = self.algorithm_data.GetBufferedData()["search_direction"].Clone()
    update.data[:] *= alpha.data[:]
    Kratos.TensorAdaptors.DoubleCombinedTensorAdaptor(update, perform_store_data_recursively=False, copy=False).StoreData()
Member


As I understand the documentation, we can simply say `update.StoreData()`, or?

Member Author


You can, but if the CombinedTensorAdaptor is created with perform_store_data_recursively=true, then it will do the StoreData recursively, which will also write the values to the model part.

So the way I did it is safer: it does not copy the data, it just performs the store on the existing array, without recursive calls.
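A conceptual sketch (plain Python, not the Kratos classes) of the flag behaviour described above: a combined adaptor whose StoreData either stops at its own array, or recurses into the sub-adaptors and thereby writes back to the model part.

```python
# Hypothetical stand-ins for illustration only; names do not match Kratos.
class SubAdaptor:
    def __init__(self):
        self.written_to_model_part = False
    def store_data(self):
        # Storing a sub-adaptor writes its values to the model part.
        self.written_to_model_part = True

class CombinedAdaptor:
    def __init__(self, subs, perform_store_data_recursively):
        self.subs = subs
        self.recursive = perform_store_data_recursively
        self.stored = False
    def store_data(self):
        self.stored = True            # always store the combined array
        if self.recursive:            # only then touch the sub-adaptors
            for sub in self.subs:
                sub.store_data()

subs = [SubAdaptor(), SubAdaptor()]
# Non-recursive store: the model part stays untouched.
CombinedAdaptor(subs, perform_store_data_recursively=False).store_data()
assert not any(s.written_to_model_part for s in subs)
# Recursive store: the sub-adaptors write through to the model part.
CombinedAdaptor(subs, perform_store_data_recursively=True).store_data()
assert all(s.written_to_model_part for s in subs)
```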

Member


What is dangerous about updating the MP variable, since we are working with the OptApp variable only here?

talhah-ansari previously approved these changes Mar 6, 2026
Member

@Igarizza left a comment


Please answer my questions so that I can better understand the TA functions.

# scaling constraints grad
norm = KratosOA.ExpressionUtils.NormInf(constr_grad[i])
norm = numpy.linalg.norm(constr_grad[i].data.ravel(), ord=numpy.inf)
ta = Kratos.TensorAdaptors.DoubleCombinedTensorAdaptor(constr_grad[i], perform_collect_data_recursively=False, perform_store_data_recursively=False)
Member


Please give a more descriptive name to `ta`. Is it Talhah Ansari? :)

Member Author


Hehe... this stands for TensorAdaptor. ;) [Talhah can sell it as Talhah Ansari ;)]

Contributor


Haha, yes
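The Expression-to-numpy change in the snippet above can be sanity-checked with plain numpy: the inf-norm of the raveled gradient data is just its maximum absolute entry (the `grad` values here are made up for illustration).

```python
import numpy

# A small stand-in gradient array (illustrative values only).
grad = numpy.array([[1.5, -4.0], [2.5, 3.0]])

# The replacement for KratosOA.ExpressionUtils.NormInf: numpy's
# inf-norm on the raveled data equals the maximum absolute entry.
norm = numpy.linalg.norm(grad.ravel(), ord=numpy.inf)
assert norm == numpy.abs(grad).max() == 4.0
```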

m.def_submodule("LinearStrainEnergyResponseUtils")
.def("CalculateValue", &LinearStrainEnergyResponseUtils::CalculateValue)
.def("CalculateGradient", &LinearStrainEnergyResponseUtils::CalculateGradient, py::arg("list_of_gradient_variables"), py::arg("list_of_gradient_required_model_parts"), py::arg("list_of_gradient_computed_model_parts"), py::arg("list_of_container_expressions"), py::arg("perturbation_size"))
.def("CalculateGradient", &LinearStrainEnergyResponseUtils::CalculateGradient, py::arg("physical_variable"), py::arg("value_influencing_model_part"), py::arg("combined_tensor_adaptor"), py::arg("perturbation_size"))
Member


Now this function works with only one physical variable?

Member Author


We never supported computing gradients for a list of variables at once in this C++ function; that is done at the Python level. So I corrected it here.

Following is the block where we go through the list of variables:

for physical_variable, merged_model_part in merged_model_part_map.items():
    KratosOA.ResponseUtils.LinearStrainEnergyResponseUtils.CalculateGradient(
        physical_variable,
        merged_model_part,
        intersected_model_part_map[physical_variable],
        physical_variable_collective_expressions[physical_variable].GetContainerExpressions(),
        self.perturbation_size)

{
template<class TContainerType>
Expression::ConstPointer GetNodalDomainSizeExpression(
void GetNodalDomainSizes(
Member


This is a bit of a strange design: a Get function returning void. CalculateNodalDomainSize?

Member Author


True.... I changed it to CalculateNodalDomainSize. :D

<< "Only scalar values are allowed for the filter radius tensor adaptor."
<< "Provided tensor adaptor = " << *pTensorAdaptor << ".\n";

if (std::holds_alternative<ModelPart::NodesContainerType::Pointer>(pTensorAdaptor->GetContainer())) {
Member


Instead of these if blocks, maybe we can use TMeshDependencyType?

Member Author


Mmm... I am not sure I am getting your point. TensorAdaptor can hold pointers to different types of containers, such as nodes, elements, conditions, and master_slave_constraints. This if checks what type of pointer the TensorAdaptor is holding. TMeshDependencyType is a compile-time thing, whereas this if block is a runtime thing.
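A conceptual sketch (plain Python analogy, not Kratos code) of the distinction: the if block dispatches at runtime on which container the adaptor actually holds, which a compile-time template parameter cannot decide.

```python
# Hypothetical container types standing in for the Kratos containers.
class Nodes: pass
class Elements: pass

def describe_container(container):
    # Runtime dispatch, analogous to std::holds_alternative on the
    # variant of container pointers held by the TensorAdaptor.
    if isinstance(container, Nodes):
        return "nodal"
    elif isinstance(container, Elements):
        return "elemental"
    raise TypeError("unsupported container type")

assert describe_container(Nodes()) == "nodal"
assert describe_container(Elements()) == "elemental"
```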

<< "Filter radius container expression model part and filter model part mismatch."
<< "\n\tFilter = " << *this
<< "\n\tContainerExpression = " << rContainerExpression;
if (std::holds_alternative<ModelPart::NodesContainerType::Pointer>(rTensorAdaptor.GetContainer())) {
Member


Maybe here as well we can simplify with TMeshDependencyType?

Member Author


Same as the previous comment.

* @see @ref DataValueContainer::GetValue Variable value retrieval/update method.
* @see @ref DataValueContainer::SetValue Variable value set method.
*/
class KRATOS_API(OPTIMIZATION_APPLICATION) PropertiesVariableTensorAdaptor: public TensorAdaptor<double> {
Member


Where do you use these properties? I can't find it. =(

Member Author


const auto& sensitivity_variable = KratosComponents<Variable<double>>::Get(pVariable->Name() + "_SENSITIVITY");
PropertiesVariableTensorAdaptor(*p_tensor_adaptor, &sensitivity_variable, false).CollectData();

KRATOS_CATCH("");
}

TensorAdaptor<double>::Pointer OptimizationUtils::MapContainerDataToNodalData(
Member


Please add documentation on how it is expected to work.

Member Author


It is there in optimization_utils.h. We usually add documentation only in the header so that Doxygen can read it and create the appropriate HTML files. [I improved it slightly]

The documentation is:

/**
* @brief Maps container data to nodal data
*
* @details This method maps the container data given in @p rInput to the nodes given by @p pNodes .
* It distributes each entity's data to its nodes equally.
*
* @throws std::runtime_error if the container in @p rInput is not of type conditions or elements.
* @throws std::runtime_error if the entities in @p rInput 's container contain nodes which are not found in the @p pNodes.
*
* @param rInput Input containing the values which need to be mapped to nodes (container should be conditions or elements).
* @param pNodes List of nodes to which the values will be mapped out.
* @returns Returns a tensor adaptor having the container @p pNodes with the mapped values.
*/
static TensorAdaptor<double>::Pointer MapContainerDataToNodalData(
const TensorAdaptor<double>& rInput,
ModelPart::NodesContainerType::Pointer pNodes);
/**
* @brief Maps nodal data to a specific container data
*
* @details This method maps the nodal data given in @p rInput to the container given by @p pEntities .
* It first divides each node's value by the number of neighbour entities, and adds that value to each
* neighbouring entity.
*
* @throws std::runtime_error if the container in @p rInput is not a nodal container.
* @throws std::runtime_error if the nodal container in @p rInput is not the same as the nodal container in @p rNeighbourCount .
* @throws std::runtime_error if the entities in @p pEntities contain nodes which are not found in the nodal container of @p rInput .
*
* @param rInput Input containing the values which need to be mapped to conditions / elements (container should be nodal).
* @param pEntities List of entities to which the values will be mapped out.
* @param rNeighbourCount Number of neighbour entities around each node.
* @returns Returns a tensor adaptor having the container @p pEntities with the mapped values.
*/
template<class TContainerPointerType>
static TensorAdaptor<double>::Pointer MapNodalDataToContainerData(
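The forward mapping documented above ("distributes each entity's data to its nodes equally") can be sketched in plain Python; this is a conceptual illustration of the documented behaviour, not the Kratos implementation.

```python
# Conceptual sketch: split each entity's value equally among its
# nodes and accumulate the shares per node.
def map_container_data_to_nodal_data(entity_values, entity_nodes, node_ids):
    nodal = {node_id: 0.0 for node_id in node_ids}
    for value, nodes in zip(entity_values, entity_nodes):
        share = value / len(nodes)   # equal share per node of the entity
        for node_id in nodes:
            nodal[node_id] += share
    return nodal

# Two triangles sharing nodes 2 and 3.
result = map_container_data_to_nodal_data(
    entity_values=[3.0, 6.0],
    entity_nodes=[(1, 2, 3), (2, 3, 4)],
    node_ids=[1, 2, 3, 4])
assert result == {1: 1.0, 2: 3.0, 3: 3.0, 4: 2.0}
```

Note that the mapping conserves the total: the nodal values sum to 9.0, the same as the entity values.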


TensorAdaptor<double>::Pointer OptimizationUtils::MapContainerDataToNodalData(
const TensorAdaptor<double>& rInput,
ModelPart::NodesContainerType::Pointer pNodes)
Member


Can we pass a ModelPart and get the nodes and conditions out of it?

Member


Adding this function ... how does it relate to TA?

Member Author


can we pass ModelPart and get nodes and conditions out of it??

Sure, you can get nodes / conditions / elements from a model part easily by using MP.Nodes, MP.Conditions, and MP.Elements. Then you can convert elemental / condition data to nodal data using this method.

Adding this function ... how does it relate to TA ?

Earlier this method existed with Expression, and it is used heavily by the ImplicitFilter. Now I updated it to use TensorAdaptors, hence it is in this PR.
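The companion MapNodalDataToContainerData documented earlier works the other way around: each node's value is divided by its neighbour-entity count and added to every neighbouring entity. A conceptual plain-Python sketch (not the Kratos implementation):

```python
# Conceptual sketch of the reverse mapping: each node contributes
# value / neighbour_count to every entity that touches it.
def map_nodal_data_to_container_data(nodal_values, entity_nodes, neighbour_count):
    entity_values = []
    for nodes in entity_nodes:
        value = sum(nodal_values[n] / neighbour_count[n] for n in nodes)
        entity_values.append(value)
    return entity_values

# Two triangles sharing nodes 2 and 3 (the shared nodes each have
# two neighbouring entities, the outer nodes only one).
values = map_nodal_data_to_container_data(
    nodal_values={1: 1.0, 2: 3.0, 3: 3.0, 4: 2.0},
    entity_nodes=[(1, 2, 3), (2, 3, 4)],
    neighbour_count={1: 1, 2: 2, 3: 2, 4: 1})
assert values == [4.0, 5.0]
```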

# model part
return self.model_part.GetRootModelPart()

def __ExtractTensorData(self, variable, tensor_adaptor: Kratos.TensorAdaptors.DoubleTensorAdaptor, nodes_container: Kratos.NodesArray) -> Kratos.TensorAdaptors.DoubleTensorAdaptor:
Member


Why do you need this function? And what does it do? I can't follow. Please add some documentation.

Member Author


Added comments. Could you please check if it is descriptive enough?

@sunethwarna
Member Author

@Igarizza I addressed all of your comments. Could you check?

@sunethwarna requested a review from Igarizza March 9, 2026 21:07

Labels

Applications, C++, Python, Refactor (when code is moved or rewritten keeping the same behavior)
