[OptApp / SIApp] Updating the Expressions to TensorAdaptors #14214
sunethwarna wants to merge 115 commits into master from
Conversation
talhah-ansari left a comment:
Very impressive. @sunethwarna Thanks!
Resolved review threads:
- applications/OptimizationApplication/python_scripts/controls/material/simp_control.py
- ...cations/OptimizationApplication/python_scripts/controls/thickness/shell_thickness_control.py
- .../OptimizationApplication/python_scripts/processes/optimization_problem_vtu_output_process.py
- ...alysis_based_tests/algorithm_relaxed_gradient_projection/test_relaxed_gradient_projection.py (outdated)
- ...alysis_based_tests/algorithm_relaxed_gradient_projection/test_relaxed_gradient_projection.py (outdated)
```diff
 for i in range(20):
-    control_field *= 1.2
     simp_control.Update(control_field)
+    control_field.data += 1.2
```

Comment: Is this `*` -> `+` change intentional? Also line 257.

Reply: Yeah, this is because `control_field` was zeros before.
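The reply's point is simple arithmetic: in-place multiplication cannot move a field off zero, which is why the test switched to addition. A plain-NumPy illustration (standalone sketch; the real test operates on a Kratos tensor adaptor, not a bare array):

```python
import numpy

# A field that starts at zero never changes under in-place multiplication,
# so the test had to switch to addition to actually perturb the control field.
field = numpy.zeros(4)

field *= 1.2
print(field.tolist())  # [0.0, 0.0, 0.0, 0.0] -- still all zeros

field += 1.2
print(field.tolist())  # [1.2, 1.2, 1.2, 1.2] -- now nonzero everywhere
```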
```diff
 2, 2.6811e+03, 1.5885e-03
 3, 2.6677e+03, 1.5436e-03
-4, 2.6191e+03, 1.5046e-03
+4, 2.6841e+03, 1.5475e-03
```

Comment: Is this value change intentional?
talhah-ansari left a comment:
Thanks very much @sunethwarna
```python
def ComputeControlUpdate(self, alpha: Kratos.TensorAdaptors.DoubleCombinedTensorAdaptor) -> None:
    update: Kratos.TensorAdaptors.DoubleCombinedTensorAdaptor = self.algorithm_data.GetBufferedData()["search_direction"].Clone()
    update.data[:] *= alpha.data[:]
    Kratos.TensorAdaptors.DoubleCombinedTensorAdaptor(update, perform_store_data_recursively=False, copy=False).StoreData()
```

Comment: As I understood the documentation, we can simply say `update.StoreData()`, or?

Reply: You can, but if the `CombinedTensorAdaptor` was created with `perform_store_data_recursively=true`, it will perform the store recursively, which also writes the values back to the model part. The way I did it is safer: it does not copy the data, it just performs the store on the existing array without the recursive calls.

Comment: What is dangerous about updating the MP variable, since we are working with the OptApp variable here only?
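The recursive versus non-recursive `StoreData` behaviour described in the reply can be sketched with a toy stand-in (a mock written for this discussion, not the actual Kratos `CombinedTensorAdaptor` API):

```python
class MockTensorAdaptor:
    """Toy leaf adaptor: StoreData() writes its array back into a model-part dict."""
    def __init__(self, name, model_part):
        self.name = name
        self.model_part = model_part
        self.data = [0.0]

    def StoreData(self):
        self.model_part[self.name] = list(self.data)


class MockCombinedTensorAdaptor:
    """Toy combined adaptor that optionally recurses into its sub-adaptors."""
    def __init__(self, sub_adaptors, perform_store_data_recursively):
        self.sub_adaptors = sub_adaptors
        self.recursive = perform_store_data_recursively
        self.data = [0.0] * len(sub_adaptors)

    def StoreData(self):
        # always push the combined array into the sub-adaptor arrays ...
        for value, sub in zip(self.data, self.sub_adaptors):
            sub.data[0] = value
        # ... but only the recursive variant also stores down to the model part
        if self.recursive:
            for sub in self.sub_adaptors:
                sub.StoreData()


model_part = {}
leaves = [MockTensorAdaptor("a", model_part), MockTensorAdaptor("b", model_part)]

recursive = MockCombinedTensorAdaptor(leaves, perform_store_data_recursively=True)
recursive.data = [1.0, 2.0]
recursive.StoreData()
print(model_part)  # {'a': [1.0], 'b': [2.0]} -> model part was written

model_part.clear()
safe = MockCombinedTensorAdaptor(leaves, perform_store_data_recursively=False)
safe.data = [3.0, 4.0]
safe.StoreData()
print(model_part)  # {} -> sub-adaptor arrays updated, model part untouched
```

This mirrors why wrapping the update in a non-recursive combined adaptor avoids accidentally writing values back to the model part.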
Igarizza left a comment:
Please answer my questions so that I can better understand the TA functions.
```diff
 # scaling constraints grad
-norm = KratosOA.ExpressionUtils.NormInf(constr_grad[i])
+norm = numpy.linalg.norm(constr_grad[i].data.ravel(), ord=numpy.inf)
+ta = Kratos.TensorAdaptors.DoubleCombinedTensorAdaptor(constr_grad[i], perform_collect_data_recursively=False, perform_store_data_recursively=False)
```

Comment: Please give `ta` a more descriptive name. Is it Talhah Ansari? ;)

Reply: Hehe... it stands for TensorAdaptor ;) [Talhah can sell it as Talhah Ansari ;)]
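As a side note on the replacement shown in the diff, `numpy.linalg.norm` with `ord=numpy.inf` on a flattened (1-D) array returns the maximum absolute entry, which is exactly the infinity norm the old `NormInf` call computed:

```python
import numpy

grad = numpy.array([[0.5, -3.2], [1.1, 2.0]])

# infinity norm of a flattened 1-D array = maximum absolute value
norm = numpy.linalg.norm(grad.ravel(), ord=numpy.inf)
print(norm)  # 3.2
assert norm == numpy.abs(grad).max()
```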
```diff
 m.def_submodule("LinearStrainEnergyResponseUtils")
     .def("CalculateValue", &LinearStrainEnergyResponseUtils::CalculateValue)
-    .def("CalculateGradient", &LinearStrainEnergyResponseUtils::CalculateGradient, py::arg("list_of_gradient_variables"), py::arg("list_of_gradient_required_model_parts"), py::arg("list_of_gradient_computed_model_parts"), py::arg("list_of_container_expressions"), py::arg("perturbation_size"))
+    .def("CalculateGradient", &LinearStrainEnergyResponseUtils::CalculateGradient, py::arg("physical_variable"), py::arg("value_influencing_model_part"), py::arg("combined_tensor_adaptor"), py::arg("perturbation_size"))
```

Comment: Does this function now work with only one physical variable?

Reply: We never supported computing gradients for a list of variables at once in this C++ function; that is done at the Python level. So I corrected it here. The following is the block where we go through the list of variables...
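The Python-level block the reply refers to is not captured here; a schematic sketch of that pattern, with hypothetical names (`calculate_gradient_single` stands in for the single-variable C++ binding, which in the real code is `CalculateGradient`):

```python
def calculate_gradient_single(variable, data):
    """Stand-in for the single-physical-variable C++ binding."""
    # placeholder gradient: scale the input data
    return [2.0 * v for v in data]

def calculate_gradients(variables, data):
    # The loop over physical variables lives at the Python level:
    # one C++ call per variable, results collected into a dict.
    return {var: calculate_gradient_single(var, data[var]) for var in variables}

grads = calculate_gradients(
    ["YOUNG_MODULUS", "DENSITY"],
    {"YOUNG_MODULUS": [1.0, 2.0], "DENSITY": [3.0]})
print(grads)  # {'YOUNG_MODULUS': [2.0, 4.0], 'DENSITY': [6.0]}
```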
```diff
 {
 template<class TContainerType>
-Expression::ConstPointer GetNodalDomainSizeExpression(
+void GetNodalDomainSizes(
```

Comment: This is a bit of a strange design: a Get function that returns void. `CalculateNodalDomainSize`?

Reply: True... I changed it to `CalculateNodalDomainSize`. :D
```cpp
    << "Only scalar values are allowed for the filter radius tensor adaptor."
    << "Provided tensor adaptor = " << *pTensorAdaptor << ".\n";

if (std::holds_alternative<ModelPart::NodesContainerType::Pointer>(pTensorAdaptor->GetContainer())) {
```

Comment: Instead of these if blocks, maybe we could use `TMeshDependencyType`?

Reply: Mmm... I am not sure I get your point. A `TensorAdaptor` can hold pointers to different types of containers, such as nodes, elements, conditions, and master_slave_constraints. This if checks which type of pointer the `TensorAdaptor` is holding. `TMeshDependencyType` is a compile-time construct, whereas this if block is a runtime check.
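The runtime dispatch being debated is, in Python terms, an `isinstance`-style branch on the held container type (a toy sketch with invented class names, not Kratos code; the C++ original uses `std::holds_alternative` on a variant of container pointers):

```python
class NodesContainer:
    pass

class ElementsContainer:
    pass

def apply_filter(container):
    # Runtime branch on the container type, analogous to
    # std::holds_alternative<...>(pTensorAdaptor->GetContainer()) in the C++.
    if isinstance(container, NodesContainer):
        return "nodal filtering"
    elif isinstance(container, ElementsContainer):
        return "elemental filtering"
    raise TypeError("unsupported container type")

print(apply_filter(NodesContainer()))     # nodal filtering
print(apply_filter(ElementsContainer()))  # elemental filtering
```

A compile-time template parameter could not replace this branch, because the held container type is only known when the adaptor arrives at runtime.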
```cpp
    << "Filter radius container expression model part and filter model part mismatch."
    << "\n\tFilter = " << *this
    << "\n\tContainerExpression = " << rContainerExpression;

if (std::holds_alternative<ModelPart::NodesContainerType::Pointer>(rTensorAdaptor.GetContainer())) {
```

Comment: Maybe here as well we could simplify with `TMeshDependencyType`?

Reply: Same as the previous comment.
```cpp
 * @see @ref DataValueContainer::GetValue Variable value retrieval/update method.
 * @see @ref DataValueContainer::SetValue Variable value set method.
 */
class KRATOS_API(OPTIMIZATION_APPLICATION) PropertiesVariableTensorAdaptor: public TensorAdaptor<double> {
```

Comment: Where do you use these properties? I can't find it =(
```cpp
    KRATOS_CATCH("");
}

TensorAdaptor<double>::Pointer OptimizationUtils::MapContainerDataToNodalData(
```

Comment: Please add documentation on how it is expected to work.

Reply: It is there in optimization_utils.h. We usually add documentation only in the header so that Doxygen can read it and create the appropriate HTML files. [I improved it slightly.] The documentation is:
```cpp
TensorAdaptor<double>::Pointer OptimizationUtils::MapContainerDataToNodalData(
    const TensorAdaptor<double>& rInput,
    ModelPart::NodesContainerType::Pointer pNodes)
```
Comment: Can we pass a ModelPart and get the nodes and conditions out of it?

Comment: Adding this function... how does it relate to TA?

Reply:
> Can we pass a ModelPart and get the nodes and conditions out of it?

Sure, you can get nodes/conditions/elements from a model part easily by using MP.Nodes, MP.Conditions, and MP.Elements. Then you can convert elemental/condition data to nodal data using this method.

> Adding this function... how does it relate to TA?

Earlier this method existed with Expression and was used heavily by the ImplicitFilter. Now I updated it to use TensorAdaptors, hence it is in this PR.
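The container-to-nodal mapping that `MapContainerDataToNodalData` performs can be illustrated with a toy averaging scheme (illustrative only; the actual Kratos implementation may weight the contributions differently):

```python
def map_element_data_to_nodes(element_values, element_nodes, num_nodes):
    """Average each node's surrounding element values onto the node."""
    sums = [0.0] * num_nodes
    counts = [0] * num_nodes
    for value, nodes in zip(element_values, element_nodes):
        # every node of the element receives the element's value
        for n in nodes:
            sums[n] += value
            counts[n] += 1
    # nodes touched by several elements get the average of their contributions
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

# Two 1-D elements sharing node 1: element 0 -> nodes (0, 1), element 1 -> nodes (1, 2).
nodal = map_element_data_to_nodes([2.0, 4.0], [(0, 1), (1, 2)], 3)
print(nodal)  # [2.0, 3.0, 4.0]
```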
```python
    # model part
    return self.model_part.GetRootModelPart()

def __ExtractTensorData(self, variable, tensor_adaptor: Kratos.TensorAdaptors.DoubleTensorAdaptor, nodes_container: Kratos.NodesArray) -> Kratos.TensorAdaptors.DoubleTensorAdaptor:
```

Comment: Why do you need this function? And what does it do? I can't follow. Please add some documentation.

Reply: Added comments. Could you please check whether they are descriptive enough?
@Igarizza I addressed all of your comments. Could you check?
📝 Description
This PR removes all the use cases of `Expression` in `OptimizationApplication` and `SystemIdentificationApplication`, and replaces them with the new `TensorAdaptor`s. This requires the following PRs to be merged first.
🆕 Changelog
- Replaced `Expression` with `TensorAdaptor`