# Use less memory in multi_normal_cholesky_lpdf (#2983)
base: develop
Changes from 4 commits:
- 0434a7b
- ccf5783
- 1284e64
- 226e865
- a08db5e
```diff
@@ -53,6 +53,9 @@ return_type_t<T_y, T_loc, T_covar> multi_normal_cholesky_lpdf(
   using T_partials_return = partials_return_t<T_y, T_loc, T_covar>;
   using matrix_partials_t
       = Eigen::Matrix<T_partials_return, Eigen::Dynamic, Eigen::Dynamic>;
+  using vector_partials_t = Eigen::Matrix<T_partials_return, Eigen::Dynamic, 1>;
+  using row_vector_partials_t
+      = Eigen::Matrix<T_partials_return, 1, Eigen::Dynamic>;
   using T_y_ref = ref_type_t<T_y>;
   using T_mu_ref = ref_type_t<T_loc>;
   using T_L_ref = ref_type_t<T_covar>;
```
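For orientation (not part of the diff): the two new aliases are ordinary Eigen dynamic vector types, added so the scratch buffers in the second hunk can be vector-sized rather than the `size_y` × `size_vec` matrix used on develop. A minimal sketch of the footprint difference, assuming `double` for `T_partials_return` and purely illustrative sizes:

```cpp
#include <Eigen/Dense>

// Illustrative stand-in; in the real code T_partials_return is deduced.
using T_partials_return = double;
using vector_partials_t = Eigen::Matrix<T_partials_return, Eigen::Dynamic, 1>;
using row_vector_partials_t
    = Eigen::Matrix<T_partials_return, 1, Eigen::Dynamic>;

int main() {
  const int size_y = 50;      // dimension of each observation (hypothetical)
  const int size_vec = 1000;  // number of observations (hypothetical)
  // develop: one scratch matrix holding every centered observation at once.
  Eigen::Matrix<T_partials_return, Eigen::Dynamic, Eigen::Dynamic>
      y_val_minus_mu_val_all(size_y, size_vec);  // 50,000 doubles
  // this PR: a few vectors, reused on every loop iteration.
  vector_partials_t y_val_minus_mu_val(size_y);  // 50 doubles
  row_vector_partials_t half(size_y);            // 50 doubles
  return 0;
}
```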
```diff
@@ -119,59 +122,49 @@ return_type_t<T_y, T_loc, T_covar> multi_normal_cholesky_lpdf(
   }
 
   if (include_summand<propto, T_y, T_loc, T_covar_elem>::value) {
-    Eigen::Matrix<T_partials_return, Eigen::Dynamic, Eigen::Dynamic>
-        y_val_minus_mu_val(size_y, size_vec);
+    row_vector_partials_t half(size_vec);
+    vector_partials_t y_val_minus_mu_val(size_vec);
+    vector_partials_t scaled_diff(size_vec);
+    matrix_partials_t L_val = value_of(L_ref);
+
+    T_partials_return sum_lp_vec(0.0);
 
     for (size_t i = 0; i < size_vec; i++) {
       decltype(auto) y_val = as_value_column_vector_or_scalar(y_vec[i]);
       decltype(auto) mu_val = as_value_column_vector_or_scalar(mu_vec[i]);
-      y_val_minus_mu_val.col(i) = y_val - mu_val;
+      y_val_minus_mu_val = eval(y_val - mu_val);
+      half = mdivide_left_tri<Eigen::Lower>(L_val, y_val_minus_mu_val)
+                 .transpose();
+      scaled_diff = mdivide_right_tri<Eigen::Lower>(half, L_val).transpose();
```
**Review thread on lines +136 to +138:**

> This is the part that concerns me, since it's gone from a single solve each (with a matrix) to one solve per loop iteration. Is there enough of a memory hit to justify the extra operations?

> Definitely agreed. We do this sort of thing already in […]
> I'm really not sure what's best here.

> If it's alright with you, I'd prefer not to implement this. The current implementation is likely to scale better to larger inputs, and the changes would also reduce any benefits from OpenCL-accelerated ops. But I'm also completely happy for you to call someone in for a tie-breaker if you feel strongly about it!

> I'm fine with closing it, but I want someone to weigh in on whether we should change the other distributions. I can update the MVN derivatives PR to follow the same approach.

> @SteveBronder - as the Chief of Memory Police, what do you think?

> I have an M1 Max. Is there someone who could benchmark on Windows and Linux machines?

> I got the library set up, but I don't have taskset. Also, how can I set up the script to run the two branches?

> You don't need taskset to run the benchmarks, only if you want to isolate to a single core. You can add another branch in your benchmarks CMake file like […]
> Then you can include it in your executable build like […]

> Then how do I just run benchmarks for this distribution?

> There's an example of how to add a benchmark in the README, and an example benchmark folder below. You need to write a little CMake file for compiling the benchmark and should be able to use that folder as an example: https://github.com/SteveBronder/stan-perf/tree/main/benchmarks/matmul_aos_soa
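For orientation, here is a minimal sketch of what such a per-distribution benchmark could look like with Google Benchmark (the framework stan-perf uses). The function name, problem sizes, and identity-covariance setup are illustrative assumptions, not code from the thread:

```cpp
#include <benchmark/benchmark.h>
#include <stan/math.hpp>

// Hypothetical micro-benchmark: times the reverse-mode gradient of
// multi_normal_cholesky_lpdf, which is the code path this PR changes.
static void multi_normal_cholesky_grad(benchmark::State& state) {
  const int N = state.range(0);
  Eigen::MatrixXd L = Eigen::MatrixXd::Identity(N, N);  // trivial Cholesky
  Eigen::VectorXd y = Eigen::VectorXd::Random(N);
  Eigen::VectorXd mu_d = Eigen::VectorXd::Zero(N);
  for (auto _ : state) {
    // Make mu autodiff so the partials branch is exercised.
    Eigen::Matrix<stan::math::var, Eigen::Dynamic, 1> mu
        = stan::math::to_var(mu_d);
    stan::math::var lp = stan::math::multi_normal_cholesky_lpdf(y, mu, L);
    lp.grad();
    stan::math::recover_memory();  // reset the autodiff stack each iteration
  }
}
BENCHMARK(multi_normal_cholesky_grad)->RangeMultiplier(2)->Range(8, 256);
BENCHMARK_MAIN();
```

Building this once against develop and once against the PR branch and comparing timings would answer the matrix-solve versus looped-solve question at realistic sizes.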
```diff
+
+      sum_lp_vec += dot_self(half);
+
+      if (!is_constant_all<T_y>::value) {
+        partials_vec<0>(ops_partials)[i] += -scaled_diff;
+      }
+      if (!is_constant_all<T_loc>::value) {
+        partials_vec<1>(ops_partials)[i] += scaled_diff;
+      }
+      if (!is_constant<T_covar_elem>::value) {
+        partials_vec<2>(ops_partials)[i] += scaled_diff * half;
+      }
     }
 
-    matrix_partials_t half;
-    matrix_partials_t scaled_diff;
+    logp += -0.5 * sum_lp_vec;
 
     // If the covariance is not autodiff, we can avoid computing a matrix
     // inverse
     if (is_constant<T_covar_elem>::value) {
-      matrix_partials_t L_val = value_of(L_ref);
-
-      half = mdivide_left_tri<Eigen::Lower>(L_val, y_val_minus_mu_val)
-                 .transpose();
-
-      scaled_diff = mdivide_right_tri<Eigen::Lower>(half, L_val).transpose();
-
       if (include_summand<propto>::value) {
         logp -= sum(log(L_val.diagonal())) * size_vec;
       }
     } else {
       matrix_partials_t inv_L_val
           = mdivide_left_tri<Eigen::Lower>(value_of(L_ref));
 
-      half = (inv_L_val.template triangularView<Eigen::Lower>()
-              * y_val_minus_mu_val)
-                 .transpose();
-
-      scaled_diff = (half * inv_L_val.template triangularView<Eigen::Lower>())
-                        .transpose();
-
       logp += sum(log(inv_L_val.diagonal())) * size_vec;
       partials<2>(ops_partials) -= size_vec * inv_L_val.transpose();
-
-      for (size_t i = 0; i < size_vec; i++) {
-        partials_vec<2>(ops_partials)[i] += scaled_diff.col(i) * half.row(i);
-      }
     }
-
-    logp -= 0.5 * sum(columns_dot_self(half));
-
-    for (size_t i = 0; i < size_vec; i++) {
-      if (!is_constant_all<T_y>::value) {
-        partials_vec<0>(ops_partials)[i] -= scaled_diff.col(i);
-      }
-      if (!is_constant_all<T_loc>::value) {
-        partials_vec<1>(ops_partials)[i] += scaled_diff.col(i);
-      }
-    }
   }
```
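To make the trade-off debated in the review thread concrete, here is a standalone Eigen sketch (hypothetical, not code from the PR) of the two access patterns: develop performs one triangular solve with an N × K matrix right-hand side and keeps the whole N × K result, while this PR performs K vector solves but only ever keeps a single N-vector alive:

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
  const int N = 4;  // dimension of each observation
  const int K = 3;  // number of observations
  Eigen::MatrixXd L = Eigen::MatrixXd::Random(N, N);
  L.diagonal().array() += N;  // keep the triangular solves well-conditioned
  Eigen::MatrixXd diffs = Eigen::MatrixXd::Random(N, K);  // y - mu, columnwise

  // develop-style: one blocked solve, plus an N x K temporary for the result.
  Eigen::MatrixXd half_all = L.triangularView<Eigen::Lower>().solve(diffs);
  double lp_matrix = half_all.squaredNorm();

  // PR-style: K small solves, reusing one N-vector of scratch space.
  Eigen::VectorXd half(N);
  double lp_looped = 0.0;
  for (int i = 0; i < K; ++i) {
    half = L.triangularView<Eigen::Lower>().solve(diffs.col(i));
    lp_looped += half.squaredNorm();
  }

  std::cout << lp_matrix - lp_looped << "\n";  // agrees up to rounding
  return 0;
}
```

Both routes compute the same quadratic form; the open question in the thread is purely which is faster and lighter on memory at realistic sizes.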