
Commit 5610f5d

Fix the logic of Weight::applyGradient in nntrainer/tensor/weight.cpp
This commit fixes the logic of `Weight::applyGradient(double lr, Tensor &updated_grad)` in `nntrainer/tensor/weight.cpp`. The previous version fell back to `applyGradient(lr)`, which ignores `updated_grad`. This version now uses `updated_grad` to apply gradients on the FP32 path.

Signed-off-by: PJH6029 <[email protected]>
1 parent 2470bbc commit 5610f5d

File tree

1 file changed: +2 -1 lines changed


nntrainer/tensor/weight.cpp

Lines changed: 2 additions & 1 deletion
@@ -137,7 +137,8 @@ void Weight::applyGradient(double lr, Tensor &updated_grad) {
     quantizeWeight();
     return;
   } else {
-    return applyGradient(lr);
+    /** FP32 (or matching dtype) path: apply the provided updated_grad directly */
+    var->add_i(updated_grad, -lr);
   }
 }
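To illustrate the fixed path, here is a minimal sketch of what `var->add_i(updated_grad, -lr)` does. The `Tensor` struct below is a hypothetical stand-in, not nntrainer's real class; the only assumption carried over from the diff is that `add_i(other, alpha)` performs an in-place scaled add (`this += alpha * other`), so passing `-lr` applies a plain SGD weight update.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-in for nntrainer's Tensor (names assumed, not the
// real API): add_i(other, alpha) is an in-place axpy, this += alpha * other.
struct Tensor {
  std::vector<float> data;
  void add_i(const Tensor &other, float alpha) {
    for (std::size_t i = 0; i < data.size(); ++i)
      data[i] += alpha * other.data[i];
  }
};

// The fixed FP32 path in spirit: apply the provided gradient directly,
// scaled by -lr, instead of falling back to applyGradient(lr), which
// would ignore updated_grad entirely.
inline void apply_gradient(Tensor &var, double lr, const Tensor &updated_grad) {
  var.add_i(updated_grad, static_cast<float>(-lr));
}
```

With `lr = 0.1`, a weight of `1.0` and a gradient of `0.5` become `1.0 - 0.1 * 0.5 = 0.95`, which is exactly what the buggy fallback failed to compute from `updated_grad`.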
