Description
Bug reproduction environment
Version 2.2.0.rc0-gpu
Single machine, single GPU
Bug reproduction steps and minimal code set
When assigning via slicing as below, the gradient information of the two inputs is not propagated to the output:
tiou[i:] = inter/union
The list.append + concatenate workaround is very slow.
import paddle
import time
a = paddle.rand(shape=[1,4])
b = paddle.rand(shape=[1,4])
a.stop_gradient = False
b.stop_gradient = False
print('=====paddle=====')
d = paddle.zeros((4, 4))
print(d.stop_gradient)
c = a/b
d[0, :] = a/b  # slice assignment: gradient should propagate to d
print(a.stop_gradient)
print(b.stop_gradient)
print(c.stop_gradient)
print('Is d requires grad: ', not d.stop_gradient)
#---------------------------------------------
import torch
a = torch.rand([1,4])
b = torch.rand([1,4])
a.requires_grad = True
b.requires_grad = True
print('====torch====')
d = torch.zeros((4, 4))
print(d.requires_grad)
c = a/b
d[0, :] = a/b  # slice assignment: torch propagates requires_grad to d
print(a.requires_grad)
print(b.requires_grad)
print(c.requires_grad)
print('Is d requires grad: ', d.requires_grad)
Expected result
=====paddle=====
True
False
False
False
Is d requires grad: True
====torch====
False
True
True
True
Is d requires grad: True
Actual result
=====paddle=====
True
False
False
False
Is d requires grad: False
====torch====
False
True
True
True
Is d requires grad: True
Paddle's behavior is inconsistent with PyTorch: the stop_gradient attribute is not propagated through slice assignment.
其他补充
No response