Hi, when I ran coref.py, I encountered a RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation. I've tried PyTorch 0.4.1 (the version in requirements.txt) and PyTorch 1.0, but both give the same error. Could you please look into this? Thanks!
File "coref.py", line 692, in <module>
trainer.train(150)
File "coref.py", line 458, in train
self.train_epoch(epoch, *args, **kwargs)
File "coref.py", line 488, in train_epoch
corefs_found, total_corefs, corefs_chosen = self.train_doc(doc)
File "coref.py", line 555, in train_doc
loss.backward()
File "/opt/conda/envs/mlkit36/lib/python3.6/site-packages/torch/tensor.py", line 93, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/opt/conda/envs/mlkit36/lib/python3.6/site-packages/torch/autograd/__init__.py", line 90, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
The in-place operation is defined here. I think I did it this way because in 0.4.1, dropout could only be applied to a packed sequence by first unpacking it, applying dropout in place, and then repacking it. This seems likely to have been fixed in PyTorch 1.0...
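For anyone hitting this: a possible workaround is to apply dropout out-of-place to the `.data` tensor of the `PackedSequence` and build a new `PackedSequence` from the result, instead of mutating the unpacked tensor. This is a minimal sketch, not the repo's code — `dropout_packed` is a hypothetical helper name, and it assumes the standard `torch.nn.utils.rnn` API:

```python
import torch
import torch.nn.functional as F
from torch.nn.utils.rnn import pack_padded_sequence, PackedSequence

def dropout_packed(packed: PackedSequence, p: float = 0.5,
                   training: bool = True) -> PackedSequence:
    # F.dropout with inplace=False returns a fresh tensor, so the
    # activations autograd saved for backward are left untouched.
    dropped = F.dropout(packed.data, p=p, training=training, inplace=False)
    # PackedSequence is a namedtuple-like container; construct a new one
    # rather than assigning into the old .data in place.
    return PackedSequence(dropped, packed.batch_sizes)

# Usage: a batch of 3 sequences (max length 5, feature dim 4),
# lengths sorted descending as pack_padded_sequence expects.
seqs = torch.randn(3, 5, 4, requires_grad=True)
lengths = torch.tensor([5, 4, 2])
packed = pack_padded_sequence(seqs, lengths, batch_first=True)

out = dropout_packed(packed, p=0.3)
out.data.sum().backward()  # backward succeeds: nothing was modified in place
```

The key point is that the error comes from overwriting a tensor that the autograd graph still needs; any variant that produces a new tensor (out-of-place dropout, or dropout on the packed `.data` directly) should avoid it.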