RuntimeError: index out of range at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:191 #1
Comments
hi~ thanks!
the same problem~
the same problem~
the same problem~
same problem
Same issue when using flair 0.4.1, pytorch 1.1.0 and BertEmbeddings on 2 x NVIDIA Tesla P100
the same problem
I had an issue with the embeddings. I fixed it by initializing the embedding layer with the right size, which is the size of the vocabulary I am using, when creating your Encoder/Model:
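The code snippet from that comment was not captured in this thread, but the fix it describes can be sketched as follows. This is a minimal illustration with assumed names and sizes (`vocab`, `embedding_dim=8` are not from the issue): the embedding table needs one row per distinct id it can receive, so `num_embeddings` must cover every index the model will look up.

```python
import torch
import torch.nn as nn

# Illustrative vocabulary; ids produced by the tokenizer run 0..len(vocab)-1,
# so the table must have at least len(vocab) rows.
vocab = ["<pad>", "<unk>", "hello", "world"]
embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=8)

ids = torch.tensor([0, 2, 3])            # every id < len(vocab): fine
print(embedding(ids).shape)              # torch.Size([3, 8])

# An id >= num_embeddings reproduces the reported error
# (RuntimeError on older PyTorch builds, IndexError on newer ones):
try:
    embedding(torch.tensor([len(vocab)]))
except (RuntimeError, IndexError) as e:
    print("index out of range:", e)
```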
the same problem
same issue
Hi, has anybody fixed the problem?
Hi, try to inspect the size of your vocabulary, if using the
We have the same problem using the LASER bi-LSTM model with
Did anyone get the solution? I'm stuck! I just wanted to confirm what vocab_size means here. Does it mean the length of the set of tokenized words?
It happened to me when I had out-of-vocabulary words that were assigned a value of -1, and it also happens when you set the vocab size to a value smaller than the size of the vocabulary + 1.
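Both failure modes described above can be demonstrated and avoided in one short sketch. The setup here is assumed (a tokenizer that marks out-of-vocabulary words with -1): negative indices and indices >= `num_embeddings` both trigger the out-of-range error, so the fix is to remap OOV words to a dedicated id and reserve a row for it, hence the `vocab_size + 1` table.

```python
import torch
import torch.nn as nn

vocab_size = 5
UNK = vocab_size                          # reserved id for out-of-vocabulary words
embedding = nn.Embedding(vocab_size + 1, 4)   # +1 row for the UNK id

raw_ids = torch.tensor([0, 3, -1, 2])     # -1 marks an OOV word; would crash as-is
safe_ids = torch.where(raw_ids < 0, torch.full_like(raw_ids, UNK), raw_ids)
print(safe_ids)                           # tensor([0, 3, 5, 2])
print(embedding(safe_ids).shape)          # torch.Size([4, 4])
```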
Sorry for answering so late!
@chenxijun1029 in which version has this been fixed? Thank you.
Hey guys, I had the same problem. In my case, I was presenting the input (X) and the output (Y) to the model with len(X) != len(Y) due to an error in a third-party library. Best regards and good luck!
Hi. I too resolved my issue by fixing what @mcszn suggested.
Hi, this works. But would you mind providing an explanation for this?
I guess it was a bug, which is now fixed by @chenxijun1029.
Got it. Thanks very much.
same issue
Why does +1 solve the problem? Shouldn't the embedding be initialized with vocab_size rather than vocab_size + 1?
I have the same issue. See
Observing the following error while running deep ctr on the GPU:
Traceback (most recent call last):
  File "main.py", line 31, in <module>
    model.fit(loader_train, loader_val, optimizer, epochs=5, verbose=True)
  File "/root/deepctr/DeepFM_with_PyTorch/model/DeepFM.py", line 153, in fit
    total = model(xi, xv)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/deepctr/DeepFM_with_PyTorch/model/DeepFM.py", line 98, in forward
    fm_first_order_emb_arr = [(torch.sum(emb(Xi[:, i, :]), 1).t() * Xv[:, i]).t() for i, emb in enumerate(self.fm_first_order_embeddings)]
  File "/root/deepctr/DeepFM_with_PyTorch/model/DeepFM.py", line 98, in <listcomp>
    fm_first_order_emb_arr = [(torch.sum(emb(Xi[:, i, :]), 1).t() * Xv[:, i]).t() for i, emb in enumerate(self.fm_first_order_embeddings)]
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/sparse.py", line 118, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py", line 1454, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: index out of range at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:191
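The traceback points at a per-field embedding lookup, so the same root cause applies: some feature id in `Xi` exceeds the rows of its field's embedding table. A defensive sketch (assumed shapes and names; `feature_sizes` mirrors the per-field tables in DeepFM.py but is not the repo's actual variable) is to derive each table size from the data as max index + 1:

```python
import torch
import torch.nn as nn

# Assumed layout: Xi is (batch, num_fields) of integer feature ids.
# (In the repo Xi carries an extra trailing dim, Xi[:, i, :]; the sizing
# logic is the same.)
Xi = torch.tensor([[1, 0, 7],
                   [2, 5, 3]])

# Each field's table needs at least max_id + 1 rows.
feature_sizes = (Xi.max(dim=0).values + 1).tolist()
embeddings = nn.ModuleList([nn.Embedding(n, 4) for n in feature_sizes])

rows = [emb(Xi[:, i]) for i, emb in enumerate(embeddings)]
print(feature_sizes, rows[0].shape)   # [3, 6, 8] torch.Size([2, 4])
```

In practice the per-field sizes should be computed over the full training and validation data, not a single batch, or a validation batch can still carry an unseen id past the table boundary.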