
fix issue #664 - inverted token and pos emb layers #665


Merged: 2 commits into rasbt:main on Jun 22, 2025

Conversation

@casinca (Contributor) commented Jun 13, 2025:

fixes #664
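For context, the bug in #664 was that the two embedding layers in the chapter 2 exercise solutions were defined with their sizes inverted. Below is a minimal sketch of the kind of fix the PR title describes, not the actual diff; the variable names and values (`vocab_size`, `output_dim`, `max_length`) are assumptions based on the chapter's usual conventions.

```python
# Hedged sketch of the inversion described by the PR title, not the actual
# diff. Variable names and values follow chapter 2 conventions and are
# assumptions for illustration.
import torch

vocab_size = 50257  # GPT-2 BPE vocabulary size
output_dim = 256    # embedding dimension
max_length = 4      # context length for positional embeddings

# Inverted (the bug): the sizes swapped between the two layers.
# token_embedding_layer = torch.nn.Embedding(max_length, output_dim)
# pos_embedding_layer = torch.nn.Embedding(vocab_size, output_dim)

# Fixed: each layer is sized by what it indexes over; token IDs range
# over the vocabulary, positions range over the context length.
token_embedding_layer = torch.nn.Embedding(vocab_size, output_dim)
pos_embedding_layer = torch.nn.Embedding(max_length, output_dim)
```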


@d-kleine (Contributor) commented:

Maybe it would make sense to remove everything after `encoded_text = tokenizer.encode(raw_text)`, as it's not relevant for the exercise. What do you think, @rasbt?
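For reference, the line @d-kleine points to presumably comes from the chapter's tiktoken-based setup. A minimal sketch of that context, assuming the GPT-2 BPE tokenizer used in chapter 2; the input file name `the-verdict.txt` is an assumption for illustration:

```python
# Hedged sketch of the line under discussion, not the notebook's actual code.
import tiktoken

tokenizer = tiktoken.get_encoding("gpt2")  # GPT-2 BPE tokenizer

with open("the-verdict.txt", "r", encoding="utf-8") as f:
    raw_text = f.read()

# The suggestion above: the exercise solution can stop at this step,
# since everything after it was redundant for the exercise.
encoded_text = tokenizer.encode(raw_text)
print(f"Number of tokens: {len(encoded_text)}")
```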

@rasbt (Owner) commented Jun 22, 2025:

Thanks for the fix, @casinca. And I also agree with you, @d-kleine: the lines seem redundant, and I removed them.

@rasbt merged commit 564e986 into rasbt:main on Jun 22, 2025
13 checks passed
Development

Successfully merging this pull request may close these issues.

#664: pos_embedding_layer and token_embedding_layer definition in exercise_solutions chapter 2 incorrect
3 participants