@@ -101,16 +101,16 @@ Several folders contain optional materials as a bonus for interested readers:
  - [Python Setup Tips](setup/01_optional-python-setup-preferences)
  - [Installing Python Packages and Libraries Used In This Book](setup/02_installing-python-libraries)
  - [Docker Environment Setup Guide](setup/03_optional-docker-environment)
- - **Chapter 2:**
+ - **Chapter 2: Working with text data**
  - [Comparing Various Byte Pair Encoding (BPE) Implementations](ch02/02_bonus_bytepair-encoder)
  - [Understanding the Difference Between Embedding Layers and Linear Layers](ch02/03_bonus_embedding-vs-matmul)
  - [Dataloader Intuition with Simple Numbers](ch02/04_bonus_dataloader-intuition)
- - **Chapter 3:**
+ - **Chapter 3: Coding attention mechanisms**
  - [Comparing Efficient Multi-Head Attention Implementations](ch03/02_bonus_efficient-multihead-attention/mha-implementations.ipynb)
  - [Understanding PyTorch Buffers](ch03/03_understanding-buffers/understanding-buffers.ipynb)
- - **Chapter 4:**
+ - **Chapter 4: Implementing a GPT model from scratch**
  - [FLOPS Analysis](ch04/02_performance-analysis/flops-analysis.ipynb)
- - **Chapter 5:**
+ - **Chapter 5: Pretraining on unlabeled data**
  - [Alternative Weight Loading from Hugging Face Model Hub using Transformers](ch05/02_alternative_weight_loading/weight-loading-hf-transformers.ipynb)
  - [Pretraining GPT on the Project Gutenberg Dataset](ch05/03_bonus_pretraining_on_gutenberg)
  - [Adding Bells and Whistles to the Training Loop](ch05/04_learning_rate_schedulers)
@@ -119,11 +119,11 @@ Several folders contain optional materials as a bonus for interested readers:
  - [Converting GPT to Llama](ch05/07_gpt_to_llama)
  - [Llama 3.2 From Scratch](ch05/07_gpt_to_llama/standalone-llama32.ipynb)
  - [Memory-efficient Model Weight Loading](ch05/08_memory_efficient_weight_loading/memory-efficient-state-dict.ipynb)
- - **Chapter 6:**
+ - **Chapter 6: Finetuning for classification**
  - [Additional experiments finetuning different layers and using larger models](ch06/02_bonus_additional-experiments)
  - [Finetuning different models on 50k IMDB movie review dataset](ch06/03_bonus_imdb-classification)
  - [Building a User Interface to Interact With the GPT-based Spam Classifier](ch06/04_user_interface)
- - **Chapter 7:**
+ - **Chapter 7: Finetuning to follow instructions**
  - [Dataset Utilities for Finding Near Duplicates and Creating Passive Voice Entries](ch07/02_dataset-utilities)
  - [Evaluating Instruction Responses Using the OpenAI API and Ollama](ch07/03_model-evaluation)
  - [Generating a Dataset for Instruction Finetuning](ch07/05_dataset-generation/llama3-ollama.ipynb)