Commit 27a6a7e

Add chapter names
1 parent f4ed263 commit 27a6a7e

File tree

1 file changed: +6 −6 lines changed


README.md

@@ -101,16 +101,16 @@ Several folders contain optional materials as a bonus for interested readers:
101101
- [Python Setup Tips](setup/01_optional-python-setup-preferences)
102102
- [Installing Python Packages and Libraries Used In This Book](setup/02_installing-python-libraries)
103103
- [Docker Environment Setup Guide](setup/03_optional-docker-environment)
104-
- **Chapter 2:**
104+
- **Chapter 2: Working with text data**
105105
- [Comparing Various Byte Pair Encoding (BPE) Implementations](ch02/02_bonus_bytepair-encoder)
106106
- [Understanding the Difference Between Embedding Layers and Linear Layers](ch02/03_bonus_embedding-vs-matmul)
107107
- [Dataloader Intuition with Simple Numbers](ch02/04_bonus_dataloader-intuition)
108-
- **Chapter 3:**
108+
- **Chapter 3: Coding attention mechanisms**
109109
- [Comparing Efficient Multi-Head Attention Implementations](ch03/02_bonus_efficient-multihead-attention/mha-implementations.ipynb)
110110
- [Understanding PyTorch Buffers](ch03/03_understanding-buffers/understanding-buffers.ipynb)
111-
- **Chapter 4:**
111+
- **Chapter 4: Implementing a GPT model from scratch**
112112
- [FLOPS Analysis](ch04/02_performance-analysis/flops-analysis.ipynb)
113-
- **Chapter 5:**
113+
- **Chapter 5: Pretraining on unlabeled data:**
114114
- [Alternative Weight Loading from Hugging Face Model Hub using Transformers](ch05/02_alternative_weight_loading/weight-loading-hf-transformers.ipynb)
115115
- [Pretraining GPT on the Project Gutenberg Dataset](ch05/03_bonus_pretraining_on_gutenberg)
116116
- [Adding Bells and Whistles to the Training Loop](ch05/04_learning_rate_schedulers)
@@ -119,11 +119,11 @@ Several folders contain optional materials as a bonus for interested readers:
119119
- [Converting GPT to Llama](ch05/07_gpt_to_llama)
120120
- [Llama 3.2 From Scratch](ch05/07_gpt_to_llama/standalone-llama32.ipynb)
121121
- [Memory-efficient Model Weight Loading](ch05/08_memory_efficient_weight_loading/memory-efficient-state-dict.ipynb)
122-
- **Chapter 6:**
122+
- **Chapter 6: Finetuning for classification**
123123
- [Additional experiments finetuning different layers and using larger models](ch06/02_bonus_additional-experiments)
124124
- [Finetuning different models on 50k IMDB movie review dataset](ch06/03_bonus_imdb-classification)
125125
- [Building a User Interface to Interact With the GPT-based Spam Classifier](ch06/04_user_interface)
126-
- **Chapter 7:**
126+
- **Chapter 7: Finetuning to follow instructions**
127127
- [Dataset Utilities for Finding Near Duplicates and Creating Passive Voice Entries](ch07/02_dataset-utilities)
128128
- [Evaluating Instruction Responses Using the OpenAI API and Ollama](ch07/03_model-evaluation)
129129
- [Generating a Dataset for Instruction Finetuning](ch07/05_dataset-generation/llama3-ollama.ipynb)

0 commit comments
