
Commit 3c66670

Add flexible padding bonus experiment
1 parent f61c008

2 files changed: +82 -34 lines

ch06/02_bonus_additional-experiments/README.md (+12 -10)
```diff
@@ -26,9 +26,10 @@ For example,
 | 13 | gpt2-small (124M) | pretrained | last | last_block | context length (1024) | 83.08% | 87.92% | 78.33% | 2.46 min | A100 |
 | 14 | gpt2-small (124M) | pretrained | last | last_block | variable: no padding (batch size 1) | 100.00% | 98.66% | 98.00% | 1.75 min | A100 |
 | 15 | gpt2-small (124M) | pretrained | last | last_block | variable: no padding (batch size 8) | 99.33% | 98.66% | 98.33% | 1.70 min | A100 |
-| 16 | gpt2-small (124M) | pretrained | last | last_block | longest train ex. (120); but no causal mask | 99.23% | 98.66% | 95.33% | 0.29 min | A100 |
-| 17 | gpt2-small (124M) | pretrained | last | last_block | longest train ex. (120) and `ignore_index` for padding | 96.63% | 99.33% | 95.00% | 0.28 min | A100 |
-| 18 | gpt2-small (124M) | pretrained | last + pooled embeddings | last_block | longest train ex. (120) | 97.79% | 99.33% | 96.33% | 0.32 min | A100 |
+| 16 | gpt2-small (124M) | pretrained | last | last_block | flexible (last non-padding position) | 99.42% | 98.66% | 98.33% | 0.30 min | A100 |
+| 17 | gpt2-small (124M) | pretrained | last | last_block | longest train ex. (120); but no causal mask | 99.23% | 98.66% | 95.33% | 0.29 min | A100 |
+| 18 | gpt2-small (124M) | pretrained | last | last_block | longest train ex. (120) and `ignore_index` for padding | 96.63% | 99.33% | 95.00% | 0.28 min | A100 |
+| 19 | gpt2-small (124M) | pretrained | last + pooled embeddings | last_block | longest train ex. (120) | 97.79% | 99.33% | 96.33% | 0.32 min | A100 |
```
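To make the padding settings in rows 14-16 concrete: the sketch below is not part of the commit, and the token IDs are invented, but 50256 is GPT-2's actual `<|endoftext|>` ID, which the script reuses for padding. It contrasts a right-padded batch with the variable-length setup that forces a batch size of 1.

```python
import torch

pad_token_id = 50256  # GPT-2's <|endoftext|> token, reused as padding
sequences = [[464, 2068, 7586], [464, 3290], [1212]]  # invented token IDs

# Rows 1-13 and 16: right-pad every sequence to a shared length
max_len = max(len(seq) for seq in sequences)
padded = torch.tensor(
    [seq + [pad_token_id] * (max_len - len(seq)) for seq in sequences]
)
print(padded.shape)  # torch.Size([3, 3])

# Rows 14 and 15: no padding, so unequal lengths force batches of size 1
unpadded = [torch.tensor(seq).unsqueeze(0) for seq in sequences]
print([t.shape for t in unpadded])
# [torch.Size([1, 3]), torch.Size([1, 2]), torch.Size([1, 1])]
```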

```diff
@@ -51,9 +52,10 @@ You can use the following code to reproduce the experiments:
 - Row 13: `python additional_experiments.py --context_length "model_context_length"`
 - Row 14: `python additional_experiments.py --no_padding --batch_size 1`
 - Row 15: `python additional_experiments.py --no_padding --batch_size 1 --accumulation_steps 8`
-- Row 16: `python additional_experiments.py --disable_causal_mask`
-- Row 17: `python additional_experiments.py --ignore_index 50256`
-- Row 18: `python additional_experiments.py --average_embeddings`
+- Row 16: `python additional_experiments.py --trainable_token_pos "flexible"`
+- Row 17: `python additional_experiments.py --disable_causal_mask`
+- Row 18: `python additional_experiments.py --ignore_index 50256`
+- Row 19: `python additional_experiments.py --average_embeddings`
 
 I've kept the LLM and dataset small on purpose, so you can run the training on a regular laptop like a MacBook Air M3 in about 15 minutes (for the default setting) in case you don't have access to a GPU.
```

```diff
@@ -69,7 +71,7 @@ I've kept the LLM and dataset small on purpose, so you can run the training on a
 6. **Using a Model with Random Weights vs. Pretrained Weights (Row 1 and 5 vs. 10)**: Utilizing a model with random weights yields results that are only slightly worse (by 3% and 1.3%) compared to using pretrained weights.
 7. **Using LoRA (Low-Rank Adaptation) vs Training All Layers (Row 11 vs. 5, and row 12 vs. 9)**: Keeping the model frozen and adding trainable LoRA layers (see [Appendix E](../../appendix-E/01_main-chapter-code/appendix-E.ipynb) for details) is a viable alternative to training all model parameters and even improves the performance by 1 percentage point (row 11 vs. 5). As can be seen from the ~1% smaller gap between the training and validation accuracy when using LoRA, this is likely due to less overfitting. Moreover, using LoRA is also more memory-efficient because fewer parameters have to be updated. When training the larger model (row 12 vs. 9), we can also see that LoRA trains much faster (5.79 min instead of 8.12 min).
 8. **Padding Input to Full Context Length vs. Longest Training Example (Row 1 vs. 13)**: Padding the input to the full supported context length results in significantly worse performance.
-9. **Padding vs no padding (Row 1 vs. 14 and 15)**: The `--no_padding` option disables the padding in the dataset, which requires training the model with a batch size of 1 since the inputs have variable lengths. This results in a better test accuracy but takes longer to train. In row 15, we additionally enable gradient accumulation with 8 steps to achieve the same effective batch size as in the other experiments, which helps reduce overfitting and slightly boost the test set accuracy.
-10. **Disabling the causal attention mask (Row 1 vs. 16)**: Disables the causal attention mask used in the multi-head attention module. This means all tokens can attend to all other tokens. The model accuracy is slightly improved compared to the GPT model with the causal mask.
-11. **Ignoring the padding indices in the loss and backpropagation (Row 1 vs. 17)**: Setting `--ignore_index 50256` excludes the `<|endoftext|>` padding tokens from the `cross_entropy` loss function in PyTorch. In this case, it does not have any effect because we replaced the output layers so that the token IDs are either 0 or 1 for the binary classification example. However, this setting is useful when instruction finetuning models in chapter 7.
-12. **Averaging the embeddings over all tokens (Row 1 vs. 18)**: Setting `--average_embeddings` will average the embeddings over all tokens. If this option is not used (the default), only the output embeddings at the chosen token position (specified by `--trainable_token_pos`) are considered; for example, the embeddings of the last token. Enabling `--average_embeddings` will mean-pool the embeddings of all tokens into the position chosen by `--trainable_token_pos` (the last token by default). As we can see, this improves the performance from 95.00% to 96.33% with only a minimal increase in run time (0.28 min to 0.32 min) and might be worthwhile considering in practice.
+9. **Padding vs no padding (Row 1 vs. 14, 15, and 16)**: The `--no_padding` option disables the padding in the dataset, which requires training the model with a batch size of 1 since the inputs have variable lengths. This results in a better test accuracy but takes longer to train. In row 15, we additionally enable gradient accumulation with 8 steps to achieve the same effective batch size as in the other experiments, which helps reduce overfitting and slightly boost the test set accuracy. In row 16, we apply padding but select the output token position based on the last non-padding token. Row 16 should be mathematically equivalent to row 15, which uses gradient accumulation; however, because gradient accumulation can behave subtly differently when the number of tokens per batch is unequal, small discrepancies may occur (this is discussed in [this](https://unsloth.ai/blog/gradient) blog post).
+10. **Disabling the causal attention mask (Row 1 vs. 17)**: Disables the causal attention mask used in the multi-head attention module. This means all tokens can attend to all other tokens. The model accuracy is slightly improved compared to the GPT model with the causal mask.
+11. **Ignoring the padding indices in the loss and backpropagation (Row 1 vs. 18)**: Setting `--ignore_index 50256` excludes the `<|endoftext|>` padding tokens from the `cross_entropy` loss function in PyTorch. In this case, it does not have any effect because we replaced the output layers so that the token IDs are either 0 or 1 for the binary classification example. However, this setting is useful when instruction finetuning models in chapter 7.
+12. **Averaging the embeddings over all tokens (Row 1 vs. 19)**: Setting `--average_embeddings` will average the embeddings over all tokens. If this option is not used (the default), only the output embeddings at the chosen token position (specified by `--trainable_token_pos`) are considered; for example, the embeddings of the last token. Enabling `--average_embeddings` will mean-pool the embeddings of all tokens into the position chosen by `--trainable_token_pos` (the last token by default). As we can see, this improves the performance from 95.00% to 96.33% with only a minimal increase in run time (0.28 min to 0.32 min) and might be worthwhile considering in practice.
```
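Point 9 hinges on gradient accumulation, so a minimal sketch of the idea may help; the model and data here are hypothetical stand-ins (the real script enables accumulation via `--accumulation_steps`). Averaging the loss over several size-1 micro-batches before a single optimizer step approximates one update with a larger batch, which is why rows 15 and 16 should be close.

```python
import torch

model = torch.nn.Linear(10, 2)  # stand-in for the GPT classifier head
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
accumulation_steps = 8

for step in range(accumulation_steps):
    x = torch.randn(1, 10)  # micro-batch of size 1 (no padding needed)
    y = torch.randint(0, 2, (1,))
    loss = torch.nn.functional.cross_entropy(model(x), y)
    (loss / accumulation_steps).backward()  # scale so gradients average, not sum

optimizer.step()       # one update with an effective batch size of 8
optimizer.zero_grad()
```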

ch06/02_bonus_additional-experiments/additional_experiments.py (+70 -24)

```diff
@@ -184,16 +184,34 @@ def calc_loss_batch(input_batch, target_batch, model, device,
                     trainable_token_pos=-1, ignore_index=-100, average_embeddings=False):
     input_batch, target_batch = input_batch.to(device), target_batch.to(device)
 
-    model_output = model(input_batch)
-    if average_embeddings:
-        # Average over the sequence dimension (dim=1)
-        logits = model_output.mean(dim=1)
+    if trainable_token_pos == "flexible":  # Selects the last tokens before the padding tokens
+        # From https://github.com/rasbt/LLMs-from-scratch/discussions/434
+        # Find the last non-padding token for each sequence in the batch
+        pad_token_id = 50256  # <|endoftext|> token used for padding
+        mask = input_batch != pad_token_id
+        last_token_pos = mask.sum(dim=1) - 1  # Get position of last real token
+
+        # Get model outputs
+        logits = model(input_batch)  # shape: [batch_size, seq_len, num_classes]
+
+        # Select the logits corresponding to the last real token of each sequence
+        batch_size = logits.size(0)
+        selected_logits = logits[torch.arange(batch_size), last_token_pos]
+
+        loss = torch.nn.functional.cross_entropy(selected_logits, target_batch)
+        return loss
+
     else:
-        # Select embeddings at the specified token position
-        logits = model_output[:, trainable_token_pos, :]
+        model_output = model(input_batch)
+        if average_embeddings:
+            # Average over the sequence dimension (dim=1)
+            logits = model_output.mean(dim=1)
+        else:
+            # Select embeddings at the specified token position
+            logits = model_output[:, trainable_token_pos, :]
 
-    loss = torch.nn.functional.cross_entropy(logits, target_batch, ignore_index=ignore_index)
-    return loss
+        loss = torch.nn.functional.cross_entropy(logits, target_batch, ignore_index=ignore_index)
+        return loss
 
 
 def calc_loss_loader(data_loader, model, device,
```
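The last-non-padding-token selection added above can be sanity-checked in isolation. The toy example below uses invented token IDs; note the implicit assumption that padding is contiguous and on the right, so the count of non-padding tokens minus one is the index of the last real token.

```python
import torch

pad_token_id = 50256
input_batch = torch.tensor([
    [11, 22, 33, pad_token_id],  # last real token at index 2
    [11, 22, 33, 44],            # no padding: last real token at index 3
])
mask = input_batch != pad_token_id
last_token_pos = mask.sum(dim=1) - 1
print(last_token_pos)  # tensor([2, 3])

# Stand-in for the model output: [batch_size, seq_len, num_classes]
logits = torch.randn(2, 4, 2)
selected_logits = logits[torch.arange(2), last_token_pos]
print(selected_logits.shape)  # torch.Size([2, 2])
```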
```diff
@@ -231,24 +249,48 @@ def calc_accuracy_loader(data_loader, model, device, num_batches=None,
         num_batches = len(data_loader)
     else:
         num_batches = min(num_batches, len(data_loader))
-    for i, (input_batch, target_batch) in enumerate(data_loader):
-        if i < num_batches:
-            input_batch, target_batch = input_batch.to(device), target_batch.to(device)
 
-            model_output = model(input_batch)
-            if average_embeddings:
-                # Average over the sequence dimension (dim=1)
-                logits = model_output.mean(dim=1)
+    if trainable_token_pos == "flexible":
+        for i, (input_batch, target_batch) in enumerate(data_loader):
+            if i < num_batches:
+                input_batch, target_batch = input_batch.to(device), target_batch.to(device)
+
+                # Find the last non-padding token for each sequence in the batch
+                pad_token_id = 50256  # <|endoftext|> token used for padding
+                mask = input_batch != pad_token_id
+                last_token_pos = mask.sum(dim=1) - 1  # Get position of last real token
+
+                with torch.no_grad():
+                    logits = model(input_batch)  # shape: [batch_size, seq_len, num_classes]
+                # Select the logits corresponding to the last real token of each sequence
+                batch_size = logits.size(0)
+                selected_logits = logits[torch.arange(batch_size), last_token_pos]
+                predicted_labels = torch.argmax(selected_logits, dim=-1)
+
+                num_examples += predicted_labels.shape[0]
+                correct_predictions += (predicted_labels == target_batch).sum().item()
             else:
-                # Select embeddings at the specified token position
-                logits = model_output[:, trainable_token_pos, :]
-
-            predicted_labels = torch.argmax(logits, dim=-1)
+                break
 
-            num_examples += predicted_labels.shape[0]
-            correct_predictions += (predicted_labels == target_batch).sum().item()
-        else:
-            break
+    else:
+        for i, (input_batch, target_batch) in enumerate(data_loader):
+            if i < num_batches:
+                input_batch, target_batch = input_batch.to(device), target_batch.to(device)
+
+                model_output = model(input_batch)
+                if average_embeddings:
+                    # Average over the sequence dimension (dim=1)
+                    logits = model_output.mean(dim=1)
+                else:
+                    # Select embeddings at the specified token position
+                    logits = model_output[:, trainable_token_pos, :]
+
+                predicted_labels = torch.argmax(logits, dim=-1)
+
+                num_examples += predicted_labels.shape[0]
+                correct_predictions += (predicted_labels == target_batch).sum().item()
+            else:
+                break
     return correct_predictions / num_examples
```
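For illustration, here is a hypothetical end-to-end call through the "flexible" path, assuming the patched `calc_loss_batch` above is in scope; `DummyModel` is invented for this sketch and merely stands in for the GPT classifier.

```python
import torch

class DummyModel(torch.nn.Module):
    """Stand-in model: maps token IDs to per-position class logits."""
    def __init__(self, vocab_size=50257, emb_dim=8, num_classes=2):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab_size, emb_dim)
        self.out = torch.nn.Linear(emb_dim, num_classes)

    def forward(self, x):
        return self.out(self.emb(x))  # shape: [batch_size, seq_len, num_classes]

model = DummyModel()
input_batch = torch.tensor([[11, 22, 50256, 50256]])  # one right-padded sequence
target_batch = torch.tensor([1])
loss = calc_loss_batch(input_batch, target_batch, model, device="cpu",
                       trainable_token_pos="flexible")
print(loss.item())  # cross-entropy at the last non-padding position (index 1)
```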

```diff
@@ -386,7 +428,7 @@ def replace_linear_with_lora(model, rank, alpha, alternative=False):
         type=str,
         default="last",
         help=(
-            "Which token position to train. Options: 'first', 'last'."
+            "Which token position to train. Options: 'first', 'last', 'flexible'."
         )
     )
     parser.add_argument(
```

```diff
@@ -483,6 +525,10 @@ def replace_linear_with_lora(model, rank, alpha, alternative=False):
         args.trainable_token_pos = 0
     elif args.trainable_token_pos == "last":
         args.trainable_token_pos = -1
+    # The "flexible" setting selects the last tokens before the padding tokens
+    # See https://github.com/rasbt/LLMs-from-scratch/discussions/434
+    elif args.trainable_token_pos == "flexible":
+        args.trainable_token_pos = "flexible"
     else:
         raise ValueError("Invalid --trainable_token_pos argument")
```
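The mapping above keeps `"flexible"` as a string sentinel while `"first"` and `"last"` become integer indices. The condensed helper below is hypothetical and only mirrors how the two kinds of values are consumed in `calc_loss_batch` and `calc_accuracy_loader`.

```python
import torch

def select_logits(model_output, trainable_token_pos, input_batch, pad_token_id=50256):
    # String sentinel: pick the last non-padding position per sequence
    if trainable_token_pos == "flexible":
        last_token_pos = (input_batch != pad_token_id).sum(dim=1) - 1
        return model_output[torch.arange(model_output.size(0)), last_token_pos]
    # Integer index (0 for "first", -1 for "last"): same position for all sequences
    return model_output[:, trainable_token_pos, :]

logits = torch.randn(2, 4, 2)  # [batch_size, seq_len, num_classes]
batch = torch.tensor([[11, 22, 50256, 50256], [11, 22, 33, 44]])
print(select_logits(logits, "flexible", batch).shape)  # torch.Size([2, 2])
print(select_logits(logits, -1, batch).shape)          # torch.Size([2, 2])
```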
