Commit 8f7b979

Updates docs for geneformer training
Signed-off-by: Jonathan Mitchell <[email protected]>
1 parent: 04d23fb

9 files changed (+15, -33 lines)

docs/docs/models/geneformer.md

Lines changed: 15 additions & 33 deletions
@@ -1,11 +1,4 @@
 # Geneformer
-!!! note "Current checkpoints trained in BioNeMo1"
-
-    This document references performance numbers and runtime engines that are from the bionemo v1 variant of the model.
-    These numbers will be updated in a coming release to reflect the new bionemo v2 codebase. The model architecture and
-    training information will be the same, as checkpoints are converted from bionemo v1 format to v2 format. Benchmarks below
-    are annotated with which version of bionemo generated them. Accuracy should be the same within a small epsilon
-    since we have tests in place showing model equivalency between the two versions.
 
 ## Model Overview
 
@@ -156,31 +149,20 @@ NVIDIA believes Trustworthy AI is a shared responsibility and we have establishe
 ## Training diagnostics
 
 ### geneformer-10M-240530
+<!-- WandB Logs: https://wandb.ai/clara-discovery/Geneformer-pretraining-jsjconfigs/runs/i8LWOctg?nw=nwuserjomitchell -->
+Training was performed on 8 servers with 8 A100 GPUs each, for a total of 81,485 steps on the CELLxGENE split with a per-GPU micro batch size of 32 and a global batch size of 2048. Training took a total of 4 days, 8 hours of wallclock time. As can be seen in the following images, the training and validation curves both decreased fairly smoothly throughout the course of training.
 
-This checkpoint was trained for approximately 11 epochs through the CELLxGENE split. Training was performed on 8 servers with 8 A100 GPUs each for a total of 115430 steps of per-gpu micro batch size 32 and global batch size of 2048. Training took a total of 1 day, 20 hours and 19 minutes of wallclock time. As can be seen in the following image, training and validation curves both decreased fairly smoothly throughout the course of training. In fact validation (blue) and training (orange) loss were both still decreasing at the end of 11 epochs through the dataset. The model could likely be trained for more epochs without overfitting.
-![Validation and training losses both decreased smoothly through training](../assets/old_images/sc_fm/geneformer-10m-240530-val-train-loss.png)
-
-!!! note "Training curves from BioNeMo1"
-
-    Note that these curves were generated on BioNeMo1. We see the same general training curves in our initial testing of
-    BioNeMo2, however. In the following figure the blue line is the previous training run of the 10M model and the
-    red curve is an equivalent training run on BioNeMo2. As we release new checkpoints they will be trained on BioNeMo2.
-
-![Training curve equivalence](../assets/images/geneformer/loss_curve_new_v_old_geneformer_64_node_10M.png)
-
+![Training Loss Geneformer 10M](../assets/images/geneformer/geneformer_10m_training_loss.png)
+![Validation Loss Geneformer 10M](../assets/images/geneformer/geneformer_10m_val_loss.png)
+
 
+232 minutes.
 ### geneformer-106M-240530
+<!-- WandB Logs https://wandb.ai/clara-discovery/geneformer-pretraining-106m/runs/3uydymaa?nw=nwuserjomitchell -->
+This checkpoint was trained for 76,549 steps on the CELLxGENE split. Training was performed on 1 server with 8 H100 GPUs, with a per-GPU micro batch size of 16 and a global batch size of 128. Training took a total of 3 hours and 9 minutes of wallclock time. As can be seen in the following images, the training and validation curves both decreased fairly smoothly throughout the course of training.
 
-This checkpoint was trained for approximately 11 epochs through the CELLxGENE split. Training was performed on 16 servers with 8 A100 GPUs each for a total of 115430 steps of per-gpu micro batch size 16 and global batch size of 2048. Training took a total of 3 days, 18 hours and 55 minutes of wallclock time. As can be seen in the following image, training and validation curves both decreased fairly smoothly throughout the course of training. In fact validation (blue) and training (orange) loss were both still decreasing at the end of 11 epochs through the dataset. The model could likely be trained for more epochs without overfitting.
-![Validation and training losses both decreased smoothly through training](../assets/old_images/sc_fm/geneformer-106m-240530-val-train-loss.png)
-
-Additionally, validation loss decreased both faster and continued to decrease at the same improved rate throughout training in the 106M parameter model (red) as compared to the 10M parameter model (blue). It would be interesting to test even larger models to see if we continue to observe improved performance in larger models.
-![106M parameter model outperformed 10M parameter model](../assets/old_images/sc_fm/geneformer-240530-val-comparison.png)
-
-!! note "Training curves from BioNeMo1"
-
-    As stated in the previous section, the figures are from our BioNeMo1 code base where these checkpoints were originally
-    trained. As we release new checkpoints they will be trained on BioNeMo2.
+![Training Loss Geneformer 106M](../assets/images/geneformer/geneformer_105m_training_loss.png)
+![Validation Loss Geneformer 106M](../assets/images/geneformer/geneformer_106m_val_loss.png)
 
 ## Benchmarking
 
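The batch-size figures added in this hunk are internally consistent: with one data-parallel rank per GPU and no gradient accumulation (an assumption; the parallelism configuration is not stated in the doc), the global batch size is simply the GPU count times the per-GPU micro batch size. A quick sanity check:

```python
# Sanity check on the global batch sizes quoted above.
# Assumes pure data parallelism with no gradient accumulation -- an assumption,
# since the doc does not spell out the parallelism configuration.

def global_batch_size(servers: int, gpus_per_server: int, micro_batch: int, grad_accum: int = 1) -> int:
    return servers * gpus_per_server * micro_batch * grad_accum

# geneformer-10M-240530: 8 servers x 8 A100s, micro batch 32 -> 2048
assert global_batch_size(servers=8, gpus_per_server=8, micro_batch=32) == 2048

# geneformer-106M-240530: 1 server x 8 H100s, micro batch 16 -> 128
assert global_batch_size(servers=1, gpus_per_server=8, micro_batch=16) == 128
```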

@@ -192,9 +174,9 @@ The following describes the bert MLM token loss. Like in the original BERT paper
 
 | Model Description | Token Loss (lower is better) |
 | ---------------------- | ---------------------------- |
-| Baseline geneformer | 2.26* |
-| geneformer-10M-240530 | 2.64 |
-| geneformer-106M-240530 | 2.34 |
+| Baseline geneformer | 3.206* |
+| geneformer-10M-240530 | 3.18 |
+| geneformer-106M-240530 | 2.89 |
 
 !!! bug "Baseline Geneformer was recently updated on huggingface making loss comparisons challenging."
 
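For context on the table above: the reported numbers are masked-language-model (BERT MLM) token losses, i.e. cross-entropy computed only over the masked positions. A minimal sketch of that computation in PyTorch, with hypothetical tensor names and a toy vocabulary size rather than BioNeMo's actual loss code:

```python
import torch
import torch.nn.functional as F

def mlm_token_loss(logits: torch.Tensor, labels: torch.Tensor, ignore_index: int = -100) -> torch.Tensor:
    """Mean cross-entropy over masked token positions only.

    logits: [batch, seq_len, vocab_size] model outputs.
    labels: [batch, seq_len] true token ids at masked positions, ignore_index elsewhere.
    """
    return F.cross_entropy(logits.view(-1, logits.size(-1)), labels.view(-1), ignore_index=ignore_index)

# Toy example: 2 sequences of 8 tokens, illustrative vocab of 512 (not Geneformer's real vocab).
logits = torch.randn(2, 8, 512)
labels = torch.full((2, 8), -100, dtype=torch.long)  # -100 marks unmasked positions, excluded from the loss
labels[0, 3], labels[1, 5] = 17, 42                  # masked positions carry the true token ids
print(float(mlm_token_loss(logits, labels)))
```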

@@ -222,8 +204,8 @@ Elmentaite et al. (2020), Developmental Cell. This dataset contains approximatel
 
 For more details see the example notebook titled Geneformer-celltype-classification-example.ipynb
 
-![F1-score for both released models, a random baseline, and a PCA based transformation of the raw expression.](../assets/images/geneformer/F1-score-models.png)
-![Average accuracy across cell types for both released models, a random baseline, and a PCA based transformation of the raw expression.](../assets/images/geneformer/average-accuracy-models.png)
+![F1-score for both released models, a random baseline, and a PCA based transformation of the raw expression.](../assets/images/geneformer/F1-score-models-04-18-25.png)
+![Average accuracy across cell types for both released models, a random baseline, and a PCA based transformation of the raw expression.](../assets/images/geneformer/average-accuracy-models-04-18-25.png)
 
 ### Performance Benchmarks
 
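The F1 and average-accuracy figures swapped in here compare the released checkpoints against a random baseline and a PCA transformation of the raw expression. A hedged sketch of how such a PCA-plus-classifier baseline is commonly set up (illustrative scikit-learn code on synthetic data, not the contents of Geneformer-celltype-classification-example.ipynb):

```python
# Illustrative PCA baseline for cell-type classification: reduce the raw
# cells x genes expression matrix with PCA, then fit a simple classifier.
# Synthetic data and hypothetical variable names -- a sketch, not the notebook's code.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
expression = rng.poisson(1.0, size=(500, 2000)).astype(float)  # dummy counts matrix
cell_types = rng.integers(0, 5, size=500)                      # dummy labels for 5 cell types

X_train, X_test, y_train, y_test = train_test_split(expression, cell_types, test_size=0.2, random_state=0)

baseline = make_pipeline(StandardScaler(), PCA(n_components=50), LogisticRegression(max_iter=1000))
baseline.fit(X_train, y_train)
pred = baseline.predict(X_test)

print("macro F1:", f1_score(y_test, pred, average="macro"))
print("accuracy:", accuracy_score(y_test, pred))
```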
