Training was performed on 8 servers with 8 A100 GPUs each for a total of 81,485 steps using the CELLxGENE split, with a per-GPU micro batch size of 32 and a global batch size of 2,048. Training took 4 days and 8 hours of wallclock time. As can be seen in the following images, the training and validation curves both decreased fairly smoothly over the course of training.

This checkpoint was trained for approximately 11 epochs through the CELLxGENE split. Training was performed on 8 servers with 8 A100 GPUs each for a total of 115,430 steps, with a per-GPU micro batch size of 32 and a global batch size of 2,048. Training took 1 day, 20 hours, and 19 minutes of wallclock time. As can be seen in the following image, the training and validation curves both decreased fairly smoothly over the course of training. In fact, validation (blue) and training (orange) loss were both still decreasing after 11 epochs through the dataset, so the model could likely be trained for more epochs without overfitting.
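
Before moving on to the curves, a quick sanity check on the batch-size numbers above. The snippet below is a back-of-envelope sketch, not taken from the training scripts: it assumes pure data parallelism across all GPUs with a single gradient-accumulation step, and simply verifies that the per-GPU micro batch size and GPU count multiply to the reported global batch size.

```python
# Back-of-envelope check (an assumption-laden sketch, not the actual training
# configuration code): with pure data parallelism and no gradient accumulation,
# the global batch size is the per-GPU micro batch size times the number of GPUs.
num_nodes = 8            # 8 servers
gpus_per_node = 8        # 8 A100 GPUs each
micro_batch_size = 32    # per-GPU micro batch size
grad_accum_steps = 1     # assumed; not stated in the text

data_parallel_size = num_nodes * gpus_per_node                                # 64 GPUs
global_batch_size = micro_batch_size * data_parallel_size * grad_accum_steps
print(global_batch_size)  # 2048, matching the reported global batch size
```
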
!!! note "Training curves from BioNeMo1"

    Note that these curves were generated on BioNeMo1; however, we see the same general training curves in our initial
    testing of BioNeMo2. In the following figure, the blue line is the previous training run of the 10M model and the
    red curve is an equivalent training run on BioNeMo2. As we release new checkpoints, they will be trained on BioNeMo2.

This checkpoint was trained for approximately 35,650 steps using the CELLxGENE split. Training was performed on 16 servers with 8 A100 GPUs each, with a per-GPU micro batch size of 16 and a global batch size of 2,048. Training took a total of 8 hours of wallclock time. As can be seen in the following image, the training and validation curves both decreased fairly smoothly over the course of training.

This checkpoint was trained for approximately 11 epochs through the CELLxGENE split. Training was performed on 16 servers with 8 A100 GPUs each for a total of 115,430 steps, with a per-GPU micro batch size of 16 and a global batch size of 2,048. Training took 3 days, 18 hours, and 55 minutes of wallclock time. As can be seen in the following image, the training and validation curves both decreased fairly smoothly over the course of training. In fact, validation (blue) and training (orange) loss were both still decreasing after 11 epochs through the dataset, so the model could likely be trained for more epochs without overfitting.
Additionally, validation loss in the 106M-parameter model (red) decreased faster than in the 10M-parameter model (blue) and continued to improve at that faster rate throughout training. It would be interesting to test even larger models to see whether this trend of improved performance with scale continues.
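
Stepping back to the configuration numbers for this run, the sketch below is the same kind of back-of-envelope arithmetic as before (assuming pure data parallelism and no gradient accumulation, which is not taken from the training scripts): halving the micro batch size to 16 while doubling the GPU count to 128 keeps the global batch size at 2,048, and multiplying steps by the global batch size gives a rough count of the cells processed over training.

```python
# Hedged back-of-envelope arithmetic for the 16-server run; assumes pure data
# parallelism with no gradient accumulation (not the actual training scripts).
num_gpus = 16 * 8                                   # 128 A100 GPUs
micro_batch_size = 16
global_batch_size = micro_batch_size * num_gpus     # 2048, as reported

steps = 115_430
samples_seen = steps * global_batch_size            # ~236M cells processed in total
approx_epochs = 11
print(global_batch_size, samples_seen, samples_seen // approx_epochs)
# The last value (~21M) is the implied number of cells per epoch in this split.
```
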
!!! note "Training curves from BioNeMo1"

    As stated in the previous section, these figures are from our BioNeMo1 code base, where these checkpoints were
    originally trained. As we release new checkpoints, they will be trained on BioNeMo2.

## Benchmarking
The following describes the BERT MLM token loss which, as in the original BERT paper, is computed only over the masked tokens.

| Model Description | Token Loss (lower is better) |
| ----------------- | ---------------------------- |

!!! bug "Baseline Geneformer was recently updated on Hugging Face, making loss comparisons challenging."
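
For readers unfamiliar with the metric, the sketch below shows one common way an MLM token loss is computed: cross-entropy evaluated only at masked positions, with unmasked positions excluded via an ignore label. It is a minimal illustration under stated assumptions (the tensor shapes and the `-100` ignore value are conventions borrowed from typical BERT implementations), not the exact BioNeMo evaluation code.

```python
# Minimal sketch of a BERT-style MLM token loss: cross-entropy over masked
# positions only, with all other positions ignored via ignore_index.
# Shapes and the -100 convention are assumptions, not the BioNeMo internals.
import torch
import torch.nn.functional as F

def mlm_token_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Mean cross-entropy over masked tokens only.

    logits: [batch, seq_len, vocab_size] raw model outputs.
    labels: [batch, seq_len] with the true token id at masked positions
            and -100 everywhere else (unmasked positions are ignored).
    """
    vocab_size = logits.size(-1)
    return F.cross_entropy(
        logits.reshape(-1, vocab_size),
        labels.reshape(-1),
        ignore_index=-100,  # skip unmasked positions
    )

# Toy usage: 2 sequences of length 4, vocab of 10, one masked token each.
logits = torch.randn(2, 4, 10)
labels = torch.full((2, 4), -100)
labels[0, 1] = 3
labels[1, 2] = 7
print(mlm_token_loss(logits, labels))
```
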
Elmentaite et al. (2020), Developmental Cell. This dataset contains approximately …

For more details, see the example notebook titled `Geneformer-celltype-classification-example.ipynb`.
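
As an illustration of the general recipe used for this kind of benchmark, the sketch below fits a simple classifier on frozen per-cell embeddings and reports cross-validated accuracy and macro F1. The arrays here are random placeholders standing in for embeddings extracted from a checkpoint and the corresponding cell-type labels; the notebook above shows the actual workflow.

```python
# Hedged sketch of a cell-type classification benchmark: a simple classifier is
# fit on frozen single-cell embeddings (e.g., produced by a Geneformer checkpoint)
# and scored with cross-validation. The data below are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# Placeholder data standing in for per-cell embeddings and cell-type labels.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 256))     # [n_cells, hidden_dim]
cell_types = rng.integers(0, 10, size=1000)   # integer-coded cell-type labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_validate(clf, embeddings, cell_types,
                        cv=5, scoring=["accuracy", "f1_macro"])
print("accuracy:", scores["test_accuracy"].mean())
print("macro F1:", scores["test_f1_macro"].mean())
```
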