diff --git a/notebooks/camera_ready/virtual_eve/01 - Finetune Virtual EVE.ipynb b/notebooks/camera_ready/virtual_eve/01 - Finetune Virtual EVE.ipynb
index 326f428..a84e2c6 100644
--- a/notebooks/camera_ready/virtual_eve/01 - Finetune Virtual EVE.ipynb
+++ b/notebooks/camera_ready/virtual_eve/01 - Finetune Virtual EVE.ipynb
@@ -241,7 +241,7 @@
    "metadata": {},
    "source": [
     "## Create a finetuning model\n",
-    "Now time to get serious, this model will become our model's \"head.\" The objective of this component is to take now a set of finetuned embeddings and have them predict our true science task. This model was created during FDL-X 2023 and is used as an quick example. It has a switching mode that transitions the model from linear to influenced by a CNN after a defned number of epochs. We're going to do this with Pytorch Lighning for keep hardware agnostic. \n",
+    "The objective of this component is to take a set of finetuned embeddings and have them predict our true science task. We'll use an existing SDO model (created during FDL-X 2023) with a switching mode that transitions the model from purely linear to CNN-influenced after a defined number of epochs. This model will become our model's \"head.\" We use PyTorch Lightning to keep the implementation hardware agnostic.\n",
     "\n",
     "We first import necessary components."
    ]
   },
   {
@@ -294,7 +294,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Then the CNN efficientnet model."
+    "Then the EfficientNet CNN model."
    ]
   },
   {
@@ -625,7 +625,7 @@
     "### Another option: Training only from the latents\n",
     "![Figure 3: Architectural Diagram of Virtual EVE Training with Latents](assets/architecture_diags_virtualeve_latents.svg)\n",
     "\n",
-    "As the this foundation model includes an autoencoder archetecture use of the decoder is optional. The latents created by the autoencoder can be used directly, the below is an naive implementation. For a real-world use case you'd want to design the model around these new input."
+    "As this foundation model includes an autoencoder architecture, use of the decoder is optional. The latents created by the autoencoder can be used directly; below is a naive implementation. For a real-world use case you'd want to design the model around these new inputs."
    ]
   },
   {
@@ -794,7 +794,7 @@
      "name": "stderr",
      "output_type": "stream",
      "text": [
-      "Trainer will use only 1 of 4 GPUs because it is running inside an interactive / notebook environment. You may try to set `Trainer(devices=4)` but please note that multi-GPU inside interactive / notebook environments is considered experimental and unstable. Your mileage may vary.\n",
+      "Trainer will use only 1 of 4 GPUs because it is running inside an interactive / notebook environment. You may try to set `Trainer(devices=4)` but please note that multi-GPU inside interactive / notebook environments is considered experimental and unstable.\n",
       "GPU available: True (cuda), used: True\n",
       "TPU available: False, using: 0 TPU cores\n",
       "HPU available: False, using: 0 HPUs\n",
@@ -865,7 +865,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Now over to you? What would you like to see with models like these?"
+    "## If you have questions, please join the conversation on Hugging Face: https://huggingface.co/SpaceML/SDO-FM"
    ]
   },
   {
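
The rewritten cell at line 241 describes a head that stays purely linear for the first few epochs and then lets a CNN branch influence the output. A minimal PyTorch Lightning sketch of that switching idea, assuming hypothetical names (`SwitchingHead`, `switch_epoch`, the toy CNN branch) rather than the actual FDL-X 2023 implementation:

```python
# Sketch only: illustrates the epoch-based linear-to-CNN switch described in
# the markdown cell; names and shapes are assumptions, not the SDO-FM code.
import torch
import torch.nn as nn
import pytorch_lightning as pl


class SwitchingHead(pl.LightningModule):
    def __init__(self, embed_dim: int, n_outputs: int, switch_epoch: int = 10):
        super().__init__()
        self.switch_epoch = switch_epoch
        self.linear = nn.Linear(embed_dim, n_outputs)
        # Toy CNN branch treating the embedding as a 1-channel sequence.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(16),
            nn.Flatten(),
            nn.Linear(8 * 16, n_outputs),
        )

    def forward(self, x):
        out = self.linear(x)
        if self.current_epoch >= self.switch_epoch:
            # After the switch epoch, the CNN branch influences the prediction.
            out = out + self.cnn(x.unsqueeze(1))
        return out

    def training_step(self, batch, batch_idx):
        embeddings, targets = batch
        loss = nn.functional.mse_loss(self(embeddings), targets)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```

Gating on `self.current_epoch` keeps the switch hardware agnostic, since Lightning tracks epochs identically whether the Trainer runs on CPU, GPU, or TPU.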
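
The "Training only from the latents" cell at line 625 notes that the decoder is optional and the autoencoder's latents can be consumed directly. A naive sketch of such a head, under the assumption that latents arrive as flat vectors (`LatentHead` and `latent_dim` are illustrative names, not the notebook's code):

```python
# Sketch only: regress the science target directly from the latent vector,
# skipping the decoder. A real-world model would be designed around the
# actual structure of these latents, as the cell notes.
import torch.nn as nn


class LatentHead(nn.Module):
    def __init__(self, latent_dim: int, n_outputs: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.GELU(),
            nn.Linear(256, n_outputs),
        )

    def forward(self, z):
        # z: (batch, latent_dim) latent vectors from the foundation model.
        return self.net(z)
```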