
Commit

Update 01 - Finetune Virtual EVE.ipynb
aespaldi authored Oct 21, 2024
1 parent 38f2b06 commit c11f0fe
Showing 1 changed file with 5 additions and 5 deletions.
@@ -241,7 +241,7 @@
"metadata": {},
"source": [
"## Create a finetuning model\n",
"Now time to get serious, this model will become our model's \"head.\" The objective of this component is to take now a set of finetuned embeddings and have them predict our true science task. This model was created during FDL-X 2023 and is used as an quick example. It has a switching mode that transitions the model from linear to influenced by a CNN after a defned number of epochs. We're going to do this with Pytorch Lighning for keep hardware agnostic. \n",
"The objective of this component is to take now a set of finetuned embeddings and have them predict our true science task. We'll use an existing SDO model (created during FDL-X 2023) with a switching mode that transitions the model from linear to influenced by a CNN after a defned number of epochs. This model will become our model's \"head.\" We use Pytorch Lighning hardware agnostic implementation. \n",
"\n",
"We first import necessary components."
]
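The switching behaviour described above can be sketched in plain PyTorch (the actual FDL-X 2023 head and its Lightning wrapper differ; all names and sizes here are illustrative):

```python
import torch
from torch import nn

class SwitchingHead(nn.Module):
    """Toy head that predicts targets from embeddings: linear-only at
    first, then blended with a small CNN branch once `current_epoch`
    reaches `switch_epoch`. Sizes and names are placeholders."""

    def __init__(self, embed_dim=64, out_dim=14, switch_epoch=5):
        super().__init__()
        self.linear = nn.Linear(embed_dim, out_dim)
        # 1-D CNN branch over the embedding vector (illustrative)
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 4, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(4 * embed_dim, out_dim),
        )
        self.switch_epoch = switch_epoch
        self.current_epoch = 0  # a LightningModule tracks this for you

    def forward(self, x):
        out = self.linear(x)
        if self.current_epoch >= self.switch_epoch:
            # after the switch, the CNN branch influences the output
            out = out + self.cnn(x.unsqueeze(1))
        return out

head = SwitchingHead()
x = torch.randn(8, 64)
before = head(x)           # linear-only phase
head.current_epoch = 5
after = head(x)            # CNN-influenced phase
print(before.shape, after.shape)
```

In a real LightningModule the epoch counter comes from `self.current_epoch`, so the switch happens automatically during `Trainer.fit`.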
@@ -294,7 +294,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Then the CNN efficientnet model."
"Then the CNN efficientNet model."
]
},
{
@@ -625,7 +625,7 @@
"### Another option: Training only from the latents\n",
"![Figure 3: Architectural Diagram of Virtual EVE Training with Latents](assets/architecture_diags_virtualeve_latents.svg)\n",
"\n",
"As the this foundation model includes an autoencoder archetecture use of the decoder is optional. The latents created by the autoencoder can be used directly, the below is an naive implementation. For a real-world use case you'd want to design the model around these new input."
"As this foundation model includes an autoencoder archetecture use of the decoder is optional. The latents created by the autoencoder can be used directly, the below is an naive implementation. For a real-world use case you'd want to design the model around these new inputs."
]
},
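A minimal sketch of the naive latent-only approach, assuming the encoder yields flat latent vectors (`latent_dim` and `target_dim` are placeholders, not SDO-FM's real sizes):

```python
import torch
from torch import nn

latent_dim, target_dim = 128, 14  # illustrative sizes only

# Regress the science target directly from autoencoder latents,
# skipping the decoder entirely.
latent_head = nn.Sequential(
    nn.Linear(latent_dim, 256),
    nn.ReLU(),
    nn.Linear(256, target_dim),
)

latents = torch.randn(4, latent_dim)  # stand-in for encoder output
prediction = latent_head(latents)
print(prediction.shape)
```

Because this bypasses the decoder, the head must be designed around the latent geometry rather than around image-like inputs.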
{
@@ -794,7 +794,7 @@
"name": "stderr",
"output_type": "stream",
"text": [
"Trainer will use only 1 of 4 GPUs because it is running inside an interactive / notebook environment. You may try to set `Trainer(devices=4)` but please note that multi-GPU inside interactive / notebook environments is considered experimental and unstable. Your mileage may vary.\n",
"Trainer will use only 1 of 4 GPUs because it is running inside an interactive / notebook environment. You may try to set `Trainer(devices=4)` but please note that multi-GPU inside interactive / notebook environments is considered experimental and unstable. \n",
"GPU available: True (cuda), used: True\n",
"TPU available: False, using: 0 TPU cores\n",
"HPU available: False, using: 0 HPUs\n",
@@ -865,7 +865,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Now over to you? What would you like to see with models like these?"
"## If you have questions, please join the conversation on Hugging Face: https://huggingface.co/SpaceML/SDO-FM "
]
},
{

