From 938dd4bf2df1b326f69f936045631d0efc9e676e Mon Sep 17 00:00:00 2001 From: shaolong Date: Sat, 10 Aug 2024 23:50:35 +0200 Subject: [PATCH] initial change --- .../Autoencoder_EEG_Meeting_Minutes.jl | 74 +++++++----------- docs/literate/tutorials/gettingstarted.jl | 34 ++++++--- docs/make.jl | 66 ++++++++-------- .../Autoencoder_EEG_Meeting_Minutes.md | 76 +++++++------------ .../src/generated/tutorials/gettingstarted.md | 41 ++++++---- 5 files changed, 137 insertions(+), 154 deletions(-) diff --git a/docs/literate/tutorials/Autoencoder_EEG_Meeting_Minutes.jl b/docs/literate/tutorials/Autoencoder_EEG_Meeting_Minutes.jl index cbda347..5ee38f0 100644 --- a/docs/literate/tutorials/Autoencoder_EEG_Meeting_Minutes.jl +++ b/docs/literate/tutorials/Autoencoder_EEG_Meeting_Minutes.jl @@ -1,48 +1,26 @@ -# # Things to learn: -# 1. What is an autoencoder? -# 2. What is a deep autoencoder. -# 3. What is LSTM. - -# # Questions to be asked to the professor: -# 1. Do we have to build an autoencoder from scratch? -# 2. What do you mean by improvement? -# 3. Do we have to inculcate LSTM into our Autoencoder? - -# # 1. Learning rate -# - Too high a learning rate might cause the model to oscillate or even diverge during training, leading to poor convergence. This could result in poor quality of the graphics generated by the autoencoder, with significant loss of detail. -# - Too low a learning rate might lead to excessively slow training and the model might get stuck in local minima. This could cause the autoencoder to generate overly smooth images that miss some important features. - -# # 2. Epoch Number -# - Too few epochs might prevent the model from adequately learning the features in the data, affecting the quality and accuracy of the reconstructed graphics. -# - Too many epochs might lead to overfitting where the model performs well on the training data but poorly on new, unseen data. 
An overfitted autoencoder might generate images that are too reliant on specific noise and details of the training data, instead of learning a more general representation of the data.
-
-# # 3. Batch Size
-# - Smaller batch sizes generally provide a more accurate gradient estimation but might lead to a less stable training process and longer training times. Smaller batches might enable the model to learn more details, potentially leading to better performance in image reconstruction tasks.
-# - Larger batch sizes can speed up training and stabilize gradient estimations but may reduce the generalization ability of the model during training. In autoencoders, too large a batch might result in reconstructed images that lack detail and appear more blurred or smooth.
-
-# # Example Literate File
-# This is an example of a literate Julia file.
-# You can include code and text together to create documentation.
-
-# ## Section 1: Basic Arithmetic
-# Let's start with some basic arithmetic operations.
-
-x = 1 + 1
-println("1 + 1 = ", x)
-
-y = 2 * 3
-println("2 * 3 = ", y)
-
-# ## Section 2: Using Functions
-# We can also define and use functions.
-
-function greet(name)
-    return "Hello, $name"
-end
-
-println(greet("world"))
-
-# ## Section 3: Plotting (requires Plots.jl)
-# We can include plots as well.
-
-
+# Autoencoder
+# Team: Shaolong, Ramnath Rao Bekal, Rahul Bhaskaram
+# 31/8/2024
+
+# 1. Definition
+# 1. Learning rate
+# Too high a learning rate might cause the model to oscillate or even diverge during training, leading to poor convergence. This could result in poor quality of the graphics generated by the autoencoder, with significant loss of detail.
+# Too low a learning rate might lead to excessively slow training and the model might get stuck in local minima. This could cause the autoencoder to generate overly smooth images that miss some important features.
+# 2. 
Epoch Number
+# Too few epochs might prevent the model from adequately learning the features in the data, affecting the quality and accuracy of the reconstructed graphics.
+# Too many epochs might lead to overfitting where the model performs well on the training data but poorly on new, unseen data. An overfitted autoencoder might generate images that are too reliant on specific noise and details of the training data, instead of learning a more general representation of the data.
+# 3. Batch Size
+# Smaller batch sizes generally provide a more accurate gradient estimation but might lead to a less stable training process and longer training times. Smaller batches might enable the model to learn more details, potentially leading to better performance in image reconstruction tasks.
+# Larger batch sizes can speed up training and stabilize gradient estimations but may reduce the generalization ability of the model during training. In autoencoders, too large a batch might result in reconstructed images that lack detail and appear more blurred or smooth.
+# 4. Mean Squared Error (MSE)
+# A commonly used metric measuring the average squared difference between the observed values (targets) and the values predicted by a model. It is calculated as the mean of the squared differences and quantifies how well the model's predictions match the actual data, with a lower MSE indicating better performance.
+# 5. R-squared (R²)
+# Also known as the coefficient of determination, R² is a statistical measure that indicates how well the variance in the dependent variable is explained by the independent variables in the model. It ranges from 0 to 1, with values closer to 1 indicating a better fit. The quantity 1 − R² is sometimes used as a loss, describing the proportion of the variance that is not explained by the model.
+
+# 2. Code explanation
+# 1. 
First Formula: Includes the effects of both sight and hearing
+# f = @formula 0 ~ 0 + sight + hearing
+# 2. Second Formula: Includes only the effect of hearing
+# f_hearing = @formula 0 ~ 0 + hearing
+# 3. Third Formula: Includes only the effect of sight
+# f_sight = @formula 0 ~ 0 + sight
diff --git a/docs/literate/tutorials/gettingstarted.jl b/docs/literate/tutorials/gettingstarted.jl
index 52617f9..5ee38f0 100644
--- a/docs/literate/tutorials/gettingstarted.jl
+++ b/docs/literate/tutorials/gettingstarted.jl
@@ -1,12 +1,26 @@
-# Getting started
-
-# # this is a markdown headline
+# Autoencoder
+# Team: Shaolong, Ramnath Rao Bekal, Rahul Bhaskaram
+# 31/8/2024

-## this is a code comment
-
-1 == 1
-
-
-# text will make a new code cell with new output_data
-bla = "shaolong shi2"
+# 1. Definition
+# 1. Learning rate
+# Too high a learning rate might cause the model to oscillate or even diverge during training, leading to poor convergence. This could result in poor quality of the graphics generated by the autoencoder, with significant loss of detail.
+# Too low a learning rate might lead to excessively slow training and the model might get stuck in local minima. This could cause the autoencoder to generate overly smooth images that miss some important features.
+# 2. Epoch Number
+# Too few epochs might prevent the model from adequately learning the features in the data, affecting the quality and accuracy of the reconstructed graphics.
+# Too many epochs might lead to overfitting where the model performs well on the training data but poorly on new, unseen data. An overfitted autoencoder might generate images that are too reliant on specific noise and details of the training data, instead of learning a more general representation of the data.
+# 3. Batch Size
+# Smaller batch sizes generally provide a more accurate gradient estimation but might lead to a less stable training process and longer training times. 
Smaller batches might enable the model to learn more details, potentially leading to better performance in image reconstruction tasks.
+# Larger batch sizes can speed up training and stabilize gradient estimations but may reduce the generalization ability of the model during training. In autoencoders, too large a batch might result in reconstructed images that lack detail and appear more blurred or smooth.
+# 4. Mean Squared Error (MSE)
+# A commonly used metric measuring the average squared difference between the observed values (targets) and the values predicted by a model. It is calculated as the mean of the squared differences and quantifies how well the model's predictions match the actual data, with a lower MSE indicating better performance.
+# 5. R-squared (R²)
+# Also known as the coefficient of determination, R² is a statistical measure that indicates how well the variance in the dependent variable is explained by the independent variables in the model. It ranges from 0 to 1, with values closer to 1 indicating a better fit. The quantity 1 − R² is sometimes used as a loss, describing the proportion of the variance that is not explained by the model.
+# 2. Code explanation
+# 1. First Formula: Includes the effects of both sight and hearing
+# f = @formula 0 ~ 0 + sight + hearing
+# 2. Second Formula: Includes only the effect of hearing
+# f_hearing = @formula 0 ~ 0 + hearing
+# 3. Third Formula: Includes only the effect of sight
+# f_sight = @formula 0 ~ 0 + sight
diff --git a/docs/make.jl b/docs/make.jl
index 2aef1af..c28ab3e 100644
--- a/docs/make.jl
+++ b/docs/make.jl
@@ -1,6 +1,7 @@
 using DeepRecurrentEncoder
 using Documenter
 using Literate, Glob
+import Documenter: makedocs

 DocMeta.setdocmeta!(DeepRecurrentEncoder, :DocTestSetup, :(using DeepRecurrentEncoder); recursive=true)

@@ -15,32 +16,29 @@ for subfolder ∈ ["explanations", "howto", "tutorials", "reference"]
 end

-
-# makedocs(;
-#     modules=[DeepRecurrentEncoder],
-#     authors="Benedikt V. 
Ehinger",
-#     sitename="DeepRecurrentEncoder.jl",
-#     format=Documenter.HTML(;
-#         canonical="https://s-ccs.github.io/DeepRecurrentEncoder.jl",
-#         edit_link="main",
-#         assets=String[],
-#     ),
-#     pages=[
-#         "Home" => "index.md",
-#         "Getting Started" => "generated/tutorials/gettingstarted.md",
-#         "Team_report" => "generated/tutorials/Autoencoder_EEG_Meeting_Minutes.md"
-#     ],
-# )
-
 makedocs(
-    modules=[DeepRecurrentEncoder],
-    authors="Benedikt V. Ehinger",
-    sitename="DeepRecurrentEncoder.jl",
-    format=Documenter.HTML(;
-        canonical="https://s-ccs.github.io/DeepRecurrentEncoder.jl",
-        edit_link="main",
-        assets=String[],
-    ),
+    sitename = "DeepRecurrentEncoder.jl",
+    authors = "Benedikt V. Ehinger",
+    modules = [DeepRecurrentEncoder],
+    repo = "https://github.com/s-ccs/DeepRecurrentEncoder.jl", # repository URL
+    format = Documenter.HTML(;
+        canonical="https://s-ccs.github.io/DeepRecurrentEncoder.jl",
+        edit_link="main",
+        assets=String[],
+    ),
+    clean = true,
+)
+
+# makedocs(
+#     modules=[DeepRecurrentEncoder],
+#     authors="Benedikt V. 
Ehinger", +# sitename="DeepRecurrentEncoder.jl", +# format=Documenter.HTML(; +# canonical="https://s-ccs.github.io/DeepRecurrentEncoder.jl", +# edit_link="main", +# assets=String[], +# ), + pages = Any[ "Home" => "index.md", "Tutorials" => [ @@ -48,10 +46,16 @@ makedocs( "Getting Started" => "generated/tutorials/gettingstarted.md" ] ] -) -deploydocs(; - repo="github.com/s-ccs/DeepRecurrentEncoder.jl", - devbranch="main", - push_preview = true, -) +# deploydocs(; +# repo="github.com/s-ccs/DeepRecurrentEncoder.jl", +# devbranch="main", +# push_preview = true, +# ) + +deploydocs( + repo = "github.com/s-ccs/DeepRecurrentEncoder.jl.git", + branch = "gh-pages", + devbranch = "main", + target = "build" +) \ No newline at end of file diff --git a/docs/src/generated/tutorials/Autoencoder_EEG_Meeting_Minutes.md b/docs/src/generated/tutorials/Autoencoder_EEG_Meeting_Minutes.md index 4b331e7..716cfd4 100644 --- a/docs/src/generated/tutorials/Autoencoder_EEG_Meeting_Minutes.md +++ b/docs/src/generated/tutorials/Autoencoder_EEG_Meeting_Minutes.md @@ -2,56 +2,32 @@ EditURL = "../../../literate/tutorials/Autoencoder_EEG_Meeting_Minutes.jl" ``` -# Things to learn: -1. What is an autoencoder? -2. What is a deep autoencoder. -3. What is LSTM. - -# Questions to be asked to the professor: -1. Do we have to build an autoencoder from scratch? -2. What do you mean by improvement? -3. Do we have to inculcate LSTM into our Autoencoder? - -# 1. Learning rate -- Too high a learning rate might cause the model to oscillate or even diverge during training, leading to poor convergence. This could result in poor quality of the graphics generated by the autoencoder, with significant loss of detail. -- Too low a learning rate might lead to excessively slow training and the model might get stuck in local minima. This could cause the autoencoder to generate overly smooth images that miss some important features. - -# 2. 
Epoch Number
-- Too few epochs might prevent the model from adequately learning the features in the data, affecting the quality and accuracy of the reconstructed graphics.
-- Too many epochs might lead to overfitting where the model performs well on the training data but poorly on new, unseen data. An overfitted autoencoder might generate images that are too reliant on specific noise and details of the training data, instead of learning a more general representation of the data.
-
-# 3. Batch Size
-- Smaller batch sizes generally provide a more accurate gradient estimation but might lead to a less stable training process and longer training times. Smaller batches might enable the model to learn more details, potentially leading to better performance in image reconstruction tasks.
-- Larger batch sizes can speed up training and stabilize gradient estimations but may reduce the generalization ability of the model during training. In autoencoders, too large a batch might result in reconstructed images that lack detail and appear more blurred or smooth.
-
-# Example Literate File
-This is an example of a literate Julia file.
-You can include code and text together to create documentation.
-
-## Section 1: Basic Arithmetic
-Let's start with some basic arithmetic operations.
-
-````@example Autoencoder_EEG_Meeting_Minutes
-x = 1 + 1
-println("1 + 1 = ", x)
-
-y = 2 * 3
-println("2 * 3 = ", y)
-````
-
-## Section 2: Using Functions
-We can also define and use functions.
-
-````@example Autoencoder_EEG_Meeting_Minutes
-function greet(name)
-    return "Hello, $name"
-end
-
-println(greet("world"))
-````
-
-## Section 3: Plotting (requires Plots.jl)
-We can include plots as well.
+Autoencoder
+ Team: Shaolong, Ramnath Rao Bekal, Rahul Bhaskaram
+ 31/8/2024
+
+1. Definition
+1. Learning rate
+Too high a learning rate might cause the model to oscillate or even diverge during training, leading to poor convergence. 
This could result in poor quality of the graphics generated by the autoencoder, with significant loss of detail.
+Too low a learning rate might lead to excessively slow training and the model might get stuck in local minima. This could cause the autoencoder to generate overly smooth images that miss some important features.
+2. Epoch Number
+Too few epochs might prevent the model from adequately learning the features in the data, affecting the quality and accuracy of the reconstructed graphics.
+Too many epochs might lead to overfitting where the model performs well on the training data but poorly on new, unseen data. An overfitted autoencoder might generate images that are too reliant on specific noise and details of the training data, instead of learning a more general representation of the data.
+3. Batch Size
+Smaller batch sizes generally provide a more accurate gradient estimation but might lead to a less stable training process and longer training times. Smaller batches might enable the model to learn more details, potentially leading to better performance in image reconstruction tasks.
+Larger batch sizes can speed up training and stabilize gradient estimations but may reduce the generalization ability of the model during training. In autoencoders, too large a batch might result in reconstructed images that lack detail and appear more blurred or smooth.
+4. Mean Squared Error (MSE)
+A commonly used metric measuring the average squared difference between the observed values (targets) and the values predicted by a model. It is calculated as the mean of the squared differences and quantifies how well the model's predictions match the actual data, with a lower MSE indicating better performance.
+5. R-squared (R²)
+Also known as the coefficient of determination, R² is a statistical measure that indicates how well the variance in the dependent variable is explained by the independent variables in the model. 
It ranges from 0 to 1, with values closer to 1 indicating a better fit. The quantity 1 − R² is sometimes used as a loss, describing the proportion of the variance that is not explained by the model.
+
+2. Code explanation
+1. First Formula: Includes the effects of both sight and hearing
+f = @formula 0 ~ 0 + sight + hearing
+2. Second Formula: Includes only the effect of hearing
+f_hearing = @formula 0 ~ 0 + hearing
+3. Third Formula: Includes only the effect of sight
+f_sight = @formula 0 ~ 0 + sight

 ---

diff --git a/docs/src/generated/tutorials/gettingstarted.md b/docs/src/generated/tutorials/gettingstarted.md
index ef14c5f..43240ef 100644
--- a/docs/src/generated/tutorials/gettingstarted.md
+++ b/docs/src/generated/tutorials/gettingstarted.md
@@ -2,21 +2,32 @@
 EditURL = "../../../literate/tutorials/gettingstarted.jl"
 ```

-Getting started
-
-# this is a markdown headline
-
-````@example gettingstarted
-# this is a code comment
-
-1 == 1
-````
-
-text will make a new code cell with new output_data
-
-````@example gettingstarted
-bla = "shaolong shi2"
-````
+Autoencoder
+ Team: Shaolong, Ramnath Rao Bekal, Rahul Bhaskaram
+ 31/8/2024
+
+1. Definition
+1. Learning rate
+Too high a learning rate might cause the model to oscillate or even diverge during training, leading to poor convergence. This could result in poor quality of the graphics generated by the autoencoder, with significant loss of detail.
+Too low a learning rate might lead to excessively slow training and the model might get stuck in local minima. This could cause the autoencoder to generate overly smooth images that miss some important features.
+2. Epoch Number
+Too few epochs might prevent the model from adequately learning the features in the data, affecting the quality and accuracy of the reconstructed graphics.
+Too many epochs might lead to overfitting where the model performs well on the training data but poorly on new, unseen data. 
An overfitted autoencoder might generate images that are too reliant on specific noise and details of the training data, instead of learning a more general representation of the data.
+3. Batch Size
+Smaller batch sizes generally provide a more accurate gradient estimation but might lead to a less stable training process and longer training times. Smaller batches might enable the model to learn more details, potentially leading to better performance in image reconstruction tasks.
+Larger batch sizes can speed up training and stabilize gradient estimations but may reduce the generalization ability of the model during training. In autoencoders, too large a batch might result in reconstructed images that lack detail and appear more blurred or smooth.
+4. Mean Squared Error (MSE)
+A commonly used metric measuring the average squared difference between the observed values (targets) and the values predicted by a model. It is calculated as the mean of the squared differences and quantifies how well the model's predictions match the actual data, with a lower MSE indicating better performance.
+5. R-squared (R²)
+Also known as the coefficient of determination, R² is a statistical measure that indicates how well the variance in the dependent variable is explained by the independent variables in the model. It ranges from 0 to 1, with values closer to 1 indicating a better fit. The quantity 1 − R² is sometimes used as a loss, describing the proportion of the variance that is not explained by the model.
+
+2. Code explanation
+1. First Formula: Includes the effects of both sight and hearing
+f = @formula 0 ~ 0 + sight + hearing
+2. Second Formula: Includes only the effect of hearing
+f_hearing = @formula 0 ~ 0 + hearing
+3. Third Formula: Includes only the effect of sight
+f_sight = @formula 0 ~ 0 + sight

 ---
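
The MSE and R² metrics defined in the tutorial text above can be written directly in Julia; the following is a minimal sketch with illustrative helper names (`mse`, `r_squared` are not part of the package):

```julia
using Statistics: mean

# Mean squared error: the mean of the squared differences between
# targets y and predictions ŷ; lower is better.
mse(y, ŷ) = mean((y .- ŷ) .^ 2)

# Coefficient of determination: 1 minus the ratio of the residual
# sum of squares to the total sum of squares; closer to 1 is a better fit.
function r_squared(y, ŷ)
    ss_res = sum((y .- ŷ) .^ 2)
    ss_tot = sum((y .- mean(y)) .^ 2)
    return 1 - ss_res / ss_tot
end
```

For a perfect reconstruction (`ŷ == y`), `mse(y, y)` is 0 and `r_squared(y, y)` is 1, matching the definitions above.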