Commit 938dd4b: "initial change"
shaolong5 committed Aug 10, 2024
1 parent bb97857
Showing 5 changed files with 137 additions and 154 deletions.
74 changes: 26 additions & 48 deletions docs/literate/tutorials/Autoencoder_EEG_Meeting_Minutes.jl
@@ -1,48 +1,26 @@
# # Things to learn:
# 1. What is an autoencoder?
# 2. What is a deep autoencoder?
# 3. What is an LSTM?

# # Questions to be asked to the professor:
# 1. Do we have to build an autoencoder from scratch?
# 2. What do you mean by improvement?
# 3. Do we have to incorporate an LSTM into our autoencoder?

# # 1. Learning rate
# - Too high a learning rate might cause the model to oscillate or even diverge during training, leading to poor convergence. This could result in poor-quality reconstructions from the autoencoder, with significant loss of detail.
# - Too low a learning rate might lead to excessively slow training, and the model might get stuck in local minima. This could cause the autoencoder to generate overly smooth images that miss some important features.

# # 2. Epoch Number
# - Too few epochs might prevent the model from adequately learning the features in the data, affecting the quality and accuracy of the reconstructions.
# - Too many epochs might lead to overfitting, where the model performs well on the training data but poorly on new, unseen data. An overfitted autoencoder might generate images that rely too heavily on specific noise and details of the training data instead of learning a more general representation.

# # 3. Batch Size
# - Smaller batch sizes yield noisier gradient estimates and more frequent parameter updates, which can make training less stable, but the added noise may help the model escape poor local minima and learn finer details, potentially leading to better performance in image-reconstruction tasks.
# - Larger batch sizes can speed up training and stabilize gradient estimates, but may reduce the generalization ability of the model. In autoencoders, too large a batch might result in reconstructed images that lack detail and appear blurred or overly smooth.
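
# As a sketch of where these three hyperparameters enter a training loop
# (assuming Flux.jl; `encoder`, `decoder`, and the random `data` below are
# hypothetical placeholders, not part of this package):

using Flux

encoder = Chain(Dense(128 => 32, relu))  # compress 128 samples to 32
decoder = Chain(Dense(32 => 128))        # reconstruct the input
model = Chain(encoder, decoder)

learning_rate = 1e-3  # 1. learning rate
epochs = 20           # 2. epoch number
batchsize = 64        # 3. batch size

data = rand(Float32, 128, 1024)  # toy stand-in for EEG segments
loader = Flux.DataLoader(data; batchsize = batchsize, shuffle = true)

opt_state = Flux.setup(Adam(learning_rate), model)
for epoch in 1:epochs
    for x in loader
        grads = Flux.gradient(m -> Flux.mse(m(x), x), model)
        Flux.update!(opt_state, model, grads[1])
    end
end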

# # Example Literate File
# This is an example of a literate Julia file.
# You can include code and text together to create documentation.

# ## Section 1: Basic Arithmetic
# Let's start with some basic arithmetic operations.

x = 1 + 1
println("1 + 1 = ", x)

y = 2 * 3
println("2 * 3 = ", y)

# ## Section 2: Using Functions
# We can also define and use functions.

function greet(name)
return "Hello, $name"
end

println(greet("world"))

# ## Section 3: Plotting (requires Plots.jl)
# We can include plots as well.
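
# A minimal sketch, assuming Plots.jl is installed:

using Plots

x_plot = range(0, 2π; length = 100)
plot(x_plot, sin.(x_plot); label = "sin(x)", xlabel = "x", ylabel = "y")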


# # Autoencoder
# Team: Shaolong, Ramnath Rao Bekal, Rahul Bhaskaram
# 31/8/2024

# # 1. Definitions
# ## 1. Learning rate
# - Too high a learning rate might cause the model to oscillate or even diverge during training, leading to poor convergence. This could result in poor-quality reconstructions from the autoencoder, with significant loss of detail.
# - Too low a learning rate might lead to excessively slow training, and the model might get stuck in local minima. This could cause the autoencoder to generate overly smooth images that miss some important features.
# ## 2. Epoch Number
# - Too few epochs might prevent the model from adequately learning the features in the data, affecting the quality and accuracy of the reconstructions.
# - Too many epochs might lead to overfitting, where the model performs well on the training data but poorly on new, unseen data. An overfitted autoencoder might generate images that rely too heavily on specific noise and details of the training data instead of learning a more general representation.
# ## 3. Batch Size
# - Smaller batch sizes yield noisier gradient estimates and more frequent parameter updates, which can make training less stable, but the added noise may help the model escape poor local minima and learn finer details, potentially leading to better performance in image-reconstruction tasks.
# - Larger batch sizes can speed up training and stabilize gradient estimates, but may reduce the generalization ability of the model. In autoencoders, too large a batch might result in reconstructed images that lack detail and appear blurred or overly smooth.
# ## 4. Mean Squared Error (MSE)
# - MSE is a commonly used metric that measures the average squared difference between the observed values (targets) and the values predicted by a model; it is computed as the mean of the squared differences. It quantifies how well the model's predictions match the actual data, with a lower MSE indicating better performance.
# ## 5. R-squared (R²)
# - R², also known as the coefficient of determination, is a statistical measure of how much of the variance in the dependent variable is explained by the model. It ranges from 0 to 1, with values closer to 1 indicating a better fit; 1 − R² is the proportion of the variance left unexplained by the model.
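
# A minimal sketch of both metrics in plain Julia (`y` and `ŷ` below are
# hypothetical target and prediction vectors, used only for illustration):

using Statistics

mse(y, ŷ) = mean((y .- ŷ) .^ 2)
r_squared(y, ŷ) = 1 - sum((y .- ŷ) .^ 2) / sum((y .- mean(y)) .^ 2)

y = [1.0, 2.0, 3.0, 4.0]
ŷ = [1.1, 1.9, 3.2, 3.8]
println("MSE = ", mse(y, ŷ))        # mean of squared residuals
println("R²  = ", r_squared(y, ŷ))  # share of variance explained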

# # 2. Code explanation
# 1. First formula: includes the effects of both sight and hearing:
#    `f = @formula 0 ~ 0 + sight + hearing`
# 2. Second formula: includes only the effect of hearing:
#    `f_hearing = @formula 0 ~ 0 + hearing`
# 3. Third formula: includes only the effect of sight:
#    `f_sight = @formula 0 ~ 0 + sight`
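
# The three formulas as runnable code (a sketch assuming StatsModels.jl-style
# `@formula` syntax, as used by regression-based EEG packages such as Unfold.jl;
# `sight` and `hearing` would be columns of an event table):

using StatsModels

f         = @formula 0 ~ 0 + sight + hearing  # both effects, no intercept
f_hearing = @formula 0 ~ 0 + hearing          # hearing only
f_sight   = @formula 0 ~ 0 + sight            # sight only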
34 changes: 24 additions & 10 deletions docs/literate/tutorials/gettingstarted.jl
@@ -1,12 +1,26 @@
# Getting started

# # this is a markdown headline
# # Autoencoder
# Team: Shaolong, Ramnath Rao Bekal, Rahul Bhaskaram
# 31/8/2024

## this is a code comment

1 == 1


# text will make a new code cell with new output
bla = "shaolong shi2"
# # 1. Definitions
# ## 1. Learning rate
# - Too high a learning rate might cause the model to oscillate or even diverge during training, leading to poor convergence. This could result in poor-quality reconstructions from the autoencoder, with significant loss of detail.
# - Too low a learning rate might lead to excessively slow training, and the model might get stuck in local minima. This could cause the autoencoder to generate overly smooth images that miss some important features.
# ## 2. Epoch Number
# - Too few epochs might prevent the model from adequately learning the features in the data, affecting the quality and accuracy of the reconstructions.
# - Too many epochs might lead to overfitting, where the model performs well on the training data but poorly on new, unseen data. An overfitted autoencoder might generate images that rely too heavily on specific noise and details of the training data instead of learning a more general representation.
# ## 3. Batch Size
# - Smaller batch sizes yield noisier gradient estimates and more frequent parameter updates, which can make training less stable, but the added noise may help the model escape poor local minima and learn finer details, potentially leading to better performance in image-reconstruction tasks.
# - Larger batch sizes can speed up training and stabilize gradient estimates, but may reduce the generalization ability of the model. In autoencoders, too large a batch might result in reconstructed images that lack detail and appear blurred or overly smooth.
# ## 4. Mean Squared Error (MSE)
# - MSE is a commonly used metric that measures the average squared difference between the observed values (targets) and the values predicted by a model; it is computed as the mean of the squared differences. It quantifies how well the model's predictions match the actual data, with a lower MSE indicating better performance.
# ## 5. R-squared (R²)
# - R², also known as the coefficient of determination, is a statistical measure of how much of the variance in the dependent variable is explained by the model. It ranges from 0 to 1, with values closer to 1 indicating a better fit; 1 − R² is the proportion of the variance left unexplained by the model.

# # 2. Code explanation
# 1. First formula: includes the effects of both sight and hearing:
#    `f = @formula 0 ~ 0 + sight + hearing`
# 2. Second formula: includes only the effect of hearing:
#    `f_hearing = @formula 0 ~ 0 + hearing`
# 3. Third formula: includes only the effect of sight:
#    `f_sight = @formula 0 ~ 0 + sight`
66 changes: 35 additions & 31 deletions docs/make.jl
@@ -1,6 +1,7 @@
using DeepRecurrentEncoder
using Documenter
using Literate, Glob
import Documenter: makedocs
DocMeta.setdocmeta!(DeepRecurrentEncoder, :DocTestSetup, :(using DeepRecurrentEncoder); recursive=true)


@@ -15,43 +16,46 @@ for subfolder ∈ ["explanations", "howto", "tutorials", "reference"]
end



# makedocs(;
# modules=[DeepRecurrentEncoder],
# authors="Benedikt V. Ehinger",
# sitename="DeepRecurrentEncoder.jl",
# format=Documenter.HTML(;
# canonical="https://s-ccs.github.io/DeepRecurrentEncoder.jl",
# edit_link="main",
# assets=String[],
# ),
# pages=[
# "Home" => "index.md",
# "Getting Started" => "generated/tutorials/gettingstarted.md",
# "Team_report" => "generated/tutorials/Autoencoder_EEG_Meeting_Minutes.md"
# ],
# )

makedocs(
    sitename = "DeepRecurrentEncoder.jl",
    authors = "Benedikt V. Ehinger",
    modules = [DeepRecurrentEncoder],
    repo = "https://github.com/s-ccs/DeepRecurrentEncoder.jl", # repository URL
    format = Documenter.HTML(;
        canonical = "https://s-ccs.github.io/DeepRecurrentEncoder.jl",
        edit_link = "main",
        assets = String[],
    ),
    clean = true,


pages = Any[
"Home" => "index.md",
"Tutorials" => [
"Autoencoder EEG Meeting Minutes" => "generated/tutorials/Autoencoder_EEG_Meeting_Minutes.md",
"Getting Started" => "generated/tutorials/gettingstarted.md"
]
]
)

# deploydocs(;
# repo="github.com/s-ccs/DeepRecurrentEncoder.jl",
# devbranch="main",
# push_preview = true,
# )

deploydocs(
repo = "github.com/s-ccs/DeepRecurrentEncoder.jl.git",
branch = "gh-pages",
devbranch = "main",
target = "build"
)
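
# To build the documentation locally, one would typically run
# `julia --project=docs docs/make.jl` from the repository root
# (assuming the standard Documenter.jl project layout).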
76 changes: 26 additions & 50 deletions docs/src/generated/tutorials/Autoencoder_EEG_Meeting_Minutes.md
@@ -2,56 +2,32 @@
EditURL = "../../../literate/tutorials/Autoencoder_EEG_Meeting_Minutes.jl"
```

# Things to learn:
1. What is an autoencoder?
2. What is a deep autoencoder?
3. What is an LSTM?

# Questions to be asked to the professor:
1. Do we have to build an autoencoder from scratch?
2. What do you mean by improvement?
3. Do we have to incorporate an LSTM into our autoencoder?

# 1. Learning rate
- Too high a learning rate might cause the model to oscillate or even diverge during training, leading to poor convergence. This could result in poor-quality reconstructions from the autoencoder, with significant loss of detail.
- Too low a learning rate might lead to excessively slow training, and the model might get stuck in local minima. This could cause the autoencoder to generate overly smooth images that miss some important features.

# 2. Epoch Number
- Too few epochs might prevent the model from adequately learning the features in the data, affecting the quality and accuracy of the reconstructions.
- Too many epochs might lead to overfitting, where the model performs well on the training data but poorly on new, unseen data. An overfitted autoencoder might generate images that rely too heavily on specific noise and details of the training data instead of learning a more general representation.

# 3. Batch Size
- Smaller batch sizes yield noisier gradient estimates and more frequent parameter updates, which can make training less stable, but the added noise may help the model escape poor local minima and learn finer details, potentially leading to better performance in image-reconstruction tasks.
- Larger batch sizes can speed up training and stabilize gradient estimates, but may reduce the generalization ability of the model. In autoencoders, too large a batch might result in reconstructed images that lack detail and appear blurred or overly smooth.

# Example Literate File
This is an example of a literate Julia file.
You can include code and text together to create documentation.

## Section 1: Basic Arithmetic
Let's start with some basic arithmetic operations.

````@example Autoencoder_EEG_Meeting_Minutes
x = 1 + 1
println("1 + 1 = ", x)
y = 2 * 3
println("2 * 3 = ", y)
````

## Section 2: Using Functions
We can also define and use functions.

````@example Autoencoder_EEG_Meeting_Minutes
function greet(name)
return "Hello, $name"
end
println(greet("world"))
````

## Section 3: Plotting (requires Plots.jl)
We can include plots as well.
# Autoencoder
Team: Shaolong, Ramnath Rao Bekal, Rahul Bhaskaram
31/8/2024

# 1. Definitions

## 1. Learning rate
- Too high a learning rate might cause the model to oscillate or even diverge during training, leading to poor convergence. This could result in poor-quality reconstructions from the autoencoder, with significant loss of detail.
- Too low a learning rate might lead to excessively slow training, and the model might get stuck in local minima. This could cause the autoencoder to generate overly smooth images that miss some important features.

## 2. Epoch Number
- Too few epochs might prevent the model from adequately learning the features in the data, affecting the quality and accuracy of the reconstructions.
- Too many epochs might lead to overfitting, where the model performs well on the training data but poorly on new, unseen data. An overfitted autoencoder might generate images that rely too heavily on specific noise and details of the training data instead of learning a more general representation.

## 3. Batch Size
- Smaller batch sizes yield noisier gradient estimates and more frequent parameter updates, which can make training less stable, but the added noise may help the model escape poor local minima and learn finer details, potentially leading to better performance in image-reconstruction tasks.
- Larger batch sizes can speed up training and stabilize gradient estimates, but may reduce the generalization ability of the model. In autoencoders, too large a batch might result in reconstructed images that lack detail and appear blurred or overly smooth.

## 4. Mean Squared Error (MSE)
- MSE is a commonly used metric that measures the average squared difference between the observed values (targets) and the values predicted by a model; it is computed as the mean of the squared differences. It quantifies how well the model's predictions match the actual data, with a lower MSE indicating better performance.

## 5. R-squared (R²)
- R², also known as the coefficient of determination, is a statistical measure of how much of the variance in the dependent variable is explained by the model. It ranges from 0 to 1, with values closer to 1 indicating a better fit; 1 − R² is the proportion of the variance left unexplained by the model.

# 2. Code explanation

1. First formula: includes the effects of both sight and hearing:
   `f = @formula 0 ~ 0 + sight + hearing`
2. Second formula: includes only the effect of hearing:
   `f_hearing = @formula 0 ~ 0 + hearing`
3. Third formula: includes only the effect of sight:
   `f_sight = @formula 0 ~ 0 + sight`

---

41 changes: 26 additions & 15 deletions docs/src/generated/tutorials/gettingstarted.md
@@ -2,21 +2,32 @@
EditURL = "../../../literate/tutorials/gettingstarted.jl"
```

Getting started

# this is a markdown headline

````@example gettingstarted
# this is a code comment
1 == 1
````

text will make a new code cell with new output

````@example gettingstarted
bla = "shaolong shi2"
````
# Autoencoder
Team: Shaolong, Ramnath Rao Bekal, Rahul Bhaskaram
31/8/2024

# 1. Definitions

## 1. Learning rate
- Too high a learning rate might cause the model to oscillate or even diverge during training, leading to poor convergence. This could result in poor-quality reconstructions from the autoencoder, with significant loss of detail.
- Too low a learning rate might lead to excessively slow training, and the model might get stuck in local minima. This could cause the autoencoder to generate overly smooth images that miss some important features.

## 2. Epoch Number
- Too few epochs might prevent the model from adequately learning the features in the data, affecting the quality and accuracy of the reconstructions.
- Too many epochs might lead to overfitting, where the model performs well on the training data but poorly on new, unseen data. An overfitted autoencoder might generate images that rely too heavily on specific noise and details of the training data instead of learning a more general representation.

## 3. Batch Size
- Smaller batch sizes yield noisier gradient estimates and more frequent parameter updates, which can make training less stable, but the added noise may help the model escape poor local minima and learn finer details, potentially leading to better performance in image-reconstruction tasks.
- Larger batch sizes can speed up training and stabilize gradient estimates, but may reduce the generalization ability of the model. In autoencoders, too large a batch might result in reconstructed images that lack detail and appear blurred or overly smooth.

## 4. Mean Squared Error (MSE)
- MSE is a commonly used metric that measures the average squared difference between the observed values (targets) and the values predicted by a model; it is computed as the mean of the squared differences. It quantifies how well the model's predictions match the actual data, with a lower MSE indicating better performance.

## 5. R-squared (R²)
- R², also known as the coefficient of determination, is a statistical measure of how much of the variance in the dependent variable is explained by the model. It ranges from 0 to 1, with values closer to 1 indicating a better fit; 1 − R² is the proportion of the variance left unexplained by the model.

# 2. Code explanation

1. First formula: includes the effects of both sight and hearing:
   `f = @formula 0 ~ 0 + sight + hearing`
2. Second formula: includes only the effect of hearing:
   `f_hearing = @formula 0 ~ 0 + hearing`
3. Third formula: includes only the effect of sight:
   `f_sight = @formula 0 ~ 0 + sight`

---

