Commit fce6c11

documentation
1 parent 62687e6 commit fce6c11

2 files changed: +136 −9 lines changed


docs/literate/tutorials/Autoencoder_EEG_Meeting_Minutes.jl

Lines changed: 64 additions & 0 deletions
@@ -6,6 +6,70 @@
 #
 # # 1. Definition
 
+# # What is an Autoencoder?
+#
+# An autoencoder is a type of artificial neural network used to learn efficient representations of data,
+# typically for dimensionality reduction or feature learning. It consists of two main parts:
+#
+# 1. **Encoder**: compresses the input data into a lower-dimensional representation,
+#    capturing the essential features while discarding noise and redundancy.
+#    The encoder typically reduces the input to a much smaller number of dimensions.
+# 2. **Decoder**: reconstructs the original input from the compressed representation,
+#    aiming to produce an output as close as possible to the original input data.
+#
+# Autoencoders are trained with unsupervised learning: the network learns to minimize the difference
+# between the input and the output (the reconstruction error). They are widely used for data
+# compression, denoising, anomaly detection, and feature extraction.
+#
+# # A List of Examples
+#
+# ### 1. Medical Imaging
+#
+# - **Scenario**: A doctor needs to analyze a large set of medical images,
+#   such as MRIs, to detect abnormalities.
+# - **Application**: Autoencoders can enhance image quality or highlight
+#   areas of concern, making it easier for doctors to identify potential
+#   health issues such as tumors or fractures.
+#
+# ### 2. Face Recognition on Social Media
+#
+# - **Scenario**: Social media platforms automatically tag people in photos.
+# - **Application**: Autoencoders can compress images and extract their
+#   essential features to recognize faces, making it easier to tag friends
+#   in photos without manual input.
+#
+# ### 3. Photo and Video Compression
+#
+# - **Scenario**: You want to save space on your smartphone or computer by
+#   compressing photos or videos.
+# - **Application**: An autoencoder can reduce the file size of images or
+#   videos while preserving important details, letting you store more files
+#   without a significant loss of quality.
+#
+# ### 4. Noise Reduction in Audio
+#
+# - **Scenario**: You have a recording from a crowded place, such as a
+#   lecture or meeting, with a lot of background noise.
+# - **Application**: A denoising autoencoder can clean up the audio by
+#   filtering out the background noise, making the speech clearer and
+#   easier to understand.
+#
+# ### 5. Detecting Fraudulent Transactions
+#
+# - **Scenario**: Banks monitor transactions for potential fraud, such as
+#   unusual spending patterns on a credit card.
+# - **Application**: An autoencoder trained on normal transaction data can
+#   flag transactions that deviate significantly from typical behavior as
+#   potentially fraudulent.
+#
+# ## Key Concept
+#
+# Autoencoders are a versatile machine-learning tool for tasks such as
+# dimensionality reduction, anomaly detection, and data compression. They
+# work by learning to encode input data into a lower-dimensional
+# representation and then decode it back to its original form.
+
 #
 # 1.1 Learning Rate
 #
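
The encoder/decoder pipeline described in the added section can be sketched in Julia with Flux.jl. This is an illustrative toy, not code from this tutorial: the 64-dimensional input, layer sizes, activation choices, and training setup are all assumptions.

```julia
using Flux

# Encoder: compress a 64-dimensional input down to an 8-dimensional code.
encoder = Chain(Dense(64 => 32, relu), Dense(32 => 8))

# Decoder: reconstruct the original 64 dimensions from the code.
decoder = Chain(Dense(8 => 32, relu), Dense(32 => 64))

model = Chain(encoder, decoder)

# Reconstruction loss: mean squared error between input and output.
loss(m, x) = Flux.mse(m(x), x)

x = randn(Float32, 64, 16)           # a batch of 16 random samples
opt = Flux.setup(Adam(1f-3), model)  # Adam with a modest learning rate
for epoch in 1:100
    Flux.train!(loss, model, [(x,)], opt)
end
```

Because the loss compares the output against the input itself, no labels are needed, which is what makes the training unsupervised.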

docs/src/generated/tutorials/Autoencoder_EEG_Meeting_Minutes.md

Lines changed: 72 additions & 9 deletions
@@ -8,6 +8,69 @@ EditURL = "../../../literate/tutorials/Autoencoder_EEG_Meeting_Minutes.jl"
 
 # 1. Definition
 
+# What is an Autoencoder?
+
+An autoencoder is a type of artificial neural network used to learn efficient representations of data,
+typically for dimensionality reduction or feature learning. It consists of two main parts:
+
+1. **Encoder**: compresses the input data into a lower-dimensional representation,
+   capturing the essential features while discarding noise and redundancy.
+   The encoder typically reduces the input to a much smaller number of dimensions.
+2. **Decoder**: reconstructs the original input from the compressed representation,
+   aiming to produce an output as close as possible to the original input data.
+
+Autoencoders are trained with unsupervised learning: the network learns to minimize the difference
+between the input and the output (the reconstruction error). They are widely used for data
+compression, denoising, anomaly detection, and feature extraction.
+
+# A List of Examples
+
+### 1. Medical Imaging
+
+- **Scenario**: A doctor needs to analyze a large set of medical images,
+  such as MRIs, to detect abnormalities.
+- **Application**: Autoencoders can enhance image quality or highlight
+  areas of concern, making it easier for doctors to identify potential
+  health issues such as tumors or fractures.
+
+### 2. Face Recognition on Social Media
+
+- **Scenario**: Social media platforms automatically tag people in photos.
+- **Application**: Autoencoders can compress images and extract their
+  essential features to recognize faces, making it easier to tag friends
+  in photos without manual input.
+
+### 3. Photo and Video Compression
+
+- **Scenario**: You want to save space on your smartphone or computer by
+  compressing photos or videos.
+- **Application**: An autoencoder can reduce the file size of images or
+  videos while preserving important details, letting you store more files
+  without a significant loss of quality.
+
+### 4. Noise Reduction in Audio
+
+- **Scenario**: You have a recording from a crowded place, such as a
+  lecture or meeting, with a lot of background noise.
+- **Application**: A denoising autoencoder can clean up the audio by
+  filtering out the background noise, making the speech clearer and
+  easier to understand.
+
+### 5. Detecting Fraudulent Transactions
+
+- **Scenario**: Banks monitor transactions for potential fraud, such as
+  unusual spending patterns on a credit card.
+- **Application**: An autoencoder trained on normal transaction data can
+  flag transactions that deviate significantly from typical behavior as
+  potentially fraudulent.
+
+## Key Concept
+
+Autoencoders are a versatile machine-learning tool for tasks such as
+dimensionality reduction, anomaly detection, and data compression. They
+work by learning to encode input data into a lower-dimensional
+representation and then decode it back to its original form.
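
The fraud-detection idea above, flagging inputs that a trained autoencoder reconstructs poorly, can be sketched as follows. This is a toy illustration with synthetic data, not code from this tutorial; the 3-sigma threshold is an arbitrary choice.

```julia
using Statistics

# Per-sample reconstruction error (mean squared difference between a
# sample and its reconstruction).
reconstruction_error(x, xhat) = mean(abs2, xhat .- x)

# Toy stand-in data: columns of `X` are "transactions"; `Xhat` plays the
# role of their reconstructions by a trained autoencoder.
X = randn(10, 1000)
Xhat = X .+ 0.05 .* randn(10, 1000)   # normal samples reconstruct well
Xhat[:, 1] .+= 5.0                    # one sample reconstructs poorly

errors = [reconstruction_error(X[:, j], Xhat[:, j]) for j in 1:size(X, 2)]

# Flag samples whose error is far above typical (a simple 3-sigma rule).
threshold = mean(errors) + 3 * std(errors)
anomalies = findall(>(threshold), errors)
```

Because the autoencoder only ever saw normal behavior during training, anomalous inputs lie off the learned manifold and incur a much larger reconstruction error.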
+
 1.1 Learning Rate
 
- Too high a learning rate might cause the model to oscillate or even diverge during
@@ -146,53 +209,53 @@ push!(loss_test_rsquared_hearing_0, l)
 # 3. Graphs and Results Explanation
 Output of Predicted model
 
-![figure1](image/figure1.png)
+![](image/figure1.png)
 
 Output of Actual model
 
-![figure2](image/figure2.png)
+![](image/figure2.png)
 
 The red shape corresponds to a sight effect of 0, while the blue shape corresponds to a sight
 effect of 10. As we can clearly observe, the blue shape is fuller and resembles a butterfly
 more due to the larger sight effect.
 
-![figure3](image/figure3.png)
+![](image/figure3.png)
 
 The hearing + sight curve outperforms the hearing-only curve in terms of MSE loss,
 as the hearing + sight curve is closer to 0.
 
-![figure4](image/figure4.png)
+![](image/figure4.png)
 
 As we can see, the hearing + sight*0 curve is similar to the hearing-only curve.
 The small differences are due to the randomly generated dataset.
 
-![figure5](image/figure5.png)
+![](image/figure5.png)
 
 Test curve
 R-squared error curves for different hidden channels
 
-![figure6](image/figure6.png)
+![](image/figure6.png)
 
 
 Predict hearing with the effect value of 10
 
-![figure7](image/figure7.png)
+![](image/figure7.png)
 
 
 Training curve
 MSE loss improves with more epochs or more hidden layers, as it gets closer to 0.
 
-![figure8](image/figure8.png)
+![](image/figure8.png)
 
 
 Training curve
 R-squared loss improves with more hidden channels or more epochs. The R-squared value
 hasn't reached 1 due to the presence of pink noise.
 
-![figure9](image/figure9.png)
+![](image/figure9.png)
 
 ---
 