@@ -8,6 +8,69 @@ EditURL = "../../../literate/tutorials/Autoencoder_EEG_Meeting_Minutes.jl"

# 1. Definition

+ # What is an Autoencoder?
+
+ An autoencoder is a type of artificial neural network used to learn efficient representations of data,
+ typically for the purpose of dimensionality reduction or feature learning. It consists of two main parts:
+
+ 1. **Encoder**: This part compresses the input data into a lower-dimensional representation,
+ capturing the essential features while discarding noise and redundancy.
+ The encoder typically reduces the input to a much smaller number of dimensions.
+ 2. **Decoder**: This part reconstructs the original input from the compressed representation.
+ The goal of the decoder is to produce an output as close as possible to the original input data.
+
+ Autoencoders are trained using unsupervised learning, where the network learns to minimize the difference
+ between the input and the output (the reconstruction error). They are widely used in applications such as
+ data compression, denoising, anomaly detection, and feature extraction.
25+
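To make the encoder/decoder idea concrete, here is a minimal sketch of a linear autoencoder trained by gradient descent. This is an illustration in Python/NumPy on synthetic data, not the tutorial's own Julia model; the layer sizes and learning rate are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 samples of 8 features that actually live on a
# 3-dimensional subspace, so a 3-unit bottleneck can represent them.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 8))
X = latent @ mixing

# Encoder (8 -> 3) and decoder (3 -> 8) as plain weight matrices.
W_enc = rng.normal(scale=0.1, size=(8, 3))
W_dec = rng.normal(scale=0.1, size=(3, 8))

lr = 0.01
for _ in range(500):
    Z = X @ W_enc          # encode: compress to 3 dimensions
    X_hat = Z @ W_dec      # decode: reconstruct all 8 features
    err = X_hat - X        # reconstruction error
    # Gradients of the mean-squared reconstruction loss.
    grad_dec = (2 / X.size) * (Z.T @ err)
    grad_enc = (2 / X.size) * (X.T @ (err @ W_dec.T))
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

loss = np.mean((X @ W_enc @ W_dec - X) ** 2)
```

Because the synthetic data is exactly rank 3, the reconstruction loss falls well below the initial value as the bottleneck learns the subspace; a real (nonlinear, noisy) dataset would leave some irreducible error.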
+ # A List of Examples
+
+ ### 1. Medical Imaging
+
+ - **Scenario**: A doctor needs to analyze a large set of medical
+ images, such as MRIs, to detect any abnormalities.
+ - **Application**: Autoencoders can help enhance the image quality
+ or highlight areas of concern, making it easier for doctors
+ to identify potential health issues, such as tumors or fractures.
+
+ ### 2. Face Recognition on Social Media
+
+ - **Scenario**: Social media platforms automatically tag people in photos.
+ - **Application**: Autoencoders can compress and extract essential
+ features from images to recognize faces, making it easier
+ to tag friends in photos without manual input.
+
+ ### 3. Photo and Video Compression
+
+ - **Scenario**: You want to save space on your smartphone or computer
+ by compressing photos or videos.
+ - **Application**: An autoencoder can reduce the file size of images or
+ videos while preserving important details, allowing you to store
+ more files without significantly losing quality.
+
+ ### 4. Noise Reduction in Audio
+
+ - **Scenario**: You have a recording from a crowded place, like a lecture
+ or meeting, with a lot of background noise.
+ - **Application**: A denoising autoencoder can clean up the audio by
+ filtering out the background noise, making the speech clearer
+ and easier to understand.
+
+ ### 5. Detecting Fraudulent Transactions
+
+ - **Scenario**: Banks want to monitor transactions for potential fraud,
+ such as unusual spending patterns on a credit card.
+ - **Application**: An autoencoder trained on normal transaction data can
+ identify transactions that significantly deviate from typical
+ behavior, flagging them as potentially fraudulent.
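
The fraud-detection pattern above (train on normal data only, then flag inputs with a high reconstruction error) can be sketched as follows. This is an illustrative Python/NumPy example on synthetic data: the linear encoder is fit in closed form via SVD (the optimum a linear autoencoder would converge to), and the 99th-percentile threshold is an arbitrary choice for the sketch; a real system would pick it from a validation set.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Normal" transactions: 500 samples of 6 features lying near a
# 2-dimensional pattern, plus a little noise.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 6))
normal = latent @ mixing + 0.05 * rng.normal(size=(500, 6))

# Fit a linear encoder/decoder on normal data only, via SVD.
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
components = Vt[:2]  # top 2 directions span the normal pattern

def reconstruction_error(x):
    z = (x - mean) @ components.T     # encode: project to 2 dimensions
    x_hat = z @ components + mean     # decode: reconstruct 6 features
    return float(np.mean((x - x_hat) ** 2))

# Threshold: 99th percentile of the normal data's own errors.
threshold = np.percentile([reconstruction_error(x) for x in normal], 99)

typical = latent[0] @ mixing       # fits the learned pattern
odd = 5.0 * rng.normal(size=6)     # does not fit the pattern
flag = reconstruction_error(odd) > threshold
```

The point of the sketch: an input that matches the learned structure reconstructs almost perfectly, while an input off that structure cannot be represented by the bottleneck and produces a large error, so it gets flagged.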
+
+ ## Key Concept
+
+ Autoencoders are a versatile tool in machine learning for tasks
+ such as dimensionality reduction, anomaly detection, and data compression.
+ They work by learning to encode the input data into a lower-dimensional
+ representation and then decode it back to its original form.
+
1.1 Learning Rate

- Too high a learning rate might cause the model to oscillate or even diverge during
@@ -146,53 +209,53 @@ push!(loss_test_rsquared_hearing_0, l)
# 3. Graphs and Results Explanation
Output of Predicted model

- ![figure1](image/figure1.png)
+ ![](image/figure1.png)

Output of Actual model

- ![figure2](image/figure2.png)
+ ![](image/figure2.png)

The red shape corresponds to a sight effect of 0, while the blue shape corresponds to a sight
effect of 10. As we can clearly observe, the blue shape is fuller and resembles a butterfly
more due to the larger sight effect.

- ![figure3](image/figure3.png)
+ ![](image/figure3.png)

The hearing + sight curve outperforms the hearing-only curve in terms of MSE loss,
as the hearing + sight curve is closer to 0.

- ![figure4](image/figure4.png)
+ ![](image/figure4.png)

As we can see, the hearing + sight*0 curve is similar to the hearing-only curve.
The small differences are due to the randomly generated dataset.

- ![figure5](image/figure5.png)
+ ![](image/figure5.png)

Test curve
R-squared error curves for different hidden channels

- ![figure6](image/figure6.png)
+ ![](image/figure6.png)



Predicted hearing with the effect value of 10

- ![figure7](image/figure7.png)
+ ![](image/figure7.png)



Training curve
MSE loss improves with more epochs or more hidden layers, as it gets closer to 0.

- ![figure8](image/figure8.png)
+ ![](image/figure8.png)



Training curve
R-squared loss improves with more hidden channels or more epochs. The R-squared value
hasn't reached 1 due to the presence of pink noise.

- ![figure9](image/figure9.png)
+ ![](image/figure9.png)

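Why R-squared stays below 1: the target itself contains irreducible noise that no model can reconstruct, so some residual always remains. A small illustrative computation of R-squared on a synthetic noisy signal (Python; this is not the tutorial's actual EEG data, and white noise stands in for the pink noise):

```python
import numpy as np

rng = np.random.default_rng(2)

# A noisy target and a prediction that recovers only the clean part,
# standing in for the reconstructed curves above.
t = np.linspace(0.0, 1.0, 200)
actual = np.sin(2 * np.pi * 3 * t) + 0.3 * rng.normal(size=200)
predicted = np.sin(2 * np.pi * 3 * t)  # noise-free reconstruction

# R-squared = 1 - (residual sum of squares) / (total sum of squares).
ss_res = np.sum((actual - predicted) ** 2)
ss_tot = np.sum((actual - actual.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
```

Even with a perfect reconstruction of the underlying signal, `ss_res` equals the noise energy, so `r_squared` lands strictly below 1.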
---
