moabb.datasets.Thielen2015
class moabb.datasets.Thielen2015
c-VEP dataset from Thielen et al. (2015)
Dataset [1] from the study on reconvolution for c-VEP [2].
Dataset summary

Name:             Thielen2015
#Subj:            12
#Chan:            64
#Classes:         36
#Trials / class:  3
#Epochs / class:  27216 NT / 27216 T
Trials length:    4.2 s
Sampling rate:    2048 Hz
#Sessions:        1
Dataset description
EEG recordings were acquired at a sampling rate of 2048 Hz from 64 Ag/AgCl electrodes, amplified by a Biosemi ActiveTwo EEG amplifier. Electrode placement followed the international 10-10 system.
During the experimental sessions, participants actively operated a 6 x 6 visual speller brain-computer interface (BCI) with real-time feedback, encompassing 36 distinct classes. Each cell within the symbol grid underwent luminance modulation at full contrast, achieved through the application of pseudo-random noise-codes derived from a set of modulated Gold codes. These binary codes have a balanced distribution of ones and zeros while adhering to a limited run-length pattern, with a maximum run-length of 2 bits. Codes were presented at a rate of 120 Hz. Given that one cycle of these modulated Gold codes comprises 126 bits, the duration of a complete cycle spans 1.05 seconds.
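The cycle and trial timings quoted here and below follow directly from the presentation rate; a minimal illustrative check of that arithmetic (not MOABB code, just the numbers from this description):

    # Stimulus timing implied by the description above (illustrative check only).
    presentation_rate_hz = 120   # code bits presented per second
    bits_per_cycle = 126         # bits in one modulated Gold code cycle

    cycle_duration_s = bits_per_cycle / presentation_rate_hz
    print(cycle_duration_s)      # 1.05 seconds per code cycle
    print(4 * cycle_duration_s)  # 4.2 seconds for a fixed-length trial (4 cycles)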
Throughout the experiment, participants completed four distinct blocks: an initial practice block of two runs, followed by a training block of one run, then a copy-spelling block of six runs, and finally a free-spelling block of one run. Between the training and copy-spelling blocks, a classifier was calibrated using data from the training block; this calibrated classifier was then applied during both the copy-spelling and free-spelling runs. During calibration, the stimulation codes were also tailored and optimized for each individual participant.
Among the six copy-spelling runs, three were fixed-length runs. Trials in these runs started with a cueing phase, during which the target symbol was highlighted in green for 1 second. Participants kept their gaze fixated on the target symbol while all symbols flashed in sync with their corresponding pseudo-random noise-codes for 4.2 seconds (equivalent to 4 code cycles). Immediately following this stimulation, the classifier output was shown by coloring the cell blue for 1 second. Each run consisted of 36 trials, presented in a randomized order.
This dataset includes only the three copy-spelling runs with fixed-length trials lasting 4.2 seconds (four code cycles). The other three copy-spelling runs used a dynamic stopping procedure, producing trials of varying duration that are unsuitable for benchmarking. The practice and free-spelling runs likewise used dynamic stopping and are excluded, and the training run, which comprised 36 trials recorded with a different noise-code set, is excluded as well. In total, the dataset should contain 108 trials of 4.2 seconds each, with 3 repetitions of each of the 36 codes.
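A minimal loading sketch using the standard MOABB access pattern: get_data() downloads the recordings on first use and returns the usual nested dictionary of MNE Raw objects. The exact session and run keys depend on the dataset, so the sketch iterates over them rather than assuming specific names.

    from moabb.datasets import Thielen2015

    # Instantiate the dataset and fetch the first subject (downloads on first use).
    dataset = Thielen2015()
    data = dataset.get_data(subjects=[1])

    # MOABB returns data[subject][session][run] -> mne.io.Raw.
    for subject, sessions in data.items():
        for session, runs in sessions.items():
            for run, raw in runs.items():
                # The Raw info should reflect the recording described above
                # (2048 Hz sampling rate, 64-channel 10-10 montage).
                print(subject, session, run, raw.info["sfreq"], len(raw.ch_names))

For epoched trials and labels, the c-VEP paradigm in moabb.paradigms (e.g. CVEP) follows the usual MOABB paradigm API via paradigm.get_data(dataset, subjects=[...]); check its availability and parameters in your MOABB version.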
References

[1] Thielen, J., Farquhar, J., & Desain, P. W. M. (2023). Broad-Band Visually Evoked Potentials: Re(con)volution in Brain-Computer Interfacing. Version 2. Radboud University (dataset). DOI: https://doi.org/10.34973/1ecz-1232

[2] Thielen, J., Van Den Broek, P., Farquhar, J., & Desain, P. (2015). Broad-Band visually evoked potentials: re(con)volution in brain-computer interfacing. PLOS ONE, 10(7), e0133797. DOI: https://doi.org/10.1371/journal.pone.0133797
Notes

New in version 1.0.0.