Commit af9fc57

Author: Sylvain Chevallier
Merge branch 'develop'
2 parents: a7f66ea + 68bdeba

24 files changed: +1272 additions, -574 deletions

.github/workflows/docs.yml

Lines changed: 47 additions & 3 deletions
@@ -4,7 +4,9 @@ on:
   push:
     branches: [master, develop]
   pull_request:
-    branches: [master]
+    branches: [master, develop]
+    paths:
+      - "docs/**"

 jobs:
   build_docs:
@@ -13,7 +15,7 @@ jobs:
       fail-fast: true
       matrix:
         os: [ubuntu-18.04]
-        python-version: ["3.6"]
+        python-version: ["3.9"]

     steps:
       - uses: actions/checkout@v2
@@ -91,7 +93,7 @@ jobs:
           path: moabb-ghio
           token: ${{ secrets.MOABB_GHIO }}

-      - name: Deploy on moabb.github.io
+      - name: Deploy on moabb.neurotechx.com
         run: |
           git config --global user.email "[email protected]"
           git config --global user.name "Github Actions"
@@ -102,6 +104,48 @@ jobs:
           git commit -m "GH Actions update of docs ($GITHUB_RUN_ID - $GITHUB_RUN_NUMBER)"
           git push origin master

+  deploy_gh_pages:
+    if: ${{ github.ref == 'refs/heads/develop' }}
+    needs: build_docs
+    runs-on: ${{ matrix.os }}
+    strategy:
+      fail-fast: false
+      matrix:
+        os: [ubuntu-18.04]
+
+    steps:
+      - uses: actions/checkout@v2
+
+      - name: Create local data folder
+        run: |
+          mkdir ~/mne_data
+
+      - name: Cache datasets and docs
+        id: cached-dataset-docs
+        uses: actions/cache@v2
+        with:
+          key: doc-${{ github.head_ref }}-${{ hashFiles('moabb/datasets/**') }}
+          path: |
+            ~/mne_data
+            docs/build
+
+      - name: Checkout gh pages
+        uses: actions/checkout@v2
+        with:
+          ref: gh-pages
+          path: moabb-ghpages
+
+      - name: Deploy on gh-pages
+        run: |
+          git config --global user.email "[email protected]"
+          git config --global user.name "Github Actions"
+          cd ~/work/moabb/moabb/moabb-ghpages
+          rm -Rf docs
+          cp -a ~/work/moabb/moabb/docs/build/html ./docs
+          git add -A
+          git commit -m "GH Actions update of GH pages ($GITHUB_RUN_ID - $GITHUB_RUN_NUMBER)"
+          git push origin gh-pages
+
 # Previous test with moabb GH pages, official docs point to moabb.github.io
 ###########################################################################
 # Since we want the URL to be neurotechx.github.io/docs/ the html output needs to be put in a ./docs subfolder of the publish_dir

.github/workflows/test-devel.yml

Lines changed: 19 additions & 3 deletions
@@ -14,7 +14,7 @@ jobs:
       fail-fast: true
       matrix:
         os: [ubuntu-18.04, windows-latest, macOS-latest]
-        python-version: ["3.6"]
+        python-version: ["3.7"]
     defaults:
       run:
         shell: bash
@@ -26,8 +26,6 @@ jobs:
         with:
          python-version: ${{ matrix.python-version }}

-      - uses: pre-commit/[email protected]
-
       - name: Install Poetry
         uses: snok/[email protected]
         with:
@@ -70,3 +68,21 @@ jobs:
           verbose: true
           directory: /home/runner/work/moabb/moabb
           files: ./.coverage
+
+  lint:
+    name: lint ${{ matrix.os }}, py-${{ matrix.python-version }}
+    runs-on: ${{ matrix.os }}
+    strategy:
+      fail-fast: true
+      matrix:
+        os: [ubuntu-18.04]
+        python-version: ["3.7"]
+    steps:
+      - uses: actions/checkout@v2
+
+      - name: Setup Python
+        uses: actions/setup-python@v2
+        with:
+          python-version: ${{ matrix.python-version }}
+
+      - uses: pre-commit/[email protected]

.github/workflows/test.yml

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ jobs:
       fail-fast: true
       matrix:
         os: [ubuntu-18.04, windows-latest, macOS-latest]
-        python-version: ["3.6", "3.7", "3.8", "3.9"]
+        python-version: ["3.7", "3.8", "3.9"]
     defaults:
       run:
         shell: bash

README.md

Lines changed: 2 additions & 2 deletions
@@ -4,7 +4,7 @@
   <img alt="banner" src="/images/M.png/">
 </p>
 <p align=center>
-  Build a comprehensive benchmark of popular Brain-Computer Interface ([BCI](https://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface)) algorithms applied on an extensive list of freely available EEG datasets.
+  Build a comprehensive benchmark of popular Brain-Computer Interface (BCI) algorithms applied on an extensive list of freely available EEG datasets.
 </p>

 ## Disclaimer
@@ -31,7 +31,7 @@ one of the sections below, or just scroll down to find out more.
 - [Installation](#installation)
 - [Running](#running)
 - [Supported datasets](#supported-datasets)
-- [Who are we?](#who-are-we)
+- [Who are we? n](#who-are-we)
 - [Get in touch](#contact-us)
 - [Documentation](#documentation)
 - [Architecture and main concepts](#architecture-and-main-concepts)

ROADMAP.md

Lines changed: 0 additions & 1 deletion
@@ -15,7 +15,6 @@ are reported in the

 - Backend features for dev: pre-commit using black, isort and prettier, with an updating
   CONTRIBUTING section.
-- Droping requirement for Python 3.6, support for Python 3.7 and 3.8
 - Including support for more datasets
 - Add more classification pipelines

docs/source/README.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@
 ![banner](images/M.png)

 <p align=center>
-  Build a comprehensive benchmark of popular Brain-Computer Interface ([BCI](https://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface)) algorithms applied on an extensive list of freely available EEG datasets.
+  Build a comprehensive benchmark of popular Brain-Computer Interface (BCI) algorithms applied on an extensive list of freely available EEG datasets.
 </p>

 ## Disclaimer

docs/source/whats_new.rst

Lines changed: 27 additions & 4 deletions
@@ -13,7 +13,7 @@ What's new


 Develop branch
----------------
+----------------

 Enhancements
 ~~~~~~~~~~~~
@@ -31,13 +31,35 @@ API changes
 - None


-Version - 0.4.3 (Stable - PyPi)
+Version - 0.4.4 (Stable - PyPi)
 ---------------

 Enhancements
 ~~~~~~~~~~~~

-- Rewrite Lee2019 to add P300 and SSVEP datasets (:gh:`217` by `Pierre Guetchel`_)
+- Add TRCA algorithm for SSVEP (:gh:`238` by `Ludovic Darmet`_)
+
+Bugs
+~~~~
+
+- Remove unused argument from dataset_search (:gh:`243` by `Divyesh Narayanan`_)
+- Remove MNE call to `_fetch_dataset` and use MOABB `_fetch_file` (:gh:`235` by `Jan Sosulski`_)
+- Correct doc formatting (:gh:`232` by `Sylvain Chevallier`_)
+
+API changes
+~~~~~~~~~~~
+
+- Minimum supported Python version is now 3.7
+- MOABB now depends on scikit-learn >= 1.0
+
+
+Version - 0.4.3
+----------------
+
+Enhancements
+~~~~~~~~~~~~
+
+- Rewrite Lee2019 to add P300 and SSVEP datasets (:gh:`217` by `Pierre Guetschel`_)

 Bugs
 ~~~~
@@ -107,7 +129,7 @@ Enhancements
 - Broadening subject_list type for :func:`moabb.datasets.BaseDataset` (:gh:`198` by `Sylvain Chevallier`_)
 - Adding this what's new (:gh:`200` by `Sylvain Chevallier`_)
 - Improving cache usage and save computation time in CI (:gh:`200` by `Sylvain Chevallier`_)
-- Rewrite Lee2019 to add P300 and SSVEP datasets (:gh:`217` by `Pierre Guetchel`_)
+- Rewrite Lee2019 to add P300 and SSVEP datasets (:gh:`217` by `Pierre Guetschel`_)


 Bugs
@@ -221,3 +243,4 @@ API changes
 .. _Robin Schirrmeister: https://github.com/robintibor
 .. _Jan Sosulski: https://github.com/jsosulski
 .. _Pierre Guetschel: https://github.com/PierreGtch
+.. _Ludovic Darmet: https://github.com/ludovicdmt

examples/plot_cross_subject_ssvep.py

Lines changed: 21 additions & 5 deletions
@@ -28,7 +28,7 @@
 from moabb.datasets import SSVEPExo
 from moabb.evaluations import CrossSubjectEvaluation
 from moabb.paradigms import SSVEP, FilterBankSSVEP
-from moabb.pipelines import SSVEP_CCA, ExtendedSSVEPSignal
+from moabb.pipelines import SSVEP_CCA, SSVEP_TRCA, ExtendedSSVEPSignal


 warnings.simplefilter(action="ignore", category=FutureWarning)
@@ -55,15 +55,17 @@
 # Choose paradigm
 # ---------------
 #
-# We define the paradigms (SSVEP and FilterBankSSVEP) and use the dataset
+# We define the paradigms (SSVEP, SSSVEP_TRCA and FilterBankSSVEP) and use the dataset
 # SSVEPExo. The SSVEP paradigm applied a bandpass filter (10-25 Hz) on
-# the data while the FilterBankSSVEP paradigm uses as many bandpass filters as
+# the data, SSVEP_TRCA applied a bandpass filter (1-110 Hz) which correspond to almost
+# no filtering, while the FilterBankSSVEP paradigm uses as many bandpass filters as
 # there are stimulation frequencies (here 2). For each stimulation frequency
 # the EEG is filtered with a 1 Hz-wide bandpass filter centered on the
 # frequency. This results in ``n_classes`` copies of the signal, filtered for each
 # class, as used in filterbank motor imagery paradigms.

 paradigm = SSVEP(fmin=10, fmax=25, n_classes=3)
+paradigm_TRCA = SSVEP(fmin=1, fmax=110, n_classes=3)
 paradigm_fb = FilterBankSSVEP(filters=None, n_classes=3)

 ###############################################################################
@@ -83,6 +85,7 @@
 # covariance matrices from the signal filtered around the considered
 # frequency and applying a logistic regression in the tangent plane.
 # The second pipeline relies on the above defined CCA classifier.
+# The third pipeline relies on TRCA algorithm.

 pipelines_fb = {}
 pipelines_fb["RG+LogReg"] = make_pipeline(
@@ -95,6 +98,11 @@
 pipelines = {}
 pipelines["CCA"] = make_pipeline(SSVEP_CCA(interval=interval, freqs=freqs, n_harmonics=3))

+pipelines_TRCA = {}
+pipelines_TRCA["TRCA"] = make_pipeline(
+    SSVEP_TRCA(interval=interval, freqs=freqs, n_fbands=5)
+)
+
 ##############################################################################
 # Evaluation
 # ----------
@@ -123,9 +131,17 @@
 results_fb = evaluation_fb.process(pipelines_fb)

 ###############################################################################
-# After processing the two, we simply concatenate the results.
+# TRCA processing also relies on filter bank that is automatically designed.
+
+evaluation_TRCA = CrossSubjectEvaluation(
+    paradigm=paradigm_TRCA, datasets=dataset, overwrite=overwrite
+)
+results_TRCA = evaluation_TRCA.process(pipelines_TRCA)
+
+###############################################################################
+# After processing the three, we simply concatenate the results.

-results = pd.concat([results, results_fb])
+results = pd.concat([results, results_fb, results_TRCA])

 ##############################################################################
 # Plot Results
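
The updated narration in this example states that FilterBankSSVEP applies one 1 Hz-wide bandpass filter per stimulation frequency, producing ``n_classes`` filtered copies of the signal, and that TRCA likewise relies on an automatically designed filter bank. As a rough, standalone illustration of that filter-bank idea (SciPy on synthetic data; the sampling rate, stimulation frequencies and channel count below are invented, and this is not MOABB's internal implementation):

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    rng = np.random.default_rng(42)
    sfreq = 256.0                    # sampling rate in Hz (made up for this demo)
    stim_freqs = [13.0, 17.0, 21.0]  # example SSVEP stimulation frequencies
    eeg = rng.standard_normal((8, int(4 * sfreq)))  # 8 channels, 4 s of synthetic "EEG"

    def bandpass(x, lo, hi, fs, order=4):
        """Zero-phase Butterworth bandpass between lo and hi (in Hz)."""
        sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
        return sosfiltfilt(sos, x, axis=-1)

    # One 1 Hz-wide band per stimulation frequency -> one filtered copy per class,
    # stacked as (n_classes, n_channels, n_times).
    filter_bank = np.stack([bandpass(eeg, f - 0.5, f + 0.5, sfreq) for f in stim_freqs])
    print(filter_bank.shape)  # (3, 8, 1024)

MOABB's FilterBankSSVEP paradigm and SSVEP_TRCA pipeline handle this filtering internally; the sketch only makes the "one narrow band per class" idea concrete.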

moabb/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
 # flake8: noqa
-__version__ = "0.4.3"
+__version__ = "0.4.4"

 from moabb.utils import set_log_level

moabb/analysis/results.py

Lines changed: 1 addition & 1 deletion
@@ -151,7 +151,7 @@ def to_list(res):
                     f"Additional columns: {self.additional_columns} "
                     f"were specified in the evaluation, but results"
                     f" contain only these keys: {d.keys()}."
-                )
+                ) from None
             dset["data"][-1, :] = np.asarray(
                 [d["score"], d["time"], d["n_samples"], *add_cols]
             )
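
The one-line change above appends ``from None`` to the re-raised exception. In Python this suppresses implicit exception chaining, so the traceback shows only the informative message about missing columns instead of also printing the lower-level exception that triggered it. A generic, self-contained sketch of the behaviour (the function and dictionary below are invented for illustration and are not MOABB code):

    def read_score(result: dict) -> float:
        try:
            return result["score"]
        except KeyError:
            # `from None` hides the KeyError from the traceback (PEP 409),
            # keeping only the message that is useful to the user.
            raise ValueError(
                f"results contain only these keys: {list(result.keys())}."
            ) from None

    try:
        read_score({"time": 1.2})
    except ValueError as err:
        print(err)                       # results contain only these keys: ['time'].
        print(err.__suppress_context__)  # True -> the chained context is not displayed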
