Changes from all commits
Commits
30 commits
80a3ed8
add datatype argument to cli
sirexeclp Jan 20, 2020
2fcd53b
fix filterfreq for non eeg (hack)
sirexeclp Jan 20, 2020
602c563
experiment run script w/ multiprocessing
sirexeclp Jan 27, 2020
d56e0ba
add constants for ipc
sirexeclp Jan 30, 2020
f759d88
add selection for source_id+remove marker stream
sirexeclp Jan 30, 2020
61a426b
fix ipc and stuff
sirexeclp Jan 30, 2020
fc9ddf9
fix ipc
sirexeclp Jan 30, 2020
bf35b4e
add raspberrypi setup script
sirexeclp Feb 29, 2020
b692de7
+ clone repo w/ experiment code
sirexeclp Feb 29, 2020
f1d03bc
+ dont clone repo for now
sirexeclp Feb 29, 2020
95f6059
refactor to requirements.txt
sirexeclp Apr 10, 2020
21e95de
experiment instructions -- first draft
sirexeclp Apr 10, 2020
2ecd855
add marker generation
sirexeclp Apr 10, 2020
89bc6fc
refactor record method
sirexeclp Apr 10, 2020
568efb3
use more enums
sirexeclp Apr 10, 2020
057c423
remove record method from cli
sirexeclp Apr 10, 2020
e98e06b
select which muse to view, by mac
sirexeclp Apr 10, 2020
fad7991
add run_experiment.py to cli as record method
sirexeclp Apr 10, 2020
d65ce53
fix enum refactoring
sirexeclp Apr 10, 2020
9b8b051
format file
sirexeclp Apr 10, 2020
6bd45d5
finalize readme
sirexeclp Apr 10, 2020
f33c294
fix typos
sirexeclp Apr 10, 2020
1d80b67
add info about muselsl list, zoom in viewer and fix typos
sirexeclp Apr 15, 2020
40e4139
hardware description
sirexeclp Apr 16, 2020
f4328d3
typos
Apr 16, 2020
e0e32e3
Merge pull request #1 from Hollamak/feature/experiment
sirexeclp Apr 16, 2020
e9ca7cc
implement csv stream replay
sirexeclp Nov 18, 2021
783a7dc
add delay and more output
sirexeclp Nov 18, 2021
d3a434f
Update replay.py
Hollamak Nov 24, 2021
ed09ca8
Merge pull request #2 from Hollamak/feature/experiment
sirexeclp Nov 24, 2021
202 changes: 73 additions & 129 deletions README.md
@@ -1,174 +1,118 @@

[![DOI](https://zenodo.org/badge/80209610.svg)](https://zenodo.org/badge/latestdoi/80209610)

# Muse LSL

A Python package for streaming, visualizing, and recording EEG data from the Muse 2016 headband.

![Blinks](blinks.png)

## Requirements

The code relies on [pygatt](https://github.com/peplin/pygatt) or [BlueMuse](https://github.com/kowalej/BlueMuse/tree/master/Dist) for BLE communication and works differently on different operating systems.

- Windows: On Windows 10, we recommend installing [BlueMuse](https://github.com/kowalej/BlueMuse/tree/master/Dist) and using its GUI to discover and connect to Muse devices. Alternatively, if you have a BLED112 dongle you can try Muse LSL's bgapi backend (`muselsl stream --backend bgapi`).
- Mac: On Mac, a **BLED112 dongle is required**. The bgapi backend is required and will be used by default when running Muse LSL from the command line
- Linux: No dongle required. However, you may need to run a command to enable root-level access to bluetooth hardware (see [Common Issues](#linux)). The pygatt backend is required and will be used by default from the command line. and make sure to read the

**Compatible with Python 2.7 and Python 3.x**

**Only compatible with Muse 2 and Muse 2016**

_Note: if you run into any issues, first check out out [Common Issues](#common-issues) and then the [Issues](https://github.com/alexandrebarachant/muse-lsl/issues) section of this repository_

## Getting Started

### Installation

Install Muse LSL with pip

pip install muselsl

### Setting Up a Stream

On Windows 10, we recommend using the [BlueMuse](https://github.com/kowalej/BlueMuse/tree/master/Dist) GUI to set up an LSL stream. On Mac and Linux, the easiest way to get Muse data is to use Muse LSL directly from the command line. Use the `-h` flag to get a comprehensive list of all commands and options.

To print a list of available muses:

$ muselsl list

To begin an LSL stream from the first available Muse:
# Muse LSL - Experiment

$ muselsl stream
This repo is forked from [https://github.com/alexandrebarachant/muse-lsl](https://github.com/alexandrebarachant/muse-lsl) [1].

To connect to a specific Muse you can pass the name of the device as an argument. Device names can be found on the inside of the left earpiece (e.g. Muse-41D2):
The original code was modified and extended to allow streaming and recording from two or more Muses simultaneously.
We used two 2019 Muse 2 headsets for development and testing.

$ muselsl stream --name YOUR_DEVICE_NAME
## Installation

You can also directly pass the MAC address of your Muse. This provides the benefit of bypassing the device discovery step and can make connecting to devices quicker and more reliable:
Install this repository as a pip package with:

$ muselsl stream --address YOUR_DEVICE_ADDRESS
pip3 install .

### Working with Streaming Data
or just install the dependencies using:

Once an LSL stream is created, you have access to the following commands.
pip3 install -r requirements.txt

*Note: the process running the `stream` command must be kept alive in order to maintain the LSL stream. These following commands should be run in another terminal or second process*
If you *did not* install the repo as a pip package, replace `muselsl` with
`python3 -m muselsl` in all of the following examples.
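
For example, the `muselsl list` command used later in this README would then be run as:

```
python3 -m muselsl list
```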

To view data:
## Hardware Setup

$ muselsl view
We tested different hardware setups for streaming from multiple Muses.
The most reliable setup, which allowed streaming all data streams from two Muses, was the following:

If the visualization freezes or is laggy, you can also try the alternate version 2 of the viewer. *Note: this will require the additional [vispy](https://github.com/vispy/vispy) and [mne](https://github.com/mne-tools/mne-python) dependencies*
- Two Muses
- Two laptops (we used a MacBook with a `BLED112` dongle and a Linux machine with integrated Bluetooth)
- A wireless (or wired) network that allows peer-to-peer traffic (the HPI network might not work; we used a Wi-Fi hotspot on an Android phone)

$ muselsl view --version 2
One of the laptops acts as a *stream source*, while the other *records* its own stream and the stream from the stream source, which is sent over
the network.

To record EEG data into a CSV:
## Stream Source

$ muselsl record --duration 60
The stream source is started before the recorder and must keep running until after the recorder has stopped.
If the stream source crashes, it should be restarted; the recorder can only recover if the stream source is restarted.
Otherwise, the recorder will crash when it attempts to stop the recording.

*Note: this command will also save data from any LSL stream containing 'Markers' data, such as from the stimulus presentation scripts in [EEG Notebooks](https://github.com/neurotechx/eeg-notebooks)*
The stream source is started with the following command:

Alternatively, you can record data directly without using LSL through the following command:
muselsl stream -pcg -a <MAC-ADDRESS-OF-MUSE>

$ muselsl record_direct --duration 60
Where `<MAC-ADDRESS-OF-MUSE>` must be replaced with the MAC address of one of the Muses.
To find the MAC addresses of your Muses, turn them on and run

_Note: direct recording does not allow 'Markers' data to be recorded_
muselsl list

## Running Experiments
to get a list of the available headsets.
The last four characters (two bytes) of the MAC address are printed on each Muse headset.

Muse LSL was designed so that the Muse could be used to run a number of classic EEG experiments, including the [P300 event-related potential](http://alexandre.barachant.org/blog/2017/02/05/P300-with-muse.html) and the SSVEP and SSAEP evoked potentials.
In our setup, one stream source is started on the *stream source* machine, and a second one is started
for the other Muse on the *recording* machine.
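
As a concrete sketch of this setup (the MAC addresses below are placeholders, not real devices):

```
# On the stream-source laptop (placeholder MAC address):
muselsl stream -pcg -a 00:55:DA:B0:AA:AA

# On the recording laptop, streaming the second Muse (placeholder MAC address):
muselsl stream -pcg -a 00:55:DA:B0:BB:BB
```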

The code to perform these experiments is still available, but is now maintained in the [EEG Notebooks](https://github.com/neurotechx/eeg-notebooks) repository by the [NeuroTechX](https://neurotechx.com) community.

## Usage as a Library
## View

If you want to integrate Muse LSL into your own Python project, you can import and use its functions as you would any Python library. Examples are available in the `examples` folder:
Before starting the recording, make sure all streams are coming through and are of decent quality.
This can be done using the viewer.

```Python
from muselsl import stream, list_muses
muselsl view -v 2 -a <MAC-ADDRESS-OF-MUSE> -t <DATA-TYPE>

muses = list_muses()
stream(muses[0]['address'])
Where `<DATA-TYPE>` can be any of:
- PPG
- EEG
- ACC
- GYRO

# Note: Streaming is synchronous, so code here will not execute until after the stream has been closed
print('Stream has ended')
```
You can zoom in and out by holding your left mouse button and moving your mouse up or down.

## Alternate Data Types
> It is also a good idea to at least monitor the EEG streams of all participants during the
> experiment, to be able to detect artifacts early and fix them (e.g. loose electrodes, muscle noise, etc.).
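
A minimal pre-recording check could step through every stream type in turn (a sketch; the MAC address is a placeholder, and viewer version 2 needs the extra vispy and mne dependencies mentioned in the upstream README). Close each viewer window to advance to the next type:

```
for t in EEG PPG ACC GYRO; do
    muselsl view -v 2 -a 00:55:DA:B0:AA:AA -t "$t"
done
```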

In addition to EEG, the Muse also provides data from an accelerometer, gyroscope, and, in the case of the Muse 2, a photoplethysmography (PPG) sensor. These data types can be enabled via command line arguments or by passing the correct parameters to the `stream` function. Once enabled, PPG, accelerometer, and gyroscope data will streamed in their own separate LSL streams named "PPG", "ACC", and "GYRO", respectively.
## Recorder

To stream data from all sensors in a Muse 2 from the command line:
When the stream source is running, the recorder script can be started with the following command:

muselsl stream --ppg --acc --gyro
muselsl record -d <data-path> -n <number-of-participants> [-i <trial-id>]

As a library function:
The script will ask for the `MUSE-ID` of each participant, which is printed on each Muse device.
The `MUSE-ID` is the last two bytes (four characters) of the MAC address.

```Python
from muselsl import stream, list_muses
After all IDs are set, the recording starts automatically.
During the recording, timestamps can be saved by typing any text as a label and pressing enter.
These timestamps can be used to mark when different parts of the experiment begin or when the experiment has ended.

muses = list_muses()
stream(muses[0]['address'], ppg_enabled=True, acc_enabled=True, gyro_enabled=True)
```
The recording will continue until it is stopped by pressing `CTRL`+`C`.

To record data from an alternate data source:
All recordings are saved in the specified directory.
For each trial, a new subdirectory is created, named after the `trial-id` or, if none is provided, the current timestamp; all data is stored there.
This directory contains the recorded markers, stored in `markers.csv`, and one subdirectory per participant.

muselsl record --type ACC
Each participant directory contains the following files:

*Note: The record process will only record from one data type at a time. However, multiple terminals or processes can be used to record from multiple data types simultaneously*

## What is LSL?
| File | Description |
|------|-------------|
| ACC.csv | Data from the accelerometer |
| EEG.csv | EEG data |
| GYRO.csv | Gyroscope data |
| PPG.csv | Heart rate measured with PPG |
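
As an illustration, a trial recorded with `muselsl record -d recordings -n 2 -i pilot-01` might be laid out roughly like this (a sketch based on the description above; the participant directory names, assumed here to be the MUSE-IDs, may differ):

```
recordings/
└── pilot-01/
    ├── markers.csv
    ├── 1A2B/
    │   ├── ACC.csv
    │   ├── EEG.csv
    │   ├── GYRO.csv
    │   └── PPG.csv
    └── 3C4D/
        ├── ACC.csv
        ├── EEG.csv
        ├── GYRO.csv
        └── PPG.csv
```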

Lab Streaming Layer or LSL is a system designed to unify the collection of time series data for research experiments. It has become standard in the field of EEG-based brain-computer interfaces for its ability to make seperate streams of data available on a network with time synchronization and near real-time access. For more information, check out this [lecture from Modern Brain-Computer Interface Design](https://www.youtube.com/watch?v=Y1at7yrcFW0) or the [LSL repository](https://github.com/sccn/labstreaminglayer)
## Other Setups

## Common Issues
We tried using Raspberry Pis as the stream source; however,
even the newest model 4 with a `BLED112 dongle`
was not able to stream reliably for more than 10 minutes.
It might still be possible to use Raspberry Pis if not all data
streams are enabled (e.g. if only recording PPG).
See [this GitHub issue](https://github.com/alexandrebarachant/muse-lsl/issues/55) for more information.
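
For example, a reduced setup that only streams PPG might look like this (assuming `-p` is the short form of `--ppg`, as suggested by the `-pcg` example above):

```
muselsl stream -p -a <MAC-ADDRESS-OF-MUSE>
```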

### Mac and Windows

1. Connection issues with BLED112 dongle:

- You may need to use the `--interface` argument to provide the appropriate COM port value for the BLED112 device. The default value is COM9. To setup or view the device's COM port go to your OS's system settings

### Linux

1. `pygatt.exceptions.BLEError: Unexpected error when scanning: Set scan parameters failed: Operation not permitted` (Linux)

- This is an issue with pygatt requiring root privileges to run a scan. Make sure you [have `libcap` installed](https://askubuntu.com/questions/347788/how-can-i-install-libpcap-header-files-on-ubuntu-12-04) and run `` sudo setcap 'cap_net_raw,cap_net_admin+eip' `which hcitool` ``

2. `pygatt.exceptions.BLEError: No characteristic found matching 273e0003-4c4d-454d-96be-f03bac821358` (Linux)

- There is a problem with the most recent version of pygatt. Work around this by downgrading to 3.1.1: `pip install pygatt==3.1.1`

3. `pygatt.exceptions.BLEError: No BLE adapter found` (Linux)

- Make sure your computer's Bluetooth is turned on.

4. `pygatt.exceptions.BLEError: Unexpected error when scanning: Set scan parameters failed: Connection timed out` (Linux)

- This seems to be due to a OS-level Bluetooth crash. Try turning your computer's bluetooth off and on again

5. `'RuntimeError: could not create stream outlet'` (Linux)

- This appears to be due to Linux-specific issues with the newest version of pylsl. Ensure that you have pylsl 1.10.5 installed in the environment in which you are trying to run Muse LSL

## Citing muse-lsl

```
@misc{muse-lsl,
author = {Alexandre Barachant and
Dano Morrison and
Hubert Banville and
Jason Kowaleski and
Uri Shaked and
Sylvain Chevallier and
Juan Jesús Torre Tresols},
title = {muse-lsl},
month = may,
year = 2019,
doi = {10.5281/zenodo.3228861},
url = {https://doi.org/10.5281/zenodo.3228861}
}
```

[1]
> Alexandre Barachant, Dano Morrison, Hubert Banville, Jason Kowaleski, Uri Shaked, Sylvain Chevallier, & Juan Jesús Torre Tresols. (2019, May 25). muse-lsl (Version v2.0.2). Zenodo. http://doi.org/10.5281/zenodo.3228861
[![DOI](https://zenodo.org/badge/80209610.svg)](https://zenodo.org/badge/latestdoi/80209610)
3 changes: 2 additions & 1 deletion muselsl/__init__.py
Original file line number Diff line number Diff line change
@@ -1,4 +1,5 @@
from .stream import stream, list_muses
from .record import record, record_direct
from .record import record
from .view import view
from .replay import replay
__version__ = "1.0.0"
18 changes: 7 additions & 11 deletions muselsl/__main__.py
Original file line number Diff line number Diff line change
@@ -29,18 +29,14 @@ def main():
-f --figure Window size.
-v --version Viewer version (1 or 2) - 1 is the default stable version, 2 is in development (and takes no arguments).
-b --backend Matplotlib backend to use. Default: TkAgg

record Record EEG data from an LSL stream.
-d --duration Duration of the recording in seconds.
record Record an experiment.
-d --directory Root-directory to store recorded data in.
-n --participants The number of participants in this run.
-i --trial-id The id of this trial. Data is stored in a subdirectory with this name.
If no id is provided, the current timestamp is used instead.

replay Replay data from a recorded CSV file into a new LSL stream.
-f --filename Name of the recording file.
-dj --dejitter Whether to apply dejitter correction to timestamps.
-t --type Data type to record from. Either EEG, PPG, ACC, or GYRO

record_direct Record data directly from Muse headset (no LSL).
-a --address Device MAC address.
-n --name Device name (e.g. Muse-41D2).
-b --backend BLE backend to use. can be auto, bluemuse, gatt or bgapi.
-i --interface The interface to use, 'hci0' for gatt or a com port for bgapi.
''')

parser.add_argument('command', help='Command to run.')
77 changes: 35 additions & 42 deletions muselsl/cli.py
Original file line number Diff line number Diff line change
@@ -51,53 +51,31 @@ def stream(self):
args.interface, args.name, args.ppg, args.acc, args.gyro, args.disable_eeg)

def record(self):
parser = argparse.ArgumentParser(
description='Record data from an LSL stream.')
parser.add_argument("-d", "--duration",
dest="duration", type=int, default=60,
help="Duration of the recording in seconds.")
parser.add_argument("-f", "--filename",
dest="filename", type=str, default=None,
help="Name of the recording file.")
parser.add_argument("-dj", "--dejitter",
dest="dejitter", type=bool, default=True,
help="Whether to apply dejitter correction to timestamps.")
parser.add_argument("-t", "--type", type=str, default="EEG",
help="Data type to record from. Either EEG, PPG, ACC, or GYRO.")
from .run_experiment import ExperimentalRun

args = parser.parse_args(sys.argv[2:])
from . import record
record(args.duration, args.filename, args.dejitter, args.type)

def record_direct(self):
parser = argparse.ArgumentParser(
description='Record directly from Muse without LSL.')
parser.add_argument("-a", "--address",
dest="address", type=str, default=None,
help="Device MAC address.")
parser.add_argument("-n", "--name",
dest="name", type=str, default=None,
help="Name of the device.")
parser.add_argument("-b", "--backend",
dest="backend", type=str, default="auto",
help="BLE backend to use. Can be auto, bluemuse, gatt or bgapi.")
parser.add_argument("-i", "--interface",
dest="interface", type=str, default=None,
help="The interface to use, 'hci0' for gatt or a com port for bgapi.")
parser.add_argument("-d", "--duration",
dest="duration", type=int, default=60,
help="Duration of the recording in seconds.")
parser.add_argument("-f", "--filename",
dest="filename", type=str, default=None,
help="Name of the recording file.")
description='Start recording of an experiment.')
parser.add_argument("-d", "--directory",
dest="data_root", type=str, required=True,
help="Root-directory to store recorded data in.")
parser.add_argument("-n", "--participants",
dest="num_participants", type=int, required=True,
help="The number of participants in this run.")
parser.add_argument("-i", "--trial-id",
dest="trial_id", type=str, default=None,
help="The id of this trial. Data is stored in a subdirectory with this name."
+ "If no id is provided, the current timestamp is used instead.")
args = parser.parse_args(sys.argv[2:])
from . import record_direct
record_direct(args.address, args.backend,
args.interface, args.name, args.duration, args.filename)

experiment = ExperimentalRun(data_root=args.data_root,
num_participants=args.num_participants, trial_id=args.trial_id)
experiment.start()

def view(self):
parser = argparse.ArgumentParser(
description='View EEG data from an LSL stream.')
description='View data from an LSL stream.')
parser.add_argument("-t", "--type", type=str, default="EEG", dest="data_type",
help="Data type to view. Either EEG, PPG, ACC, or GYRO.")
parser.add_argument("-w", "--window",
dest="window", type=float, default=5.,
help="Window length to display in seconds.")
@@ -116,7 +94,22 @@ def view(self):
parser.add_argument("-b", "--backend",
dest="backend", type=str, default='TkAgg',
help="Matplotlib backend to use. Default: %(default)s")
parser.add_argument("-a", "--address",
dest="source_id", type=str, default=None,
help="Device MAC address.")
args = parser.parse_args(sys.argv[2:])
from . import view
view(args.window, args.scale, args.refresh,
args.figure, args.version, args.backend)
args.figure, args.version, args.backend, args.data_type, args.source_id)

def replay(self):
parser = argparse.ArgumentParser(
description='Replay data from a recorded CSV file into a new LSL stream.')
parser.add_argument("-f", "--filename",
dest="filename", type=str, default=None,
help="Name of the recording file.")
# parser.add_argument("-t", "--type", type=str, default="EEG",
# help="Data type to record from. Either EEG, PPG, ACC, or GYRO.")
args = parser.parse_args(sys.argv[2:])
from . import replay
replay(args.filename)
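
Based on the `replay` subcommand added above, a minimal invocation would look like this (the file path is only an illustration):

```
muselsl replay -f recordings/pilot-01/1A2B/EEG.csv
```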