Auditory-eeg-challenge-2023-code

This is the codebase for the 2023 ICASSP Auditory EEG challenge. This codebase contains baseline models and code to preprocess stimuli for both tasks.

Prerequisites

Python >= 3.6

General setup

Steps to get a working setup:

1. Clone this repository and install the dependencies from requirements.txt

# Clone this repository
git clone https://github.com/exporl/auditory-eeg-challenge-2023-code

# Go to the root folder
cd auditory-eeg-challenge-2023-code

# Optional: install a virtual environment
python3 -m venv venv # Optional
source venv/bin/activate # Optional

# Install the dependencies
python3 -m pip install -r requirements.txt

2. Download the data

You will need a password, which you will receive when you register. The download location contains multiple folders (and zip files containing the same data as their corresponding folders). For bulk downloading, we recommend the zip files: OneDrive has a bug when it has to zip folders >= 4 GB on the fly, resulting in "corrupt" zip archives.

  1. split_data(.zip) contains data that has already been preprocessed, split and normalized, ready for model training/evaluation. If you want to get started quickly, you can opt to download only this folder/zipfile.

  2. preprocessed_eeg(.zip) and preprocessed_stimuli(.zip) contain preprocessed EEG and stimulus files (envelope and mel features), respectively. At this stage, the data has not yet been split into sets or normalized. To go from here to the data in split_data, run the split_and_normalize.py script (task1_match_mismatch/create_data/split_and_normalize.py for task 1 and task2_regression/create_data/split_and_normalize.py for task 2).

  3. raw_eeg(.zip) and stimuli(.zip) contain the raw EEG and stimulus files. To process the stimulus files, run speech_features.py (task1_match_mismatch/create_data/speech_features.py for task 1 and task2_regression/create_data/speech_features.py for task 2); the processed stimulus files will be stored in the preprocessed_stimuli folder. Currently, no preprocessing code is provided for the EEG, so you will have to write your own implementation or use the precomputed preprocessed_eeg folder.

Make sure to download/unzip these folders into the same parent folder (e.g. challenge_folder_task1) for each task. Note that you can use the same preprocessed (and split) dataset for both task 1 and task 2, but this is not required.
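As a quick sanity check before training, you can verify that the expected data folders are present in your challenge folder. The helper below is a hypothetical sketch (not part of this codebase); the folder names are those listed above:

```python
import os

# Folder names as listed in the download instructions above
EXPECTED_FOLDERS = [
    "split_data",
    "preprocessed_eeg",
    "preprocessed_stimuli",
    "raw_eeg",
    "stimuli",
]

def check_dataset_folder(dataset_folder):
    """Return a mapping of expected folder name -> whether it exists."""
    return {
        name: os.path.isdir(os.path.join(dataset_folder, name))
        for name in EXPECTED_FOLDERS
    }
```

Only split_data is strictly needed to run the baselines, so missing raw_eeg/stimuli folders are fine if you downloaded only the preprocessed data.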

(See the data_diagram figure in the repository for an overview of how these folders relate.)

3. Adjust the config.json accordingly

Each task has a config.json defining the folder names and structure of the data (i.e. task1_match_mismatch/util/config.json and task2_regression/util/config.json). In each config.json, change dataset_folder from null to the absolute path of the folder containing all the data (the challenge_folder_task1 from the previous step).
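If you prefer to script this step, here is a minimal sketch, assuming config.json is a flat JSON object with a dataset_folder key as described above (the helper name is ours, not part of the codebase):

```python
import json

def set_dataset_folder(config_path, dataset_folder):
    """Overwrite the dataset_folder entry in a config.json file."""
    with open(config_path) as f:
        config = json.load(f)
    config["dataset_folder"] = dataset_folder  # null by default
    with open(config_path, "w") as f:
        json.dump(config, f, indent=4)
```

For example: `set_dataset_folder("task1_match_mismatch/util/config.json", "/absolute/path/to/challenge_folder_task1")`.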

OK, you should be all set up now!

Running the tasks

Each task comes with ready-to-go experiment files that provide a baseline and acquaint you with the problem. The experiment files live in the experiments subfolder of each task. The training log, best model and evaluation results will be stored in a folder called results_{experiment_name}.

Task1: Match-mismatch

By running task1_match_mismatch/experiments/dilated_convolutional_model.py, you can train the dilated convolutional model introduced by Accou et al. (2021a) and (2021b).
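For intuition: a dilated convolution inserts gaps of dilation - 1 samples between kernel taps, so stacked layers cover a long EEG context with few parameters. The sketch below is purely illustrative NumPy, not the baseline's implementation:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Valid' 1-D convolution (cross-correlation style) with a dilated kernel."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # receptive field of this single layer
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])
```

With dilation=2, a two-tap kernel combines samples two steps apart, i.e. x[i] with x[i+2]; stacking layers with growing dilation widens the context window exponentially.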

Other models you might find interesting are de Cheveigné et al. (2021), Monesi et al. (2020), Monesi et al. (2021), …

Task2: Regression (reconstructing envelope from EEG)

By running task2_regression/experiments/linear_baseline.py, you can train and evaluate a simple linear baseline model with Pearson correlation as a loss function, similar to the baseline model used in Accou et al (2022).
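For reference, "Pearson correlation as a loss function" means maximizing the correlation between the reconstructed and true envelopes, i.e. minimizing its negative. A NumPy sketch (not the repository's exact implementation):

```python
import numpy as np

def pearson_loss(y_true, y_pred, eps=1e-8):
    """Negative Pearson correlation: perfect reconstruction gives -1."""
    yt = y_true - y_true.mean()
    yp = y_pred - y_pred.mean()
    denom = np.sqrt((yt ** 2).sum() * (yp ** 2).sum()) + eps
    return -(yt * yp).sum() / denom
```

Unlike mean squared error, this loss is invariant to the scale and offset of the reconstruction, which matches how EEG decoding models are typically evaluated.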

By running task2_regression/experiments/vlaai.py, you can train/evaluate the VLAAI model as proposed by Accou et al. (2022). A pre-trained model is available on the VLAAI GitHub page.

Other models you might find interesting are: Thornton et al. (2022),...

Previous version

If you are still using a previous version of this example code, we recommend updating to this version, as the test-set code and data will be made compatible with this version. If you would still like access to the previous version, you can find it here
