
mibci-qcnns's Introduction

FPGA implementation of BCIs using QCNNs

EEGNet-based model architecture

This repo contains the source code of the project "FPGA implementation of BCIs using QCNNs" submitted to the Xilinx Open Hardware Design Competition 2021.

Index

Project submission details

Brief description of the project

In this project, a Brain-Computer Interface (BCI) is explored using Quantized Convolutional Neural Networks (QCNNs) that process electroencephalography (EEG) data in order to recognize a Motor Imagery (MI) task. Concretely, the model is based on EEGNet and trained on the Physionet Motor Movement/Imagery dataset. This is a publicly available dataset composed of 64-channel EEG recordings of 109 subjects that performed the following motor imagery tasks:

  1. Opening and closing the right fist (R).
  2. Opening and closing the left fist (L).
  3. Opening and closing both fists (B).
  4. Opening and closing both feet (F).

Additionally, baseline data was acquired while the subjects were resting, forming a resting class (0). As in other works that used this dataset, 2-, 3- and 4-class classification tasks are explored using the task subsets L/R, L/R/0 and L/R/0/F, respectively.
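
As a quick reference, the mapping between the number of classes and the corresponding task subsets can be sketched in Python as follows; this dictionary is only an illustration of the labels defined above, not code from the repository:

CLASS_SUBSETS = {
    2: ["L", "R"],            # left fist vs. right fist
    3: ["L", "R", "0"],       # plus the resting class
    4: ["L", "R", "0", "F"],  # plus both feet
}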

The Red Pitaya STEMlab 125-10 board is the target platform to run the hardware design of the model. It mounts a Xilinx Zynq-7010 System on Chip (SoC), which combines the low-spec XC7Z010 FPGA fabric with a dual-core ARM Cortex-A9 CPU. Three strategies are followed to reduce the FPGA's resource consumption:

  1. Reducing the model's input data size using the data-reduction methods presented in this work, which consist of shrinking the input's time window, downsampling the EEG recordings and reducing the number of EEG channels.
  2. Using fixed-point datatypes to represent the inputs, parameters, feature maps and outputs in the FPGA design. A 16-bit fixed-point datatype with 8 bits for the integer part is selected, since it offers the best accuracy-resources trade-off. The Vivado HLS ap_fixed.h library is used for this purpose.
  3. Replacing the original ELU activation function with LeakyReLU, which is much cheaper to implement in hardware. Both arithmetic-level choices are sketched below.
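
The following minimal Python sketch (not the repository's HLS code) emulates what these last two choices mean numerically: rounding values to an ap_fixed<16,8> grid and using a LeakyReLU instead of ELU. The LeakyReLU slope used here is an assumption.

import numpy as np

FRAC_BITS = 8                 # ap_fixed<16,8>: 16-bit word, 8 integer bits, 8 fractional bits
SCALE = 1 << FRAC_BITS

def quantize_fixed(x):
    """Round x to the ap_fixed<16,8> grid and saturate to its representable range."""
    q = np.round(np.asarray(x, dtype=float) * SCALE) / SCALE
    return np.clip(q, -(2 ** 7), 2 ** 7 - 1 / SCALE)   # [-128, 127.99609375]

def leaky_relu(x, alpha=0.01):
    """LeakyReLU: x for x >= 0, alpha * x otherwise (alpha = 0.01 is an assumed value)."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, x, alpha * x)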

The hardware has been developed in Vivado HLS from an own-developed C++ implementation of the model. Once the design is synthesized, an IP is exported for its integration using the Vivado IP integrator. This allows the creation of a .bit file, the bitstream, that can be loaded into the FPGA. Taking advantage of the CPU present in the Red Pitaya and its Jupyter-based interface, a Python driver has been created to control the custom QCNN accelerator from the CPU; a generic sketch of this kind of driver is shown below.
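
For illustration only, the sketch below shows one common way a Python driver running on the Zynq's CPU can talk to a memory-mapped HLS accelerator through /dev/mem. The base address, address span and register layout are placeholders (the standard Vivado HLS AXI-Lite control register is assumed at offset 0x00); the actual driver used by this project is the one in usage.ipynb.

import mmap
import os
import struct

ACCEL_BASE = 0x40000000   # placeholder AXI base address assigned in the Vivado block design
ACCEL_SPAN = 0x10000      # placeholder address span of the IP
CTRL_REG = 0x00           # HLS AXI-Lite control register (ap_start = bit 0, ap_done = bit 1)

fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
regs = mmap.mmap(fd, ACCEL_SPAN, offset=ACCEL_BASE)

def write_reg(offset, value):
    regs[offset:offset + 4] = struct.pack("<I", value)

def read_reg(offset):
    return struct.unpack("<I", regs[offset:offset + 4])[0]

# Start the accelerator and wait until it reports completion.
write_reg(CTRL_REG, 0x1)
while not (read_reg(CTRL_REG) & 0x2):
    pass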

Description of the archive

This is the repository tree:

MIBCI-QCNNs
├── csim-launcher.tcl
├── csim-launcher-template.txt
├── directives.tcl
├── img
│   ├── EEGNet.jpg
│   ├── EEGNet.png
│   └── EEGNet.svg
├── implementation.ipynb
├── LICENSE
├── MIBCI-QCNN.cpp
├── MIBCI-QCNN.h
├── MIBCI-QCNN-h-template.txt
├── MIBCI-QCNN-tb.cpp
├── MIBCI-QCNN-tb-template.txt
├── MIBCI-QCNN-template.txt
├── README.md
├── requirements.txt
├── synth-launcher.tcl
├── training.ipynb
├── usage.ipynb
└── utils
    ├── accuracy_test.py
    ├── createnpys.py
    ├── get_data.py
    ├── hlsparser.py
    ├── hls_tools.py
    └── train_tools.py

The text you are reading is in the README.md file and the license that protects the code in the repository is available in LICENSE. Additionally, the header picture is included under the img/ folder.

The core code is under the utils/ folder, where there are six files:

  1. get_data.py. Adapted from this code, it allows the user to download the data and apply the data-reduction methods.
  2. train_tools.py. It implements the training process: normalizing the data, splitting the validation set from the training set and training the global model.
  3. createnpys.py. Writes the validation dataset and the model's parameters for each fold into a folder containing a globally trained model.
  4. hls_tools.py. Contains the Vivado HLS launchers for simulation and synthesis. If used, the functions must be called from a vivado_hls-enabled bash. The main simulation launcher function, launch_csim, splits the global-model simulation per fold, so at least 5 CPU cores must be free when it is launched and the screen Linux command must be available (see the usage sketch after this list). These functions depend on:
    1. The MIBCI-QCNN-tb-template.txt, MIBCI-QCNN-template.txt and MIBCI-QCNN-h-template.txt files, i.e. templated versions of the source and testbench files that enable their control from Python.
    2. The vivado_hls simulation launcher, csim-launcher.tcl, and its template csim-launcher-template.txt.
    3. The vivado_hls synthesis launcher, synth-launcher.tcl, and its directives file directives.tcl.
  5. hlsparser.py. This code is fully authored by Tiago Lascasas dos Santos and is also available here.
  6. accuracy_test.py. Contains a function that computes the validation accuracy of the HLS-simulated version of the model and saves a validation accuracy-per-fold plot.
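
As a minimal usage sketch of the simulation launcher mentioned in item 4 above: only the function name launch_csim comes from this README, while its signature and argument are assumptions, so check hls_tools.py for the real interface.

# Must be run from a shell where vivado_hls is available, with at least 5 free
# CPU cores and the `screen` command installed, since the five folds of the
# global model are simulated in parallel.
from utils import hls_tools

hls_tools.launch_csim("global_model")  # argument (the trained-model folder) is an assumption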

All of the functions contained in the six utils files depend on some popular Python libraries, listed in requirements.txt.

Most of the steps taken to develop the project are available in the three Jupyter notebooks. The whole global-model training process is contained in the training.ipynb notebook, relying on the get_data.py and train_tools.py files. The simulation and synthesis steps of the HLS design are included in implementation.ipynb, which uses the remaining four utils: createnpys.py, hls_tools.py, hlsparser.py and accuracy_test.py. usage.ipynb contains the code to test the FPGA; this is the only code that must be run on the Zynq SoC. Its first cell explains its dependencies.

Finally, the source files (MIBCI-QCNN.cpp and MIBCI-QCNN.h) and the testbench (MIBCI-QCNN-tb.cpp) built for the T=3, ds=2, Nchan=64 and Nclasses=4 configuration have also been added.

Instructions to build and test project

All the steps prior to the Red Pitaya execution have been tested on Ubuntu 20.04.2 LTS.

The notebooks are self-explanatory, so use training.ipynb to download the data and preprocess it according to the desired data-reduction methods. At the end of the notebook you will get a folder called global_model with five subfolders containing the folds' parameters and their training details, roughly as sketched below.
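
Based on the paths referenced later in this README (global_model/fold_i/npyparams and global_model/fold_i/validationDS), the expected layout is roughly the following; the exact fold-folder names and additional files may differ:

global_model
├── fold_0
│   ├── npyparams       # per-fold model parameters exported for the HLS design
│   ├── validationDS    # per-fold validation dataset
│   └── ...             # training details
├── fold_1
├── ...
└── fold_4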

Then, in implementation.ipynb you will find the details to check the results of the implemented model and to synthesize its design. When the synthesis process finishes, you will get a folder MIBCI-QCNN-synth containing an HLS project.

Open the MIBCI-QCNN-synth project from the Vivado HLS GUI and export the design as an IP. To run the design on the FPGA you must integrate this IP in the Zynq processing system with the Vivado IP integrator and then generate the bitstream. This process is described in detail in the first minutes of this FPGA Developer video.

Once the bitstream is generated, upload it to the Red Pitaya using an SFTP client, such as FileZilla. You will want to save it in a Jupyter-accessible folder; in our case we created the /home/jupyter/MIBCI-QCNNs/ folder and uploaded the bitstream and the usage.ipynb notebook there. To test the FPGA performance, usage.ipynb loads each fold's parameters from the folders called global_model/fold_i/npyparams and the validation dataset from global_model/fold_i/validationDS, so you can simply take the global_model directory from your training computer and upload it, keeping the name global_model, at the same level as the usage.ipynb notebook. All the steps are explained inside the notebook.
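
As an illustration of how that per-fold data can be iterated from the Red Pitaya's Jupyter environment, here is a short sketch; the directory names come from this README, while the assumption that each folder holds .npy files (and their individual names) is ours, so refer to usage.ipynb for the exact loading code.

import numpy as np
from pathlib import Path

base = Path("global_model")
for fold_dir in sorted(base.glob("fold_*")):
    # Load every .npy file in the fold's parameter and validation folders.
    params = {p.stem: np.load(p) for p in sorted((fold_dir / "npyparams").glob("*.npy"))}
    val_ds = {p.stem: np.load(p) for p in sorted((fold_dir / "validationDS").glob("*.npy"))}
    # ...write the parameters to the accelerator and run the validation set (see usage.ipynb)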

If you find any issue, post it in the GitHub Issues tab or send us an email.

mibci-qcnns's People

Contributors: dependabot[bot], eneriz-daniel

mibci-qcnns's Issues

Error

Using the Windows operating system and Vivado HLS 2020.1, I encountered a problem while running the HLS simulation, shown in the error screenshots attached to the original issue.

I would like to know how to solve this problem.
