Mixed-Precision Neural Networks for Brain-Computer Interface Applications

Revision as of 11:02, 29 October 2021

Description

Brain-computer interfaces (BCIs) are devices and applications that seek to enable direct communication between a user's brain and a computer, e.g., by means of electroencephalography (EEG). One intensively researched application is motor imagery, which aims to recognize movements imagined by the user from their brain signals. Once fully functional, such a system would be of immeasurable value in the design of, e.g., motorized prostheses.

Researchers at IIS have performed in-depth studies on bringing these capabilities to resource-constrained edge devices such as microcontrollers, setting the state of the art in efficiency for low-power motor imagery systems. However, the deployed networks have so far all run at a comparatively high numerical precision of 8 bits.

Recent research has shown that neural networks' operands can be aggressively quantized, i.e., represented with as few as 2 bits, with only minor accuracy drops. This has the advantage of decreasing model size (as each parameter requires less storage) and, with appropriate hardware support, decreasing inference latency at comparable power consumption, leading to significantly lower energy consumption per inference. Thanks to these theoretical insights, combined with a new generation of MCU cores developed at IIS and the University of Bologna (namely, RISC-V cores of the PULP family supporting the XpulpNN ISA extension - see references), there is potential for improving the efficiency of these applications even further - this is where you come in!
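As a concrete illustration, the sketch below applies symmetric uniform (fake) quantization to a random weight tensor and measures the mean reconstruction error at 8, 4, and 2 bits. It is a generic NumPy toy, not the quantization scheme actually used in QuantLab or the XpulpNN kernels:

```python
import numpy as np

def fake_quantize(x, n_bits):
    # Symmetric uniform quantization: snap to an integer grid, then rescale.
    q_max = 2 ** (n_bits - 1) - 1        # largest positive integer code
    scale = np.abs(x).max() / q_max      # one scale for the whole tensor
    q = np.clip(np.round(x / scale), -q_max - 1, q_max)
    return q * scale                     # dequantized ("fake-quantized") values

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)
for bits in (8, 4, 2):
    err = np.abs(fake_quantize(w, bits) - w).mean()
    print(f"{bits}-bit mean reconstruction error: {err:.4f}")
```

In a real network it is the drop in task accuracy, not the raw weight error, that decides which layers tolerate 2-bit representation; that trade-off is exactly what this project explores.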

In this project, you will enhance existing neural networks for BCI applications with mixed-precision features. The goal is to decrease the energy per inference while retaining the statistical accuracy of the original network by running layers at numerical precisions lower than 8 bits. To facilitate this process, we have developed QuantLab, a framework that makes training quantized neural networks easy and simplifies the exploration of the design space of topology, precision, and training algorithms. A network trained and quantized in QuantLab can be exported and consumed by DORY, a deployment tool that automatically generates optimized C code to run the network in question on a PULP-family microcontroller.
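Frameworks of this kind must train through a quantizer whose gradient is zero almost everywhere; the standard workaround is the straight-through estimator (STE), which uses quantized weights in the forward pass but propagates gradients as if the quantizer were the identity. The toy least-squares example below sketches the mechanism in plain NumPy; it is illustrative only and does not reflect QuantLab's actual interface or algorithms:

```python
import numpy as np

def fake_quant(w, n_bits):
    # Symmetric uniform per-tensor quantizer.
    q_max = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / q_max
    return np.clip(np.round(w / scale), -q_max - 1, q_max) * scale

rng = np.random.default_rng(1)
w = rng.standard_normal(8)               # trainable full-precision "shadow" weights
target = 0.5 * rng.standard_normal(8)    # toy regression target
init_loss = float(np.sum((fake_quant(w, 4) - target) ** 2))

lr = 0.05
for _ in range(300):
    wq = fake_quant(w, 4)                # forward pass uses 4-bit weights
    grad_wq = 2 * (wq - target)          # dL/dwq for L = ||wq - target||^2
    w -= lr * grad_wq                    # STE: treat dwq/dw as the identity

final_loss = float(np.sum((fake_quant(w, 4) - target) ** 2))
print(f"loss before: {init_loss:.3f}, after: {final_loss:.3f}")
```

The full-precision weights keep accumulating small gradient steps even when the quantized output does not change, which is what lets training escape the quantizer's flat regions.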

In this project, you will perform the following steps:

  1. Select one or more BCI networks to quantize and map to PULP, e.g., EEGNet or EEG-TCNet
  2. Port the selected network(s) into QuantLab and train full-precision and 8-bit baselines
  3. Choose a quantization strategy to lower selected layers' precisions, taking into account hardware constraints such as the memory hierarchy
  4. Tune the individual layers' precisions to find a low-precision network that achieves (close to) full-precision accuracy
  5. Map the final network to PULP using DORY (either in simulation or on the physical Kraken chip - see references) and evaluate its performance against the 8-bit baseline
  6. Identify performance bottlenecks and tune the implementation, either by adding improved kernels to DORY's mapping process or by replacing certain layers' implementations with hand-written kernels
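The memory constraint in step 3 can be sanity-checked with back-of-the-envelope arithmetic before any training: tally the weight storage that a candidate per-layer precision assignment implies and compare it against the memory budget. The layer names and sizes below are invented for illustration and are not the real EEGNet/EEG-TCNet dimensions:

```python
# Hypothetical layer -> number of weight parameters (not real EEGNet sizes).
layers = {
    "conv1": 16 * 1 * 64,
    "conv2": 32 * 16 * 16,
    "dense": 32 * 4 * 4,
}
# Candidate per-layer weight precisions in bits.
precision = {"conv1": 8, "conv2": 2, "dense": 4}

def footprint_bytes(layers, precision):
    # Total weight storage in bytes for a given precision assignment.
    return sum(n * precision[name] // 8 for name, n in layers.items())

mixed = footprint_bytes(layers, precision)
uniform8 = footprint_bytes(layers, {name: 8 for name in layers})
print(f"8-bit baseline: {uniform8} B, mixed-precision: {mixed} B "
      f"({100 * mixed / uniform8:.0f}% of baseline)")
```

Dropping only the largest layer (conv2 in this made-up example) to 2 bits already removes most of the footprint, which is why memory-driven strategies such as [2] pick precisions per layer rather than uniformly.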


Status: Available

Looking for 1-2 students for a semester project. If you have any questions, suggestions for a related (or even unrelated) project, or are simply curious about what we do, please do not hesitate to contact us!

Supervision: Georg Rutishauser, Xiaying Wang

Prerequisites

  • Machine Learning
  • Python
  • C

Character

20% Theory
80% Implementation

Literature

  • [1] Kraken in the IIS chip gallery
  • [2] M. Rusci et al., Memory-Driven Mixed Low Precision Quantization For Enabling Deep Network Inference On Microcontrollers
  • [3] A. Garofalo et al., XpulpNN: Enabling Energy Efficient and Flexible Inference of Quantized Neural Network on RISC-V based IoT End Nodes
  • [4] T. M. Ingolfsson et al., EEG-TCNet: An Accurate Temporal Convolutional Network for Embedded Motor-Imagery Brain-Machine Interfaces
  • [5] X. Wang et al., An Accurate EEGNet-based Motor-Imagery Brain-Computer Interface for Low-Power Edge Computing
  • [6] T. Schneider et al., Q-EEGNet: an Energy-Efficient 8-bit Quantized Parallel EEGNet Implementation for Edge Motor-Imagery Brain-Machine Interfaces


Professor

Luca Benini
