Toward Superposition of Brain-Computer Interface Models
A brain–computer interface (BCI) is a device that enables communication and control without movement. It aims to recognize a human's intentions from spatiotemporal neural activity, typically recorded by a large set of electroencephalogram (EEG) electrodes. A key challenge, however, is the susceptibility of this recognition to errors. The recent success of deep learning networks, built on the artificial neural nets of the past, is finding ever-expanding applications and suggests their use for highly accurate brain–computer interfaces. To cope with the high inter-subject variance encountered in EEG signals, existing brain–computer interfaces train one model per subject. The main goal of this project is to provide a framework for superposing a number of trained BCI models into a single model, ultimately enabling use cases beyond model compression.
- Machine Learning
- Python Programming
- 40% Theory
- 60% Programming
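
One possible starting point for the superposition idea above is parameter superposition with random binary context keys (in the spirit of Cheung et al.'s "Superposition of many models into one"). The sketch below is purely illustrative and is not the project's prescribed method: several toy per-subject linear models are summed into a single weight matrix after elementwise multiplication with subject-specific ±1 keys, and each model is later recovered approximately by reapplying its key. All dimensions and names (`d_out`, `d_in`, `n_subjects`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 32 output classes, 256 EEG features, 3 subjects.
d_out, d_in, n_subjects = 32, 256, 3

# One toy linear model (weight matrix) per subject-specific BCI model.
weights = [rng.standard_normal((d_out, d_in)) for _ in range(n_subjects)]

# A random ±1 context key per subject.
keys = [rng.choice([-1.0, 1.0], size=d_in) for _ in range(n_subjects)]

# Superpose: each model is stored under a different sign pattern
# of the same matrix (key broadcasts over the rows of W).
combined = sum(W * c for W, c in zip(weights, keys))

def retrieve(k):
    """Approximately recover subject k's weights from the combined matrix.

    Since c ∘ c = 1 for ±1 keys, the k-th term is recovered exactly;
    the remaining models contribute zero-mean elementwise noise.
    """
    return combined * keys[k]

def cosine(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for k in range(n_subjects):
    recovered = retrieve(k)
    sims = [cosine(recovered, W) for W in weights]
    print(f"subject {k}: similarity to each stored model = {np.round(sims, 2)}")
```

With random keys and large enough weight matrices, the retrieved parameters correlate far more strongly with the correct subject's model than with the others, which is the property a practical superposition framework would need to preserve while going beyond mere compression.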