iis-projects - User contributions by Xiaywang [en] (MediaWiki 1.28.0, retrieved 2024-03-28T19:25:48Z)
Benchmarking a RISC-V-based Server on LLMs/Foundation Models (SA or MA) - 2024-03-12T09:37:35Z - Xiaywang
<hr />
<div><!-- Benchmarking a RISC-V-based Server on LLMs/Foundation Models (SA or MA) --><br />
<br />
[[Category:Digital]]<br />
[[Category:High Performance SoCs]]<br />
[[Category:2023]]<br />
[[Category:Master Thesis]]<br />
[[Category:Hot]]<br />
[[Category:Xiaywang]]<br />
[[Category:Cykoenig]]<br />
[[Category:Available]]<br />
<br />
<br />
= Overview =<br />
<br />
== Status: Available ==<br />
<br />
* Type: Semester or Master Thesis (multiple students possible)<br />
* Professor: Prof. Dr. L. Benini<br />
* Supervisors:<br />
** [[:User:Xiaywang | Xiaying Wang]]: [mailto:xiaywang@iis.ee.ethz.ch xiaywang@iis.ee.ethz.ch]<br />
** [[:User:Cykoenig | Cyril Koenig]]: [mailto:cykoenig@iis.ee.ethz.ch cykoenig@iis.ee.ethz.ch]<br />
** [[:User:Vivianep | Viviane Potocnik]]: [mailto:vivianep@iis.ee.ethz.ch vivianep@iis.ee.ethz.ch]<br />
<br />
= Introduction =<br />
<br />
Milk-V is a company committed to delivering high-quality RISC-V products to developers, enterprises, and consumers. It focuses on the development of both hardware and software ecosystems around the RISC-V architecture. Milk-V strongly supports open-source initiatives and aims to enrich the RISC-V product landscape, hoping that through its efforts and those of the community, the future of RISC-V products will be as vast and luminous as the Milky Way.<br />
<br />
The Milk-V Pioneer is a developer motherboard built around the SOPHON SG2042 [1] and designed in the standard microATX (mATX) form factor. It offers PC-like interfaces and compatibility with PC industry standards, aiming to provide a native RISC-V development environment and desktop experience. Targeted at RISC-V developers and hardware pioneers, the Pioneer is an excellent platform for exploring and developing with cutting-edge RISC-V technology.<br />
<br />
[[File:Pioneer.jpg|400px|]] [2]<br />
<br />
= Project description =<br />
<br />
In this project, you will port LLMs and foundation models (e.g., Whisper) to a Milk-V server and benchmark their performance.<br />
<br />
You will first select a framework that executes LLMs in C/C++, for instance, llama.cpp [3]. You will then evaluate one or several models using this framework on the SG2042 CPU. Finally, you will identify limitations and potential improvements of the code related to the microarchitecture.<br />
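When interpreting the measured numbers, it helps to remember that single-batch autoregressive decoding is usually memory-bandwidth bound: every generated token requires streaming all model weights from DRAM. A first-order roofline estimate therefore gives an upper bound to compare measured throughput against. The sketch below is illustrative; the model size and sustained-bandwidth figures are placeholder assumptions, not SG2042 measurements.

```python
# First-order throughput bound for memory-bound LLM decoding:
# tokens/s <= DRAM bandwidth / model size in bytes.
def estimated_tokens_per_s(model_bytes: float, dram_bw_bytes_per_s: float) -> float:
    return dram_bw_bytes_per_s / model_bytes

# Hypothetical numbers: a 7B-parameter model quantized to ~4 bits
# (~3.5 GB of weights) and an assumed sustained bandwidth of 30 GB/s.
model_bytes = 3.5e9
dram_bw = 30e9
print(f"{estimated_tokens_per_s(model_bytes, dram_bw):.1f} tokens/s upper bound")
```

If the benchmarked throughput falls well below this bound, the gap points at compute- or microarchitecture-related bottlenecks rather than memory bandwidth.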
<br />
== Character ==<br />
<br />
* 20% Literature/architecture review<br />
* 60% Programming<br />
* 20% Evaluation<br />
<br />
== Prerequisites ==<br />
<br />
* Strong interest in computer architecture<br />
* Experience in C programming<br />
* Preferred: Knowledge or prior experience with RISC-V<br />
<br />
= References =<br />
<br />
[1] [https://github.com/milkv-pioneer/pioneer-files/blob/main/hardware/SG2042-TRM.pdf SG2042 Technical Reference Manual]<br />
<br />
[2] [https://milkv.io/docs/pioneer/ Milk-V Pioneer documentation]<br />
<br />
[3] [https://github.com/ggerganov/llama.cpp llama.cpp]</div>
Human Intranet - 2024-03-10T18:09:15Z - Xiaywang
<hr />
<div>[[File:HI.png|thumb|right|450px]]<br />
<br />
=What is Human Intranet?=<br />
The world around us is quickly getting a lot smarter: virtually every component of our daily living environment is being equipped with sensors, actuators, processing, and a connection into a network that will soon count billions of nodes and trillions of sensors. These devices interact with humans only through traditional input and output channels. Hence, they communicate with our brain only indirectly, through our five sense modalities, forming two separate computing systems: biological versus physical. This interaction could be made far more effective if a direct, high-bandwidth link existed between the two systems, allowing them to truly collaborate and offering enhanced functionality that would otherwise be hard to accomplish. The emergence of miniaturized sense, compute, and actuate devices, as well as interfaces that are form-fitted to the human body, opens the door to a symbiotic convergence between biological function and physical computing.<br />
<br />
Human Intranet is an open, scalable platform that seamlessly integrates an ever-increasing number of sensor, actuation, computation, storage, communication and energy nodes located on, in, or around the human body acting in symbiosis with the functions provided by the body itself. Human Intranet presents a system vision in which, for example, disease would be treated by chronically measuring biosignals deep in the body, or by providing targeted, therapeutic interventions that respond on demand and in situ. To gain a holistic view of a person’s health, these sensors and actuators must communicate and collaborate with each other. Most of such systems prototyped or envisioned today serve to address deficiencies in the human sensory or motor control systems due to birth defects, illnesses, or accidents (e.g., invasive brain-machine interfaces, cochlear implants, artificial retinas, etc.). While all these systems target defects, one can easily imagine that this could lead to many types of enhancement and/or enable direct interaction with the environment: to make us humans smarter!<br />
<br />
Here, in our projects, we mainly focus on '''sensor, computation, communication, and emerging storage''' aspects to develop very efficient closed-loop sense-interpret-actuate systems, enabling distributed autonomous behavior. <br />
<br />
<!--For example, to design the ''brain'' of our physical computing (i.e., the compute/interpret component), we rely on computing with ultra-wide words (e.g., 10,000 bits) that eases interfacing with various sensor modalities and actuators. This novel computing paradigm is called hyperdimensional (HD) computing that is inspired from the very size of the biological brain’s circuits: assuming 1 bit per synapse, the brain is made up of more than 24 billion of such ultra-wide words. You can watch some of our demos:<br />
* [https://bwrc.eecs.berkeley.edu/sites/default/files/files/u2630/flexemg_v2_lq.mp4#t=2 Video1]<br />
* [https://www.youtube.com/watch?time_continue=9&v=vTQGMQ6QaJE Video2]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISSCC18-Demo.pdf PDF] <br />
<br />
You can also find a collection of complemented projects with source codes/datasets here:<br />
* [https://github.com/HyperdimensionalComputing/collection Github link]<br />
--><br />
==Prerequisites and Focus==<br />
If you are a B.Sc. or M.Sc. student at ETH Zurich, there are typically no prerequisites. You can come and talk to us, and we will adapt the projects to your skills. The scope and focus of the projects are broad. You can choose to work on:<br />
<br />
<!-- * '''Efficient hardware architectures in emerging technologies''' (e.g., [https://www.zurich.ibm.com/sto/memory/ the IBM computational memory])--><br />
* '''Exploring new Human Intranet/IoT applications''' <br />
* '''Algorithm design and optimizations''' (Python)<br />
* '''System-level design and testing''' (Altium, C-programming)<br />
* '''Sensory interfaces''' (analog and digital)<br />
<br />
<br />
<!-- <br />
* '''Theory''' of learning systems including HD computing, Hidden Markov Model (HMM), and clustering algorithms<br />
Overall, our projects cover '''algorithmic, hardware/software, and system level''' design and developments. <br />
However, if you have background in signal processing, VLSI or linear algebra is a super plus! --><br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/7030200/ The Human Intranet--Where Swarms and Humans Meet]<br />
*[https://ieeexplore.ieee.org/abstract/document/8490896 Efficient Biosignal Processing Using Hyperdimensional Computing: Network Templates for Combined Learning and Classification of ExG Signals]<br />
*[https://iopscience.iop.org/article/10.1088/1741-2552/aab2f2/meta A review of classification algorithms for EEG-based brain–computer interfaces: a 10 year update]<br />
<br />
=Available Projects=<br />
Here, we provide a short description of the related projects for you to see the scope of our work. The directions and details of the projects can be adapted based on your interests and skills. Please do not hesitate to come and talk to us for more details.<br />
<br />
==Wearables for health and physiology==<br />
[[File:Cardiorespiratory.JPG|thumb|right|200px]]<br />
===Short Description===<br />
In this research area, we develop wearable systems, algorithms, and applications for monitoring health- and physiological-related parameters in innovative ways. Examples include (but are not limited to): heart rate and respiration rate monitoring, blood pressure monitoring, bladder monitoring, drowsiness detection, monitoring of muscle contractions and identification of innervations, ...<br />
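As a toy illustration of one such task (not a method from our systems), heart rate can be estimated from a clean pulse waveform by counting peaks. Real wearable pipelines additionally require filtering and motion-artifact rejection, which this sketch omits.

```python
import numpy as np

def heart_rate_bpm(signal: np.ndarray, fs: float, thresh: float = 0.5) -> float:
    """Estimate heart rate by counting threshold-crossing peaks.

    A sample counts as a peak if it exceeds `thresh` and both neighbours.
    """
    peaks = 0
    for i in range(1, len(signal) - 1):
        if signal[i] > thresh and signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]:
            peaks += 1
    duration_min = len(signal) / fs / 60.0
    return peaks / duration_min

# Synthetic 1.2 Hz (72 bpm) pulse train sampled at 100 Hz for 10 s.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
sig = np.sin(2 * np.pi * 1.2 * t)
print(round(heart_rate_bpm(sig, fs)))  # ≈ 72
```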
<br />
For wearables based on ultrasound, see also the dedicated [[Digital_Medical_Ultrasound_Imaging | '''Ultrasound section''']]<br />
<br />
===Available Projects===<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = WearablesHealth<br />
</DynamicPageList><br />
<br />
==Brain-Machine Interfaces and wearables==<br />
<!-- [[File:BCI.png|thumb|center]] <br />
[[File:BCI-dryEEG.jpg|thumb|right]] --><br />
[[File:Emotiv-epoc-14-channel-mobile-eeg.jpg|thumb|right|200px]]<br />
[[File:In_ear_EEG.jpg|thumb|right|200px]]<br />
<br />
<br />
===Short Description===<br />
Noninvasive brain–machine interfaces (BMIs) and neuroprostheses aim to provide a communication and control channel based on the recognition of the subject’s intentions from spatiotemporal neural activity typically recorded by EEG electrodes. BMIs are a special kind of HMI, focused on the brain. What makes BMIs particularly challenging is their susceptibility to errors over time in the recognition of human intentions.<br />
<br />
In these projects, our goal is to develop efficient and fast learning algorithms that replace traditional signal processing and classification methods by directly operating with raw data from electrodes. Furthermore, we aim to efficiently deploy those algorithms on tightly resource-limited devices (e.g., Microcontroller units) for near sensor classification using artificial intelligence.<br />
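One standard ingredient for deployment on resource-limited MCUs, used for example in the 8-bit Q-EEGNet work referenced on this page, is fixed-point quantization of weights and activations. The sketch below shows a minimal symmetric per-tensor int8 scheme as an illustration; it is not the exact Q-EEGNet procedure.

```python
import numpy as np

def quantize_sym_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w ≈ scale * q."""
    scale = float(np.max(np.abs(w))) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([-0.8, 0.1, 0.4, 0.79], dtype=np.float32)
q, s = quantize_sym_int8(w)
err = float(np.max(np.abs(dequantize(q, s) - w)))
print(q, s, err)  # max error is bounded by half a quantization step
```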
<br />
*WATCH OUR DEMO: EEG-HEADBAND CONTROLLING A DRONE: https://www.youtube.com/watch?v=3-DysFptdRI<br />
<br />
===Links===<br />
* [https://iis-people.ee.ethz.ch/~herschmi/EdgeDL20.pdf Q-EEGNet: an Energy-Efficient 8-bit Quantized Parallel EEGNet Implementation for Edge Motor-Imagery Brain–Machine Interfaces]<br />
* [https://iis-people.ee.ethz.ch/~herschmi/MEMEA20.pdf An Accurate EEGNet-based Motor-Imagery Brain–Computer Interface for Low-Power Edge Computing]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/EUSIPCO18.pdf Fast and Accurate Multiclass Inference for Motor Imagery BCIs Using Large Multiscale Temporal and Spectral Features]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/MONET17.pdf Hyperdimensional Computing for Blind and One-Shot Classification of EEG Error-Related Potentials]<br />
* [https://arxiv.org/abs/1812.05705 Exploring Embedding Methods in Binary Hyperdimensional Computing: A Case Study for Motor-Imagery based Brain-Computer Interfaces]<br />
<br />
===Available Projects===<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = BCI<br />
</DynamicPageList><br />
<br />
==Epilepsy Seizure Detection Device==<br />
[[File:Non-EEG Seizure.jpg|border|text-top|300px]]<br />
[[File:NeuroPace.jpg|border|text-top|400px]]<br />
<!-- Seizure-prediction.png --><br />
===Short Description===<br />
Epilepsy is a brain disease that affects more than 50 million people worldwide. Conventional treatments are primarily pharmacological, but drug-resistant subjects may require surgery or invasive neurostimulation. In these cases, personalized patient treatments are necessary and can be achieved with the help of long-term recording of brain activity. In this context, seizure detection systems hold promise for improving the quality of life of patients with epilepsy, providing non-stigmatizing and reliable continuous monitoring under real-life conditions. In this project, our goal is to develop efficient techniques for EEG as well as non-EEG signals to detect an upcoming seizure on an ultra-low-power device. This covers a wide range of analog and digital techniques.<br />
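Many low-power detectors first reduce the EEG to a handful of spectral features before classification. The toy sketch below computes relative band power with an FFT and applies a fixed threshold; it is purely illustrative, and the systems referenced here use learned classifiers and clinically validated features.

```python
import numpy as np

def band_power(x: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Fraction of total signal power inside [lo, hi] Hz (FFT periodogram)."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    return spec[(freqs >= lo) & (freqs <= hi)].sum() / spec.sum()

def flag_seizure(x: np.ndarray, fs: float, thresh: float = 0.6) -> bool:
    # Crude rule: dominant power in a low-frequency band flags an event.
    return bool(band_power(x, fs, 2.0, 8.0) > thresh)

fs = 256.0
t = np.arange(0, 4, 1 / fs)
ictal = np.sin(2 * np.pi * 4 * t)                    # strong 4 Hz rhythm
noise = np.random.default_rng(0).normal(size=t.size)  # broadband background
print(flag_seizure(ictal, fs), flag_seizure(noise, fs))  # → True False
```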
<br />
===Links===<br />
* [http://ieeg-swez.ethz.ch/ The SWEC-ETHZ iEEG Database and Algorithms]<br />
* [https://www.wysscenter.ch/project/epilepsy-monitoring-seizure-forecasts/ Epilepsy monitoring and seizure forecasts at Wyss Center]<br />
* [https://www.youtube.com/watch?time_continue=87&v=ouyPXkEud40 Controlling tinnitus with neurofeedback]<br />
<br />
<br />
<br />
===Available Projects===<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = Epilepsy<br />
</DynamicPageList><br />
<br />
==Foundation models and LLMs for Health==<br />
[[File:EEG_ECG.png|border|text-top|400px]]<br />
[[File:LLM.png|border|text-top|400px]]<br />
<!-- Seizure-prediction.png --><br />
===Short Description===<br />
Incorporating Foundation Models and Large Language Models (LLMs) within artificial intelligence is gaining significant traction, particularly due to their potential applications in the health sector. This project is dedicated to developing sophisticated methodologies for utilizing foundation models and LLMs in health-related applications, specifically analyzing electroencephalogram (EEG) brain signals.<br />
<br />
In healthcare and biomedical research, implementing advanced computational models, notably Foundation Models and Large Language Models (LLMs), revolutionizes the understanding and interpretation of intricate biosignals. We stand at the vanguard of this revolutionary change, delving into the capabilities of these models for the analysis and interpretation of critical biosignals, including electroencephalograms (EEG) and electrocardiograms (ECG).<br />
<br />
Foundation Models, encompassing a spectrum of robust, pre-trained models, are transforming our ability to process and interpret large datasets. Initially trained on extensive and diverse datasets, these models are adaptable for specific tasks, offering remarkable accuracy and efficiency. This adaptability renders them particularly beneficial for biosignal analysis, where the intricacies of EEG and ECG data demand both precision and contextual understanding.<br />
<br />
As a subset of Foundation Models, LLMs have demonstrated efficacy in processing and generating human language. At IIS, we are pioneering the application of LLMs in the domain of biosignal interpretation, extending beyond textual data. This entails training the models to interpret the 'language' of biosignals, translating complex patterns into actionable insights.<br />
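One concrete way to let a language model consume a biosignal, shown here purely as an illustration and not as a method from the cited work, is to discretize amplitude values into a small vocabulary of integer tokens:

```python
import numpy as np

def tokenize_signal(x: np.ndarray, n_bins: int = 16) -> list[int]:
    """Map a continuous signal to integer tokens by uniform amplitude binning."""
    lo, hi = x.min(), x.max()
    scaled = (x - lo) / (hi - lo + 1e-12)                 # normalise to [0, 1]
    return np.minimum((scaled * n_bins).astype(int), n_bins - 1).tolist()

x = np.sin(np.linspace(0, 2 * np.pi, 8))  # toy "biosignal"
print(tokenize_signal(x, n_bins=4))
```

The resulting token sequence can then be fed to a sequence model exactly like text; richer schemes (learned codebooks, patch embeddings) follow the same idea.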
<br />
Our emphasis on EEG and ECG signals is motivated by the profound insights these biosignals provide into human health. EEGs, capturing brain activity, and ECGs, monitoring heart rhythms, are instrumental in diagnosing and managing various health conditions. By leveraging Foundation Models and LLMs, our objective is to refine diagnostic accuracy, predict health outcomes, and customize patient care.<br />
<br />
IIS invites Master's students to immerse themselves in this pioneering area. Our projects offer avenues to engage with state-of-the-art technologies, apply them to real-world health challenges, and contribute to shaping a future where healthcare is more predictive, preventive, and personalized. We encourage your participation in this exhilarating endeavor to redefine the confluence of healthcare and technology.<br />
<br />
===Links===<br />
* [https://braingpt.org/ BrainGPT]<br />
<br />
<br />
===Available Projects===<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = HealthGPT<br />
</DynamicPageList><br />
<br />
<br />
<br />
<!--<br />
=Extremely Resilient Hyperdimensional Processor=<br />
[[File:BrainChip.jpg|thumb|left]]<br />
<br />
===Short Description===<br />
The most important aspect of hyperdimensional (HD) computing, for hardware realization, is its robustness against noise and variations in the computing platforms. Principles of HD computing allows to implement resilient controllers and state machines for extreme noisy conditions. Its tolerance in operating with faulty components and low signal-to-noise ratio (SNR) conditions is achieved by brain-inspired properties of hypervectors: (pseudo)randomness, high-dimensionality, and fully distributed holographic representations.<br />
<br />
In this project, your goal would be to design and develop an end-to-end robust HD processor with extremely resilient controller based on principles of HD computing, and measure its resiliency against noisy environment and faulty components.<br />
<br />
===Links===<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISLPED16.pdf A Robust and Energy-Efficient Classifier Using Brain-Inspired Hyperdimensional Computing]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/DAC18.pdf PULP-HD: Accelerating Brain-Inspired High-Dimensional Computing on a Parallel Ultra-Low Power Platform]<br />
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8216554 Associative Synthesis of Finite State Automata Model of a Controlled Object with Hyperdimensional Computing]<br />
<br />
=Flexible High-Density Sensors for Hand Gesture Recognition=<br />
[[File:Hyperdimensional_EMG.png|thumb|center]]<br />
[[File:FlexEMG.png|thumb|right|500px]]<br />
<br />
===Short Description===<br />
The surface electromyography (EMG) signals are the superposition of the electrical activity of underneath muscles when contractions occur.<br />
Wearable surface EMG devices have a wide range of applications in controlling the upper limb prostheses and hand gesture recognition systems intended for consumer human-machine interaction. High-density EMG electrode array covering the whole arm can ease targeting the most desired muscle locations and cope the issues with sensors misplacement.<br />
For robust gesture recognition from such EMG sensors, we rely on brain-inspired HD computing.<br />
<br />
In this project, your goal would be to develop new sensors and RTL implementation of HD computing for one-shot gesture learning in an ultra low-power device.<br />
<br />
===Links===<br />
* [https://bwrc.eecs.berkeley.edu/sites/default/files/files/u2630/flexemg_v2_lq.mp4#t=2 Flexible EMG Demo]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISCAS2018.pdf Gesture Recognition System with Flexible High-Density Sensors] <br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/papers/ICRC16.pdf Hyperdimensional Biosignal Processing: A Case Study for EMG-based Hand Gesture Recognition (paper)]<br />
* [https://arxiv.org/abs/1901.00234 Adaptive EMG-based hand gesture recognition using hyperdimensional computing (paper)]<br />
* [https://github.com/abbas-rahimi/HDC-EMG Related Matlab code]<br />
<br />
==Robot Learning by Demonstration==<br />
[[File:Robot-VSA.png|thumb|left|Image source: Neubert et al, IROS 2016]]<br />
===Short Description===<br />
Robot learning from demonstration is a paradigm for enabling robots to autonomously perform new tasks. <br />
HD computing is a nice fit in this area since it naturally enables modeling relation between sensory inputs and actuator outputs of a robot by learning from few demonstrations. <br />
In this project, your goal would be to develop algorithms and implementations based on HD computing to enhance a robot to learn from online demonstrations. <br />
Further, such HD computing-based paradigm can be coupled to a brain-computer interface device enabling to control a robot by EEG signals from the brain. It has a wonderful application in neuroprosthetics to learn from a patient (see [https://www.youtube.com/watch?time_continue=26&v=jAtcVlTqxeA this] demonstration at EPFL).<br />
<br />
===Links===<br />
* [https://actu.epfl.ch/news/when-the-neuroprosthetics-learn-from-the-patient-5/ When the neuroprosthetics learn from the patient]<br />
* [https://www.tu-chemnitz.de/etit/proaut/publications/IROS2016_neubert.pdf Learning Vector Symbolic Architectures for Reactive Robot Behaviours] <br />
* [https://www.aaai.org/ocs/index.php/WS/AAAIW13/paper/download/7075/6578 Learning Behavior Hierarchies via High-Dimensional Sensor Projection (paper)]<br />
---><br />
<br />
= Projects in Progress=<br />
<DynamicPageList><br />
suppresserrors = true<br />
category = In progress<br />
category = Human Intranet<br />
</DynamicPageList><br />
<br />
=Completed Projects=<br />
These are projects that were recently completed: <br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = Human Intranet<br />
<br />
</DynamicPageList><br />
<br />
=Where to find us= <br />
{|<br />
| style="padding: 10px" | [[File:Thorir.jpg|frameless|left|100px]]<br />
| style="padding: 10px" | [[File:SebiFrey.jpg|frameless|left|100px]]<br />
| style="padding: 10px" | <br />
| style="padding: 10px" | [[File:Andrea_Cossettini.jpg|frameless|left|100px]]<br />
|-<br />
| [[:User:Thoriri | Thorir Mar Ingolfsson]]<br />
| Sebastian Frey<br />
| [[:User:Xiaywang | Dr. Xiaying Wang]]<br />
| [[:User:Cosandre | Dr. Andrea Cossettini]]<br />
|-<br />
| '''Office''': OAT U21<br />
| '''Office''': ETZ J69.2<br />
| '''Office''': OAT U24 / ETZ J68.2<br />
| '''Office''': OAT U27 / ETZ J69.2<br />
|-<br />
| '''e-mail''': [mailto:thoriri@iis.ee.ethz.ch thoriri@iis.ee.ethz.ch]<br />
| '''e-mail''': [mailto:sefrey@iis.ee.ethz.ch sefrey@iis.ee.ethz.ch]<br />
| '''e-mail''': [mailto:xiaywang@iis.ee.ethz.ch xiaywang@iis.ee.ethz.ch]<br />
| '''e-mail''': [mailto:cossettini.andrea@iis.ee.ethz.ch cossettini.andrea@iis.ee.ethz.ch]<br />
|}<br />
<br />
<br />
<br />
[[Category:Digital]]<br />
[[Category:Human Intranet]]<br />
[[Category:ASIC]]<br />
[[Category:FPGA]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]</div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Spiking_Neural_Network_for_Motor_Function_Decoding_Based_on_Neural_Dust&diff=10252Spiking Neural Network for Motor Function Decoding Based on Neural Dust2024-03-10T18:05:16Z<p>Xiaywang: </p>
<hr />
<div>[[File:ReMote.png|thumb|600px]]<br />
=== Description ===<br />
A brain-machine interface (BMI) acquires brain activity and translates the information into actions to control software and hardware such as computers and prostheses. As a potential treatment for many neurological diseases, it has attracted great attention in academia and industry.<br />
<br />
Researchers have developed mm-scale implantable neural probes [1][2] aiming to restore motor function while reducing the damage caused by implantation. Such systems are subject to stringent constraints on power consumption, area, and latency.<br />
<br />
The spiking neural network (SNN), an emerging brain-inspired algorithm, takes advantage of asynchronous information representation and computation to achieve low latency and low power consumption. Unlike images or sounds acquired by conventional sensors, neural signals are naturally asynchronous and encode information in spike timing and firing rate, which matches the input format an SNN expects. This makes the SNN a good candidate for neural decoding, and the nature of neural signals may unlock even more of the SNN's potential.<br />
<br />
In particular, the reservoir-based Liquid State Machine (LSM) [7] is a recurrent computational model that emulates biological cortical networks more closely than layer-based SNNs. It maps input spike trains to output spike trains by means of the so-called liquid, or reservoir: a recurrent neural network formed by many computational nodes. Kasabov [8] proposed a reservoir-based SNN, named NeuCube, to learn and understand the spatio-temporal features of brain activity, and it has been shown to be competitive with layer-based approaches. A very recent work [9] used NeuCube for the regression problem of the Grasp-and-Lift dataset, achieving very promising results.<br />
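As an illustrative sketch (not the NeuCube or Loihi implementation), a minimal liquid state machine can be written in a few lines of NumPy: a fixed random recurrent reservoir of leaky integrate-and-fire (LIF) neurons turns input spike trains into a firing-rate state vector that a simple linear readout could then classify. All sizes and constants below are arbitrary choices for illustration only.<br />

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8 input channels, 100 reservoir neurons, 50 time steps.
n_in, n_res, T = 8, 100, 50
W_in = rng.normal(0.0, 0.5, (n_res, n_in))                     # input weights
W_res = rng.normal(0.0, 1.0 / np.sqrt(n_res), (n_res, n_res))  # recurrent weights

def liquid_states(spikes, tau=10.0, v_th=1.0):
    """Run a LIF reservoir over binary input spike trains of shape
    (T, n_in) and return a per-neuron firing-rate feature vector."""
    v = np.zeros(n_res)        # membrane potentials
    fired = np.zeros(n_res)    # spikes emitted at the previous step
    record = np.zeros((T, n_res))
    for t in range(T):
        # leaky integration of input and recurrent spikes
        v = v * (1.0 - 1.0 / tau) + W_in @ spikes[t] + W_res @ fired
        fired = (v >= v_th).astype(float)  # threshold crossing -> spike
        v[fired == 1] = 0.0                # reset neurons that fired
        record[t] = fired
    return record.mean(axis=0)  # firing rates, fed to a linear readout

x = (rng.random((T, n_in)) < 0.2).astype(float)  # random input spike trains
features = liquid_states(x)
print(features.shape)
```

Only the readout is trained in such a model; the reservoir weights stay fixed, which keeps training cheap.<br />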
<br />
In this project, the student will<br />
<br />
1. Study prior art, including and not limited to LSM, SNN<br />
<br />
2. Get familiar with the dataset, the system, and the deep learning framework<br />
<br />
3. Explore and develop SNNs to decode finger movement offline<br />
<br />
A more detailed project description will be provided, tailored to the type of project (master or semester).<br />
<br />
===Status: Available===<br />
:Looking for master or semester thesis students<br />
:Supervisor: [[:User:Liaoj | Jiawei Liao]], [[:User:xiaywang|Xiaying Wang]]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Deep Learning<br />
* Python<br />
* VLSI is a plus<br />
<br />
===Character===<br />
* 20% Literature review<br />
* 20% Theory<br />
* 60% Programming<br />
<br />
===Professor===<br />
Prof. Taekwang Jang <[mailto:tjang@ethz.ch tjang@ethz.ch]><br />
<br />
=== Reference===<br />
[1] J. Liao et al., "An Energy-Efficient Spiking Neural Network for Finger Velocity Decoding for Implantable Brain-Machine Interface," 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS), 2022, pp. 134-137, doi: 10.1109/AICAS54282.2022.9869846.<br />
<br />
[2] M. S. Willsey et al., “Real-time brain-machine interface in non-human primates achieves high-velocity prosthetic finger movements using a shallow feedforward neural network decoder,” Nat Commun, vol. 13, no. 1, Art. no. 1, Nov. 2022, doi: 10.1038/s41467-022-34452-w.<br />
<br />
[3] J. Lim et al., “26.9 A 0.19×0.17mm 2 Wireless Neural Recording IC for Motor Prediction with Near-Infrared-Based Power and Data Telemetry,” in 2020 IEEE International Solid- State Circuits Conference - (ISSCC), San Francisco, CA, USA, Feb. 2020, pp. 416–418. doi: 10.1109/ISSCC19947.2020.9063005.<br />
<br />
[4] E. Moon et al., “Bridging the ‘Last Millimeter’ Gap of Brain-Machine Interfaces via Near-Infrared Wireless Power Transfer and Data Communications,” ACS Photonics, vol. 8, no. 5, pp. 1430–1438, May 2021, doi: 10.1021/acsphotonics.1c00160.<br />
<br />
[5] S. R. Nason et al., “A low-power band of neuronal spiking activity dominated by local single units improves the performance of brain–machine interfaces,” Nat Biomed Eng, vol. 4, no. 10, pp. 973–983, Oct. 2020, doi: 10.1038/s41551-020-0591-0.<br />
<br />
[6] A. K. Vaskov et al., “Cortical Decoding of Individual Finger Group Motions Using ReFIT Kalman Filter,” Front. Neurosci., vol. 12, 2018, doi: 10.3389/fnins.2018.00751.<br />
<br />
[7] W. Maass, et al., “Real-time computing without stable states: A new framework for neural computation based on perturbations”. In Neural Computation 14, 2002, pp. 2531–2560. <br />
<br />
[8] N. K. Kasabov, "NeuCube: A spiking neural network architecture for mapping, learning and understanding of spatio-temporal brain data." Neural Networks 52, 2014, pp. 62-76.<br />
<br />
[9] K. Kumarasinghe et al., Brain-inspired spiking neural networks for decoding and understanding muscle activity and kinematics from electroencephalography signals during hand movements. Sci Rep 11, 2486 (2021). https://doi.org/10.1038/s41598-021-81805-4<br />
<br />
[[#top|↑ top]]<br />
[[Category:EECIS]]<br />
[[Category:Available]]<br />
[[Category:2021]]<br />
[[Category:Liaoj]]<br />
[[Category:Xiaywang]]<br />
[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:Hot]]<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''</div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Mixed-Precision_Neural_Networks_for_Brain-Computer_Interface_Applications&diff=10251Mixed-Precision Neural Networks for Brain-Computer Interface Applications2024-03-10T18:04:26Z<p>Xiaywang: </p>
<hr />
<div>==Description==<br />
Brain-computer interfaces (BCI) are devices and applications which seek to enable direct communication between a user's brain and a computer, e.g., by means of electroencephalography (EEG). An example application that has seen intensive research is motor imagery, which has the goal of recognizing motion behavior imagined by the user by means of BCI devices. Once fully functional, such a system would be of immeasurable value in the design of, e.g., motorized prostheses.<br />
<br />
Researchers at IIS have performed in-depth studies into bringing these capabilities to resource-constrained edge devices such as microcontrollers, setting the state of the art in terms of efficiency for low-power motor imagery systems. However, the deployed networks have so far all been run at a relatively high numerical precision of 8 bits. <br />
<br />
Recent research has shown that neural networks' operands can be aggressively quantized, i.e., represented with as few as 2 bits, with only minor accuracy drops. This has the advantage of decreasing model size (as each parameter requires less storage to be represented), and, with appropriate hardware support, decreasing inference latency at comparable power consumption, leading to significantly lower energy consumption per inference. Thanks to these theoretical insights, combined with a new generation of MCU cores developed at IIS and the University of Bologna (namely, RISC-V cores of the PULP family supporting the xPULPnn ISA extension - see references), there is thus potential for improving the efficiency of these applications even further - this is where you come in!<br />
<br />
In this project, you will enhance existing neural networks for BCI applications with mixed-precision features. The goal is to decrease the energy per inference while retaining the statistical accuracy of the original network by running layers at numerical precisions lower than 8 bits. To facilitate this process, we have developed QuantLab, a framework to make training quantized neural networks easy and simplify the exploration of the design space of topology, precision and training algorithms. A network trained and quantized in QuantLab can be exported and consumed by DORY, a deployment tool which automatically generates optimized C code to run the network in question on a PULP-family microcontroller.<br />
<br />
In this project, you will perform the following steps:<br />
# Select one or multiple BCI networks to quantize and map to PULP - e.g., EEGNet or EEG-TCNet <br />
# Port this network into QuantLab and train full-precision and 8-bit baselines<br />
# Select a quantization strategy to lower selected layers' precisions, taking into account hardware constraints such as memory hierarchy<br />
# Tune the individual layers' precisions to find a low-precision network which achieves (close to) full-precision accuracy<br />
# Map the final network to PULP using DORY (either a simulation or the physical Kraken chip - see references) and evaluate performance compared to the 8-bit baseline<br />
# Determine performance bottlenecks and tune the performance - either by introducing improved kernels to DORY's mapping process, or by replacing the implementations of certain layers with hand-written kernels.<br />
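To illustrate numerically what lowering a layer's precision in step 3 means, the following sketch applies symmetric linear quantization (one possible scheme; the actual strategy is chosen in QuantLab) to a random weight tensor and shows how the reconstruction error grows as the bit width shrinks. The tensor shape and values are arbitrary.<br />

```python
import numpy as np

def quantize(x, n_bits):
    """Symmetric linear quantization of a tensor to signed n_bits integers."""
    q_max = 2 ** (n_bits - 1) - 1            # 127 for 8 bits, 1 for 2 bits
    scale = np.abs(x).max() / q_max          # one scale per tensor
    q = np.clip(np.round(x / scale), -q_max - 1, q_max).astype(np.int32)
    return q, scale                          # dequantize with q * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, (64, 64))           # a hypothetical layer's weights
for bits in (8, 4, 2):
    q, s = quantize(w, bits)
    print(f"{bits}-bit: mean abs error {np.abs(w - q * s).mean():.5f}")
```

At 2 bits each weight takes one of only four integer codes, which is why accuracy-sensitive layers are typically kept at higher precision in a mixed-precision network.<br />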
<br />
<br />
===Status: Currently Not Available ===<br />
Looking for 1-2 students for a Semester project. If you have any questions, suggestions for a related (or even unrelated) project or are simply curious about what we do, please do not hesitate to contact us!<br />
<br />
: Supervision: [[:User:Georg|Georg Rutishauser]], [[:User:Xiaywang|Xiaying Wang]]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python<br />
* C<br />
===Character===<br />
: 20% Theory<br />
: 80% Implementation<br />
<br />
===Literature===<br />
* [http://asic.ee.ethz.ch/2021/Kraken.html] Kraken in the IIS chip gallery<br />
* [https://arxiv.org/abs/1905.13082] M. Rusci et al., Memory-Driven Mixed Low Precision Quantization For Enabling Deep Network Inference On Microcontrollers<br />
* [https://arxiv.org/abs/2011.14325] A. Garofalo et al., XpulpNN: Enabling Energy Efficient and Flexible Inference of Quantized Neural Network on RISC-V based IoT End Nodes<br />
* [https://arxiv.org/abs/2006.00622] T. Mar Ingolfsson et al., EEG-TCNet: An Accurate Temporal Convolutional Network for Embedded Motor-Imagery Brain-Machine Interfaces<br />
* [https://arxiv.org/abs/2004.00077] X. Wang et al., An Accurate EEGNet-based Motor-Imagery Brain-Computer Interface for Low-Power Edge Computing<br />
* [https://arxiv.org/abs/2004.11690] T. Schneider et al., Q-EEGNet: an Energy-Efficient 8-bit Quantized Parallel EEGNet Implementation for Edge Motor-Imagery Brain--Machine Interfaces<br />
<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
===Practical Details===<br />
<!-- * '''[http://n.ethz.ch/~georgr/project-descriptions/FS21/task_descr_tnn_on_pulp.pdf Detailed Project Description]''' --><br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---><br />
<br />
[[Category:Georg]]<br />
[[Category:Xiaywang]]<br />
[[Category:Deep Learning Projects]]<br />
[[Category:Embedded Coding]]<br />
[[Category:PULP]]<br />
[[Category:Digital]][[Category:Semester Thesis]] [[Category:Master Thesis]] [[Category:NotAvailable]] [[Category:2021]][[Category:Hot]]</div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Deep_neural_networks_for_seizure_detection&diff=10250Deep neural networks for seizure detection2024-03-10T18:02:47Z<p>Xiaywang: </p>
<hr />
<div>[[File:Non-EEG Seizure.jpg|thumb|300px]]<br />
==Description==<br />
Epilepsy is one of the most prevalent chronic neurological disorders. One-third of patients with epilepsy continue to suffer from seizures despite pharmacological therapy. For these patients with drug-resistant epilepsy, efficient algorithms for seizure detection are needed, in particular during pre-surgical monitoring. Many efforts have been pursued in this direction, with the fabrication of many ASICs and the development of advanced machine- and deep-learning methods to optimize both the energy efficiency of devices that must operate for years and the accuracy of epilepsy detection.<br />
For time-series analysis, a wide variety of deep-learning approaches is emerging for efficient processing, such as InceptionTime [1], MultiScale-CNN [2], and Temporal Convolutional Networks (TCN) [3].<br />
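As a sketch of the building block behind TCNs [3], the following minimal NumPy example implements a dilated causal 1-D convolution: the left padding guarantees that y[t] depends only on samples at or before t. The weights and dilation are arbitrary illustrative values.<br />

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """1-D causal convolution with dilation: y[t] uses only x[<= t]."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # left-pad so output is causal
    return np.array([sum(w[i] * xp[t + pad - i * dilation] for i in range(k))
                     for t in range(len(x))])

x = np.arange(8, dtype=float)
# With kernel [1, 1] and dilation 2, y[t] = x[t] + x[t-2].
y = causal_dilated_conv(x, np.array([1.0, 1.0]), dilation=2)
print(y)
```

Stacking such layers with dilations 1, 2, 4, ... grows the receptive field exponentially with depth, which is what lets a TCN cover long EEG windows with few layers.<br />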
<br />
The thesis would be a 6-month full-time project with the following steps to accomplish:<br />
<br />
1 - Development, in a high-level programming language (Python), of different deep-learning algorithms for time-series classification. In particular, the initial targets will be InceptionTime, MultiScale-CNN, Temporal Convolutional Networks, and a bidirectional LSTM.<br />
<br />
2 - Benchmarking of these algorithms on a large-scale dataset of epileptic patients collected by the Bern Inselspital [4].<br />
<br />
3 - Comparison with state-of-the-art methods (local binary patterns + hyperdimensional computing [5], short-time Fourier transform + convolutional neural networks, and classical machine-learning methods).<br />
<br />
4 - Characterization of the algorithms on different computing platforms, from the high-level number of operations down to the number of cycles and the energy consumption on embedded devices (e.g. GAP8, a multi-core chip from GreenWaves Technologies).<br />
<br />
<br />
===Status: Currently Not Available ===<br />
Looking for one student for Master's thesis. <br />
: Supervision: [[:User:xiaywang|Xiaying Wang]], [mailto:alessio.burrello@unibo.it Alessio Burrello]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python & C Programming<br />
<br />
<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
<br />
===Literature===<br />
* H. I. Fawaz et al., InceptionTime: Finding AlexNet for Time Series Classification [https://arxiv.org/abs/1909.04939]<br />
* Z. Cui et al., Multi-Scale Convolutional Neural Networks for Time Series Classification [https://arxiv.org/abs/1603.06995]<br />
* C. Lea et al., Temporal Convolutional Networks: A Unified Approach to Action Segmentation, [https://link.springer.com/chapter/10.1007/978-3-319-49409-8_7]<br />
* iEEG-SWEZ data base [http://ieeg-swez.ethz.ch/]<br />
* A. Burrello et al., Laelaps: An Energy-Efficient Seizure Detection Algorithm from Long-term Human iEEG Recordings without False Alarms [https://ieeexplore.ieee.org/abstract/document/8715186]<br />
<br />
<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
[[Category:Digital]][[Category:Semester Thesis]] [[Category:NotAvailable]] [[Category:2020]][[Category:Hot]][[Category:Human Intranet]][[Category:xiaywang]][[Category:Epilepsy]][[Category:EmbeddedAI]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Real-Time_Motor-Imagery_Classification_Using_Neuromorphic_Processor&diff=10249Real-Time Motor-Imagery Classification Using Neuromorphic Processor2024-03-10T18:02:11Z<p>Xiaywang: </p>
<hr />
<div>[[File:motor-imagery-loihi.png|600px|right|thumb]]<br />
==Short Description==<br />
<br />
Brain–computer interfaces (BCIs) aim to provide a communication and control channel based on the recognition of the subject's intentions from neural activity, e.g., when performing motor imagery (MI). MI-BCI systems are designed to find patterns in electroencephalogram (EEG) signals and match them to the motor motion imagined by the subject. Such information could enable communication for severely paralyzed users, control of a wheelchair, or assistance in stroke rehabilitation.<br />
<br />
MI-BCIs are still susceptible to errors, mostly due to high inter- and intra-subject variance in EEG data, resulting in low classification accuracy. Moreover, to be effective, a BCI requires very low latency and low-power processing that can be implemented in a wearable device. Compared to "traditional" artificial neural networks, spiking neural networks (SNNs) can provide both improved latency and energy efficiency. Previous works have shown their potential for biomedical signals such as ECG and EMG, and they have demonstrated better performance when the sample size is limited. Some previous work has shown the potential of SNNs on brain signals; however, state-of-the-art studies still consider only a couple of motor-imagery classes, are often not implemented on a neuromorphic processor, and none of them presents a whole system from data acquisition to processing.<br />
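To make the asynchronous, event-driven input format of an SNN concrete, the following illustrative sketch (not part of the project code) converts a sampled signal into +1/-1 spike events via delta modulation, a common encoding step for biosignals; the threshold and toy signal below are arbitrary stand-ins for real EEG.<br />

```python
import numpy as np

def delta_encode(signal, threshold=0.1):
    """Convert a uniformly sampled signal into +1/-1 spike events
    (delta modulation): emit an event each time the signal moves by
    one threshold step relative to a tracked reference level."""
    ref = signal[0]
    events = []  # list of (sample index, polarity)
    for t in range(1, len(signal)):
        while signal[t] - ref >= threshold:
            ref += threshold
            events.append((t, +1))
        while ref - signal[t] >= threshold:
            ref -= threshold
            events.append((t, -1))
    return events

t = np.linspace(0.0, 1.0, 200)
eeg = 0.5 * np.sin(2 * np.pi * 10 * t)  # toy 10 Hz oscillation, not real EEG
spikes = delta_encode(eeg, threshold=0.1)
print(len(spikes), "events for", len(eeg), "samples")
```

Because events are produced only when the signal changes, flat segments generate no spikes and thus no downstream computation on an event-driven processor.<br />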
<br />
The goal of the present project is to investigate and develop a novel neuromorphic system for brain–computer interfaces, trained for multi-class motor imagery, that embeds Intel Loihi as the processing core. SNN algorithms will be implemented and evaluated on real hardware. Moreover, the project aims to acquire data from real subjects in order to build a dataset for training and evaluating the algorithms in the proposed application scenario.<br />
<br />
===Goal & Tasks===<br />
The project(s) will address the following challenges:<br />
* Investigate and develop techniques and methods to perform motor imagery brain-computer interface with energy-efficient SNN.<br />
* The algorithms will be evaluated and optimized for the capabilities of the Loihi platform, both to increase energy efficiency and to reduce the response time of the detection, aiming to achieve an always-on system.<br />
* Propose novel low-power mixed analog-digital systems for biomedical signal analysis (in particular EEG, but also suitable for ECG and EMG) to obtain a real-world acquisition system designed for neuromorphic processing, including Loihi.<br />
* Acquire a large dataset for BCI and possibly other biomedical applications, so that the SNNs implemented on the hardware can be trained and tested.<br />
* A complete hardware and software prototype of a smart sensor system, including all subsystems (sensor acquisition, preprocessing, processing, and radio communication), will be developed to demonstrate the benefits of the proposed approach and its capability to achieve low latency and energy efficiency in the challenging BCI scenario. <br />
* The working prototype with the Loihi processor will be evaluated to benchmark it against traditional approaches based on digital processors. <br />
<br />
<br />
===Prerequisites===<br />
(''not all need to be met'' by the single candidate)<br />
* Knowledge of high- and low-level programming languages (e.g. Python, embedded C)<br />
* Knowledge of embedded systems<br />
* Knowledge of machine learning and signal processing<br />
* Motivation to learn spiking neural networks simulation packages (e.g. BRIAN, ANNarchy, NEST, or NEURON)<br />
* Motivation to build and test a real system and to acquire field data<br />
<br />
===Detailed Task Description===<br />
A detailed task description will be worked out right before the project, taking the student's interests and capabilities into account.<br />
<br />
<br />
<br />
===Status: Currently Not Available ===<br />
* Looking for Semester and Master Project Students<br />
: Supervisors: [[:User:magnom|Michele Magno]], [[:User:xiaywang|Xiaying Wang]]<br />
<br />
<br />
===Character===<br />
: 35% Theory and Algorithms<br />
: 35% Implementation<br />
: 30% Data acquisition, Verification, and Testing<br />
<br />
===IIS Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/huang.en.html Qiuting Huang] ---><br />
<!-- : [http://lne.ee.ethz.ch/en/general-information/people/professor.html Vanessa Wood] ---><br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/mluisier.en.html Mathieu Luisier] ---><br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/schenk.en.html Andreas Schenk] ---><br />
<!-- : [http://www.dz.ee.ethz.ch/en/general-information/about/staff/uid/364.html Hubert Kaeslin] ---><br />
[[#top|↑ top]]<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Design Review]]'''<br />
* '''[[Coding Guidelines]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
<br />
[[Category:Digital]]<br />
[[Category:NotAvailable]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:SmartSensors]]<br />
[[Category:EmbeddedAI]]<br />
[[Category:System Design]]<br />
[[Category:Magnom]]<br />
[[Category:Xiaywang]]<br />
<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
[[Category:Analog]]<br />
[[Category:Nano-TCAD]]<br />
[[Category:Nano Electronics]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Contrastive_Learning_for_Self-supervised_Clustering_of_iEEG_Data_for_Epileptic_Patients&diff=10248Contrastive Learning for Self-supervised Clustering of iEEG Data for Epileptic Patients2024-03-10T18:01:31Z<p>Xiaywang: </p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]][[Category:Master Thesis]] [[Category:NotAvailable]] [[Category:2020]][[Category:Hot]][[Category:Human Intranet]][[Category:xiaywang]][[Category:Epilepsy]]<br />
<br />
==Description==<br />
Epilepsy is a severe and prevalent chronic neurological disorder affecting 1–2% of the world's population. One-third of epilepsy patients continue to suffer from seizures despite the best possible pharmacological treatment. For these patients with so-called drug-resistant epilepsy, various algorithms based on intracranial electroencephalography (iEEG) recordings have been proposed to detect the onset of seizures. Training accurate models (e.g., convolutional neural networks) for the detection of seizure onsets requires a large amount of labeled data. However, labeling can be particularly challenging for data that are highly complex or noisy, resulting in poor-quality human annotations at best.<br />
<br />
A promising alternative is to train the models in a self-supervised way using contrastive learning, which has been shown to learn, e.g., different sleep states [1]. This will not only improve seizure-onset detection accuracy but also give important insights into the features of the model. You will start with a given model for seizure detection and apply contrastive learning to it. We use the publicly available long-term dataset [2], consisting of a total of 2656 hours of iEEG recordings.<br />
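As a sketch of the kind of pretext task used in [1] (relative positioning), the following toy example shows how positive and negative window pairs can be sampled from a recording without any human labels; the window count and time thresholds are arbitrary illustrative values, not those of the cited work.<br />

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rp_pairs(n_windows, tau_pos=3, tau_neg=10, n_pairs=100):
    """Relative-positioning pretext task: windows close in time form
    positive pairs (label 1), windows far apart form negative pairs
    (label 0). A siamese network trained on these labels learns
    features without human annotations."""
    pairs = []
    for _ in range(n_pairs):
        i = int(rng.integers(0, n_windows))
        if rng.random() < 0.5:  # positive pair: |i - j| <= tau_pos
            j = int(np.clip(i + rng.integers(-tau_pos, tau_pos + 1),
                            0, n_windows - 1))
            pairs.append((i, j, 1))
        else:                   # negative pair: |i - j| >= tau_neg
            far = [j for j in range(n_windows) if abs(i - j) >= tau_neg]
            pairs.append((i, int(rng.choice(far)), 0))
    return pairs

pairs = sample_rp_pairs(n_windows=100)
print(len(pairs), "pairs;", sum(y for _, _, y in pairs), "positive")
```

The temporal-proximity labels come for free from the recording itself, which is what makes the approach attractive for iEEG data that is expensive to annotate.<br />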
<br />
<br />
===Status: Currently Not Available ===<br />
Looking for student for Master's thesis or Semester project. <br />
: Supervision: [[:User:xiaywang|Xiaying Wang]]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python & C Programming<br />
<br />
<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
<br />
===Literature===<br />
* [https://arxiv.org/abs/1911.05419] H. Banville et al., Self-supervised representation learning from electroencephalography signals, 2019 <br />
* [http://ieeg-swez.ethz.ch/] iEEG-SWEZ database<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Compression_of_iEEG_Data&diff=10247Compression of iEEG Data2024-03-10T18:01:01Z<p>Xiaywang: </p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]][[Category:NotAvailable]] [[Category:2020]][[Category:Hot]][[Category:Human Intranet]][[Category:xiaywang]][[Category:Epilepsy]]<br />
<br />
==Description==<br />
Seizure detection systems hold promise for improving the quality of life for patients with epilepsy, which afflicts nearly 1% of the world’s population. High-resolution intracranial electroencephalography (iEEG) enables the detection and localization of such seizures. When targeting a low-power implanted system, the large amount of data has to be reduced efficiently. iEEG signals are sparse and have been successfully compressed using well-established encoders such as the Discrete Wavelet Transform (DWT) or Non-Negative Matrix Factorization (NNMF) [1]. Due to their recent success, however, convolutional neural networks (CNNs) are attracting more attention and have been shown to be a viable option for compressing EEG signals [2]. This project compares deep convolutional autoencoders with state-of-the-art DWT and NNMF for compressing iEEG data from long-term recordings of epileptic patients. We use the publicly available long-term dataset [3], consisting of a total of 2656 hours of iEEG recordings.<br />
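As an illustration of the classical encoder baselines mentioned above, the following minimal Python sketch compresses a signal by keeping only the largest-magnitude Haar-wavelet coefficients. The keep ratio and thresholding scheme are illustrative choices, not the DWT configuration used in [1].<br />

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform.
    Returns approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar level: interleave reconstructed samples."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

def compress(x, keep_ratio=0.25):
    """Zero out all but the largest-magnitude wavelet coefficients."""
    approx, detail = haar_dwt(x)
    coeffs = np.concatenate([approx, detail])
    k = max(1, int(keep_ratio * len(coeffs)))
    thresh = np.sort(np.abs(coeffs))[-k]   # k-th largest magnitude
    coeffs[np.abs(coeffs) < thresh] = 0.0
    n = len(approx)
    return coeffs[:n], coeffs[n:]

# A sparse, spike-like test signal (a toy stand-in for an iEEG window).
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.05, 256)
signal[100:108] += 2.0  # an "event"

a, d = compress(signal, keep_ratio=0.25)
recon = haar_idwt(a, d)
err = np.linalg.norm(signal - recon) / np.linalg.norm(signal)
```

Because the signal energy is concentrated in a few coefficients, a 4:1 reduction of stored coefficients leaves only a small relative reconstruction error; this is the sparsity property the project's learned autoencoders compete against.<br />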
<br />
<br />
===Status: Currently Not Available ===<br />
Looking for a student for a Master's thesis or semester project. <br />
: Supervision: [[:User:xiaywang|Xiaying Wang]]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python & C Programming<br />
<br />
<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
<br />
===Literature===<br />
* [https://pubmed.ncbi.nlm.nih.gov/29040672/] M. Baud, et al., Unsupervised Learning of Spatiotemporal Interictal Discharges in Focal Epilepsy, 2018<br />
* [https://ieeexplore.ieee.org/document/8450511] A. Al-Marridi, et al., Convolutional Autoencoder Approach for EEG Compression and Reconstruction in m-Health Systems, 2018<br />
* [http://ieeg-swez.ethz.ch/] iEEG-SWEZ database<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Data_Augmentation_Techniques_in_Biosignal_Classification&diff=10246Data Augmentation Techniques in Biosignal Classification2024-03-10T17:59:09Z<p>Xiaywang: </p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]][[Category:Master Thesis]] [[Category: NotAvailable]] [[Category:2020]][[Category:Hot]][[Category:Human Intranet]][[Category:xiaywang]][[Category:Epilepsy]][[Category:BCI]][[Category:EmbeddedAI]]<br />
<br />
==Description==<br />
In many biosignal classification tasks, simple and robust classifiers are preferred over more sophisticated and potentially more accurate ones. This is mostly due to the lack of labeled training data: especially for biomedical applications, data acquisition and labeling are very expensive and time-consuming. In this thesis, the student explores data augmentation methods [1][2] to improve learning in biosignal classification tasks. Applications range from ECG to EEG [3] classification tasks. <br />
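For illustration, the sketch below applies three common time-series augmentations surveyed in [1] (jittering, per-channel scaling, and a circular time shift) to a toy multi-channel trial. The parameter values and channel/sample counts are arbitrary examples, not recommendations from the cited works.<br />

```python
import numpy as np

rng = np.random.default_rng(42)

def jitter(x, sigma=0.03):
    """Add small Gaussian noise to every sample."""
    return x + rng.normal(0.0, sigma, x.shape)

def scale(x, sigma=0.1):
    """Multiply each channel by a random factor drawn around 1."""
    factors = rng.normal(1.0, sigma, (x.shape[0], 1))
    return x * factors

def time_shift(x, max_shift=25):
    """Circularly shift the trial along the time axis."""
    s = rng.integers(-max_shift, max_shift + 1)
    return np.roll(x, s, axis=-1)

# A toy 8-channel, 500-sample "EEG" trial.
trial = rng.standard_normal((8, 500))
augmented = [jitter(trial), scale(trial), time_shift(trial)]
```

Each transform produces a label-preserving variant of the same trial, so the training set can be enlarged several-fold without new recordings.<br />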
<br />
<br />
<br />
===Status: Currently Not Available ===<br />
Looking for a student for a Master's thesis or semester project. <br />
: Supervision: [[:User:xiaywang|Xiaying Wang]]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python & C Programming<br />
<br />
<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
<br />
===Literature===<br />
* [https://arxiv.org/abs/2002.12478] Q. Wen, et al., Time Series Data Augmentation for Deep Learning: A Survey, 2020<br />
* [https://arxiv.org/abs/1801.02730] M. M. Krell, et al., Data Augmentation for Brain-Computer Interfaces: Analysis on Event-Related Potentials Data, 2018<br />
* [https://arxiv.org/abs/2006.00622] T. M. Ingolfsson, et al., EEG-TCNet: An Accurate Temporal Convolutional Network for Embedded Motor-Imagery Brain-Machine Interfaces, 2020<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Graph_neural_networks_for_epileptic_seizure_detection&diff=10245Graph neural networks for epileptic seizure detection2024-03-10T17:58:16Z<p>Xiaywang: </p>
<hr />
<div>[[File:Non-EEG Seizure.jpg|thumb|300px]]<br />
==Description==<br />
Epilepsy is a severe and prevalent chronic neurological disorder affecting 1–2% of the world’s population [1]. One-third of epilepsy patients continue to suffer from seizures despite the best possible pharmacological treatment [2]. For these patients with so-called drug-resistant epilepsy [3], various algorithms based on intracranial electroencephalography (iEEG) recordings have been proposed to detect the onset of seizures [1]. Complementary to this approach, efficient and robust algorithms are required not only to detect the seizure onset but also to identify the ictogenic (i.e., seizure-generating) brain regions for possible surgical removal [4, 5]. iEEG currently provides the best spatial resolution and the highest signal-to-noise ratio (SNR) among electrical brain activity recordings [1]. Recent studies have shown successful applications of machine learning methods [1, 6, 7, 8] using iEEG signals to distinguish two states of brain activity in patients with epilepsy, i.e., interictal (between seizures) and ictal (during seizures). These methods extract useful features and feed them to traditional supervised machine learning methods (such as random forests [1], support vector machines [6], Bayesian analysis [8], and artificial neural networks [6]) and, more recently, deep learning algorithms [7].<br />
<br />
Graph Neural Networks (GNNs) have gained increasing interest in the deep learning (DL) community thanks to their capacity to capture relational information between entities [9]. Graph theory analysis has been applied to neural signals to analyze the functional connectivity of the human brain [10]. However, conventional GNNs only capture static information, while dynamic graphs also take into account how the entities of the graph and their connections evolve over time. Spatio-temporal Graph Convolutional Networks (GCNs) have been proposed to study the spatial and temporal dependencies in a dataset, for example in traffic forecasting [11]. Continuous-time dynamic graphs have achieved impressive results in many tasks [12]. Few works have applied GNNs to EEG signals, e.g., by combining graphs with convolutional networks (GCNs), achieving state-of-the-art performance on public datasets [13]. [14] proposed a temporal GCN to tackle the task of seizure detection. However, EEG signals provide much worse temporal and spatial resolution than iEEG signals. <br />
<br />
Depending on the type of thesis, the following steps are to be accomplished:<br />
<br />
1 - Development in a high-level programming language (Python) of graph neural networks and/or convolutional neural networks for seizure detection.<br />
<br />
2 - Benchmarking of these algorithms on a large-scale dataset of epileptic patients collected at the Bern Inselspital (http://ieeg-swez.ethz.ch/).<br />
<br />
3 - Comparison with state-of-the-art methods.<br />
<br />
4 - Characterization of the algorithm on different computing platforms, from the high-level number of operations to the number of cycles and energy consumption on embedded devices (e.g. GAP8, a multi-core chip from GreenWaves Technology).<br />
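As a minimal illustration of step 1, the following NumPy sketch implements a single graph-convolution layer following the widely used normalized propagation rule H' = ReLU(D^(-1/2) (A + I) D^(-1/2) H W). The toy adjacency matrix and feature sizes are invented for the example and do not reflect an actual iEEG electrode montage.<br />

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer:
    H' = ReLU(D^-1/2 (A + I) D^-1/2 H W),
    where A is the (symmetric) adjacency matrix of the channel graph
    and H holds one feature row per graph node (channel)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)                    # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric normalization
    H_next = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W
    return np.maximum(H_next, 0.0)           # ReLU

# Toy example: 4 iEEG channels as graph nodes, 3 features per channel.
rng = np.random.default_rng(1)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)    # channel connectivity
H = rng.standard_normal((4, 3))              # per-channel features
W = rng.standard_normal((3, 2))              # learnable weights
out = gcn_layer(A, H, W)                     # shape: (4 nodes, 2 features)
```

Stacking such layers (and adding a temporal component, as in [14]) lets each channel aggregate information from its neighbors across the implanted grid.<br />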
<br />
===Status: Currently Not Available ===<br />
Looking for Master's (preferred) or Semester thesis students. <br />
: Supervision: [[:User:xiaywang|Xiaying Wang]], [mailto:alessio.burrello@unibo.it Alessio Burrello]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Deep Learning<br />
* Python (and C Programming)<br />
<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
<br />
===Literature===<br />
* [1] S. N. Baldassano, B. H. Brinkmann, H. Ung, T. Blevins, E. C. Conrad, K. Leyde, M. J. Cook, A. N. Khambhati, J. B. Wagenaar, G. A. Worrell, and B. Litt, “Crowdsourcing seizure detection: algorithm development and validation on human implanted device recordings,” Brain, vol. 140, no. 6, pp. 1680–1691, 2017.<br />
* [2] D. Schmidt and M. Sillanpää, “Evidence-based review on the natural history of the epilepsies,” Current Opinion in Neurology, vol. 25, no. 2, pp. 159–163, 2012.<br />
* [3] J. F. Tellez-Zenteno, R. Dhar, L. Hernandez-Ronquillo, and S. Wiebe, “Long-term outcomes in epilepsy surgery: antiepileptic drugs, mortality, cognitive and psychosocial aspects,” Brain, vol. 130, no. Pt 2, pp. 334–345, Feb 2007.<br />
* [4] S. Wiebe, W. T. Blume, J. P. Girvin, and M. Eliasziw, “A randomized, controlled trial of surgery for temporal-lobe epilepsy,” N. Engl. J. Med., vol. 345, no. 5, pp. 311–318, Aug 2001.<br />
* [5] C. Rummel, E. Abela, R. G. Andrzejak, M. Hauf, C. Pollo, M. Muller, C. Weisstanner, R. Wiest, and K. Schindler, “Resected Brain Tissue, Seizure Onset Zone and Quantitative EEG Measures: Towards Prediction of Post-Surgical Seizure Control,” PLoS ONE, vol. 10, no. 10, p. e0141023, 2015.<br />
* [6] A. K. Jaiswal and H. Banka, “Local pattern transformation based feature extraction techniques for classification of epileptic EEG signals,” Biomedical Signal Processing and Control, vol. 34, pp. 81–92, 2017.<br />
* [7] R. Hussein, H. Palangi, Z. J. Wang, and R. Ward, “Robust detection of epileptic seizures using deep neural networks,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), April 2018, pp. 2546–2550.<br />
* [8] W. Zhou, Y. Liu, Q. Yuan, and X. Li, “Epileptic Seizure Detection Using Lacunarity and Bayesian Linear Discriminant Analysis in Intracranial EEG,” IEEE Trans. Biomed. Eng., vol. 60, no. 12, pp. 3375–3381, Dec 2013.<br />
* [9] J. Zhou, G. Cui, Z. Zhang, C. Yang, Z. Liu, L. Wang, C. Li, and M. Sun, “Graph neural networks: A review of methods and applications,” 2019.<br />
* [10] S. Sun, X. Li, J. Zhu, Y. Wang, R. La, X. Zhang, L. Wei, and B. Hu, “Graph theory analysis of functional connectivity in major depression disorder with high-density resting state EEG data,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 27, no. 3, pp. 429–439, 2019.<br />
* [11] B. Yu, H. Yin, and Z. Zhu, “Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting,” Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, Jul 2018. [Online]. Available: http://dx.doi.org/10.24963/ijcai.2018/505<br />
* [12] E. Rossi, B. Chamberlain, F. Frasca, D. Eynard, F. Monti, and M. Bronstein, “Temporal graph networks for deep learning on dynamic graphs,” 2020.<br />
* [13] X. Lun, S. Jia, Y. Hou, Y. Shi, Y. Li, H. Yang, S. Zhang, and J. Lv, “GCNs-Net: A graph convolutional neural network approach for decoding time-resolved EEG motor imagery signals,” 2020.<br />
* [14] I. Covert, B. Krishnan, I. Najm, J. Zhan, M. Shore, J. Hixson, and M. J. Po, “Temporal graph convolutional networks for automatic seizure detection,” 2019.<br />
<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
<br />
<br />
[[#top|↑ top]]<br />
<br />
[[Category:Digital]][[Category:Semester Thesis]][[Category:Master Thesis]] [[Category: NotAvailable]] [[Category:2020]][[Category:Hot]][[Category:Human Intranet]][[Category:xiaywang]][[Category:Epilepsy]][[Category:EmbeddedAI]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Classification_of_Evoked_Local-Field_Potentials_in_Rat_Barrel_Cortex_using_Hyper-dimensional_Computing&diff=10244Classification of Evoked Local-Field Potentials in Rat Barrel Cortex using Hyper-dimensional Computing2024-03-10T17:58:02Z<p>Xiaywang: </p>
<hr />
<div>[[File:iis-project-description.jpg|400px|right|thumb]]<br />
<br />
==Description==<br />
One of the most ambitious goals of neuroscience and its neuroprosthetic applications is to interface intelligent electronic devices with the biological brain to cure neurological diseases. Neural coding is the branch of neuroscience that investigates the relationship between stimuli and neuronal responses. This emerging research field builds on our growing understanding of brain circuits and on recent technological advances in the miniaturization of implantable multielectrode arrays (MEAs) to record brain signals at high spatio-temporal resolution. Data processing is needed to decode useful information from the recorded neural activity to better understand the function of the underlying neural circuits and, in perspective, to operate neuroprosthetic devices. In this context, artificial intelligence combined with low-power embedded devices is a very promising starting point towards real-time decoding of cerebral activity with low-power digital processors for brain-machine interfacing and neuroprosthetic applications [1].<br />
<br />
Brain-inspired hyperdimensional computing (HDC) explores the emulation of cognition by computing with hypervectors as an alternative to computing with numbers. HDC has proven to be promising for energy-efficient computing applied to biosignal classification [2].<br />
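A minimal sketch of the HDC workflow (random bipolar hypervectors, bundling by element-wise majority, and nearest-prototype classification via a normalized dot product) is given below. The dimensionality, noise model, and two-class setup are toy assumptions for illustration, not the LFP encoding used in [2].<br />

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10000  # hypervector dimensionality; high D makes random vectors quasi-orthogonal

def random_hv():
    """Random bipolar (+1/-1) hypervector."""
    return rng.choice([-1, 1], size=D)

def bundle(hvs):
    """Superpose hypervectors by element-wise majority (sign of the sum)."""
    s = np.sign(np.sum(hvs, axis=0))
    s[s == 0] = 1  # break ties deterministically
    return s

def similarity(a, b):
    """Normalized dot product in [-1, 1]."""
    return np.dot(a, b) / D

def noisy(hv, flip=0.2):
    """Flip a fraction of components, simulating encoding variability."""
    mask = rng.random(D) < flip
    out = hv.copy()
    out[mask] *= -1
    return out

# "Training": each class prototype is a bundle of noisy encodings.
proto_a, proto_b = random_hv(), random_hv()
class_a = bundle([noisy(proto_a) for _ in range(5)])
class_b = bundle([noisy(proto_b) for _ in range(5)])

# Classify an unseen, noisier sample by its nearest class prototype.
query = noisy(proto_a, flip=0.3)
pred = "A" if similarity(query, class_a) > similarity(query, class_b) else "B"
```

Even with 30% of components corrupted, the query remains far closer to its own class prototype than to the other, which is the robustness property that makes HDC attractive for noisy biosignals.<br />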
<br />
This project focuses on processing evoked Local Field Potentials (LFPs) recorded from the rat barrel cortex using a miniaturized 16-by-16 MEA while stimulating the principal whisker. The sensor has been implanted in vivo and 2D images have been acquired from different cortical depths. The deflection of the whisker is performed by means of a piezo-electric bender using various stimulation amplitudes. The aim of the project is to assess the performance of HDC in classifying the different external stimuli applied to the animal.<br />
<br />
<br />
The task includes the following main sub-points:<br />
<ul><li> Understand the LFP basics and interpret the dataset.</li><br />
<li> Develop (in high-level Python or Matlab) machine learning or deep learning algorithms to classify the stimulation amplitudes or to detect the signal onset.</li><br />
<li> Map the algorithm onto the hardware (C programming on PULP, parallel computing).</li><br />
<li> Conduct in-vivo experiments to validate the method in a realistic setting.</li></ul><br />
<br />
The task is flexible and will be adapted to the student's skills and interests.<br />
<br />
<br />
===Status: Currently Not Available ===<br />
* Semester project<br />
: Supervisors: [[:User:xiaywang|Xiaying Wang]]<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Prerequisites===<br />
: Knowledge in Machine Learning (preprocessing, feature extraction, classifiers, supervised learning)<br />
: Embedded system programming<br />
: Python, C/C++, Matlab<br />
<br />
===Literature===<br />
* [https://ieeexplore.ieee.org/abstract/document/8584830] X. Wang, et al., Embedded Classification of Local Field Potentials Recorded from Rat Barrel Cortex with Implanted Multi-Electrode Array, 2018<br />
* [https://ieeexplore.ieee.org/document/8450511] A. Rahimi, et al., Hyperdimensional biosignal processing: A case study for EMG-based hand gesture recognition, 2016<br />
<br />
<br />
===IIS Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/huang.en.html Qiuting Huang] ---><br />
<!-- : [http://lne.ee.ethz.ch/en/general-information/people/professor.html Vanessa Wood] ---><br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/mluisier.en.html Mathieu Luisier] ---><br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/schenk.en.html Andreas Schenk] ---><br />
<!-- : [http://www.dz.ee.ethz.ch/en/general-information/about/staff/uid/364.html Hubert Kaeslin] ---><br />
[[#top|↑ top]]<br />
<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Design Review]]'''<br />
* '''[[Coding Guidelines]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
<br />
[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Xiaywang]]<br />
[[Category: NotAvailable]]<br />
[[Category:Hyper-dimensional Computing]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
[[Category:Analog]]<br />
[[Category:Nano-TCAD]]<br />
[[Category:Nano Electronics]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Graph_neural_networks_for_epileptic_seizure_detection&diff=10243Graph neural networks for epileptic seizure detection2024-03-10T17:56:40Z<p>Xiaywang: </p>
<hr />
<div>[[File:Non-EEG Seizure.jpg|thumb|300px]]<br />
==Description==<br />
Epilepsy is a severe and prevalent chronic neurological disorder affecting 1–2% of the world’s population [1]. One-third of epilepsy patients continue to suffer from seizures despite the best possible pharmacological treatment [2]. For these patients with so-called drug-resistant epilepsy [3], various algorithms based on intracranial electroencephalography (iEEG) recordings have been proposed to detect the onset of seizures [1]. Complementary to this approach, efficient and robust algorithms are required not only to detect the seizure onset but also to identify the ictogenic (i.e., seizure-generating) brain regions for possible surgical removal [4, 5]. iEEG currently provides the best spatial resolution and the highest signal-to-noise ratio (SNR) among electrical brain activity recordings [1]. Recent studies have shown successful applications of machine learning methods [1, 6, 7, 8] using iEEG signals to distinguish two states of brain activity in patients with epilepsy, i.e., interictal (between seizures) and ictal (during seizures). These methods extract useful features and feed them to traditional supervised machine learning methods (such as random forests [1], support vector machines [6], Bayesian analysis [8], and artificial neural networks [6]) and, more recently, deep learning algorithms [7].<br />
<br />
Graph Neural Networks (GNNs) have gained increasing interest in the deep learning (DL) community thanks to their capacity to capture relational information between entities [9]. Graph theory analysis has been applied to neural signals to analyze the functional connectivity of the human brain [10]. However, conventional GNNs only capture static information, while dynamic graphs also take into account how the entities of the graph and their connections evolve over time. Spatio-temporal Graph Convolutional Networks (GCNs) have been proposed to study the spatial and temporal dependencies in a dataset, for example in traffic forecasting [11]. Continuous-time dynamic graphs have achieved impressive results in many tasks [12]. Few works have applied GNNs to EEG signals, e.g., by combining graphs with convolutional networks (GCNs), achieving state-of-the-art performance on public datasets [13]. [14] proposed a temporal GCN to tackle the task of seizure detection. However, EEG signals provide much worse temporal and spatial resolution than iEEG signals. <br />
<br />
Depending on the type of thesis, the following steps are to be accomplished:<br />
<br />
1 - Development in a high-level programming language (Python) of graph neural networks and/or convolutional neural networks for seizure detection.<br />
<br />
2 - Benchmarking of these algorithms on a large-scale dataset of epileptic patients collected at the Bern Inselspital (http://ieeg-swez.ethz.ch/).<br />
<br />
3 - Comparison with state-of-the-art methods.<br />
<br />
4 - Characterization of the algorithm on different computing platforms, from the high-level number of operations to the number of cycles and energy consumption on embedded devices (e.g. GAP8, a multi-core chip from GreenWaves Technology).<br />
<br />
===Status: Not Available ===<br />
Looking for Master's (preferred) or Semester thesis students. <br />
: Supervision: [[:User:xiaywang|Xiaying Wang]], [mailto:alessio.burrello@unibo.it Alessio Burrello]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Deep Learning<br />
* Python (and C Programming)<br />
<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
<br />
===Literature===<br />
* [1] S. N. Baldassano, B. H. Brinkmann, H. Ung, T. Blevins, E. C. Conrad, K. Leyde, M. J. Cook, A. N. Khambhati, J. B. Wagenaar, G. A. Worrell, and B. Litt, “Crowd-sourcing seizure detection: algorithm development and validation on human implanted device recordings,” Brain, vol. 140, no. 6, pp. 1680–1691, 2017.<br />
* [2] D. Schmidt and M. Sillanpää, “Evidence-based review on the natural history of the epilepsies,” Current Opinion in Neurology, vol. 25, no. 2, pp. 159–163, 2012.<br />
* [3] J. F. Tellez-Zenteno, R. Dhar, L. Hernandez-Ronquillo, and S. Wiebe, “Long-term outcomes in epilepsy surgery: antiepileptic drugs, mortality, cognitive and psychosocial aspects,” Brain, vol. 130, no. Pt 2, pp. 334–345, Feb 2007.<br />
* [4] S. Wiebe, W. T. Blume, J. P. Girvin, and M. Eliasziw, “A randomized, controlled trial of surgery for temporal-lobe epilepsy,” N. Engl. J. Med., vol. 345, no. 5, pp. 311–318, Aug 2001.<br />
* [5] C. Rummel, E. Abela, R. G. Andrzejak, M. Hauf, C. Pollo, M. Muller, C. Weisstanner, R. Wiest, and K. Schindler, “Resected Brain Tissue, Seizure Onset Zone and Quantitative EEG Measures: Towards Prediction of Post-Surgical Seizure Control,” PLoS ONE, vol. 10, no. 10, p. e0141023, 2015.<br />
* [6] A. K. Jaiswal and H. Banka, “Local pattern transformation based feature extraction techniques for classification of epileptic EEG signals,” Biomedical Signal Processing and Control, vol. 34, pp. 81–92, 2017.<br />
* [7] R. Hussein, H. Palangi, Z. J. Wang, and R. Ward, “Robust detection of epileptic seizures using deep neural networks,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), April 2018, pp. 2546–2550.<br />
* [8] W. Zhou, Y. Liu, Q. Yuan, and X. Li, “Epileptic Seizure Detection Using Lacunarity and Bayesian Linear Discriminant Analysis in Intracranial EEG,” IEEE Trans Biomed Eng, vol. 60, no. 12, pp. 3375–3381, Dec 2013.<br />
* [9] J. Zhou, G. Cui, Z. Zhang, C. Yang, Z. Liu, L. Wang, C. Li, and M. Sun, “Graph neural networks: A review of methods and applications,” 2019.<br />
* [10] S. Sun, X. Li, J. Zhu, Y. Wang, R. La, X. Zhang, L. Wei, and B. Hu, “Graph theory analysis of functional connectivity in major depression disorder with high-density resting state EEG data,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 27, no. 3, pp. 429–439, 2019.<br />
* [11] B. Yu, H. Yin, and Z. Zhu, “Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting,” Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, Jul 2018. [Online]. Available: http://dx.doi.org/10.24963/ijcai.2018/505<br />
* [12] E. Rossi, B. Chamberlain, F. Frasca, D. Eynard, F. Monti, and M. Bronstein, “Temporal graph networks for deep learning on dynamic graphs,” 2020.<br />
* [13] X. Lun, S. Jia, Y. Hou, Y. Shi, Y. Li, H. Yang, S. Zhang, and J. Lv, “GCNs-Net: A graph convolutional neural network approach for decoding time-resolved EEG motor imagery signals,” 2020.<br />
* [14] I. Covert, B. Krishnan, I. Najm, J. Zhan, M. Shore, J. Hixson, and M. J. Po, “Temporal graph convolutional networks for automatic seizure detection,” 2019.<br />
<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
<br />
<br />
[[#top|↑ top]]<br />
<br />
[[Category:Digital]][[Category:Semester Thesis]][[Category:Master Thesis]] [[Category: NotAvailable]] [[Category:2020]][[Category:Hot]][[Category:Human Intranet]][[Category:xiaywang]][[Category:Epilepsy]][[Category:EmbeddedAI]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=User:Xiaywang&diff=10242User:Xiaywang2024-03-10T17:56:10Z<p>Xiaywang: </p>
<hr />
<div>==Xiaying Wang -- Contact Information==<br />
* '''Office''': ETZ J68.2<br />
* '''e-mail''': [mailto:xiaywang@iis.ee.ethz.ch xiaywang@iis.ee.ethz.ch]<br />
<br />
==Interests==<br />
* Bio-image and bio-signal processing<br />
* Machine learning and deep learning<br />
* Embedded systems<br />
* Low-power microcontrollers<br />
* Brain–machine interfaces<br />
<br />
==Available Projects==<br />
<DynamicPageList><br />
suppresserrors = true<br />
category = Available<br />
category = Xiaywang<br />
</DynamicPageList><br />
<br />
== Projects in Progress==<br />
<DynamicPageList><br />
suppresserrors = true<br />
category = In progress<br />
category = Xiaywang<br />
</DynamicPageList><br />
<br />
== Completed Projects==<br />
<DynamicPageList><br />
suppresserrors = true<br />
category = Completed<br />
category = Xiaywang<br />
</DynamicPageList><br />
<br />
== Not Available==<br />
<DynamicPageList><br />
suppresserrors = true<br />
category = NotAvailable<br />
category = Xiaywang<br />
</DynamicPageList><br />
<br />
[[Category:Supervisors]]</div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Short_Range_Radars_For_Biomedical_Application&diff=7359Short Range Radars For Biomedical Application2021-12-17T10:46:25Z<p>Xiaywang: </p>
<hr />
<div>[[File:Soliradar.png|600px|right|thumb]]<br />
==Short Description==<br />
<br />
Short-range radars have been shown to be potentially usable in many application scenarios, including remote sensing of biosignals in a more comfortable and easier way than wearable and contact devices. However, their performance has not been thoroughly tested and reported in practical scenarios, and very few works exploit machine learning to further improve the signal processing. PBL is building a strong collaboration with Infineon and thus has the unique opportunity to work with novel short-range radars, developing systems and applications for biomedical use. <br />
<br />
The goal of the present project is to investigate and develop a novel embedded system for acquiring and processing short-range radar data with machine learning. Depending on their background and the level of the work, the students can be involved in the algorithms that will be implemented and evaluated on real hardware, in the data acquisition, in the hardware design, in the hardware-software co-design, or in all of them (i.e. in the case of a master thesis). Moreover, the project has the goal of acquiring data from real subjects to build a dataset for training and evaluating the algorithms on the proposed application scenario.<br />
This work will be done in collaboration with Infineon on the sensor side, and with the hospitals of Lausanne or Zurich on the biomedical application. <br />
<br />
===Goal & Tasks===<br />
The project(s) will address the following challenges:<br />
* Investigate and develop techniques and methods to perform biomedical signal processing (heart rate, respiration rate, etc.) with short-range radar and energy-efficient neural networks to extract useful information (e.g. to detect early signs of some diseases).<br />
* The algorithms will be evaluated and optimized with respect to the capabilities of the embedded platform, aiming to both increase the energy efficiency and reduce the response time of the detection, with the goal of achieving an always-on system.<br />
* Acquire a large dataset of biosignals with short-range radars, possibly covering other biomedical applications as well, to be able to train and test the ML models that will be implemented on the hardware.<br />
* A complete hardware and software prototype of a smart sensor system, which includes all the subsystems (sensor acquisition, preprocessing, processing, and radio communication), will be developed to demonstrate the benefits of the proposed approach and the capability to achieve perpetual low-latency and energy-efficient operation in the challenging scenario of biomedical applications. <br />
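As a toy illustration of the kind of biosignal processing listed above, the sketch below estimates a respiration rate from a synthetic radar phase (chest-displacement) signal by picking the dominant spectral peak inside a physiological band. The sampling rate, band limits, and synthetic signal are illustrative assumptions, not Infineon-specific processing.<br />

```python
import numpy as np

def estimate_rate(phase, fs, lo=0.1, hi=0.6):
    """Estimate a periodic rate (Hz) from a radar phase signal by picking
    the strongest spectral peak inside a physiological band
    (0.1-0.6 Hz covers typical respiration rates)."""
    spectrum = np.abs(np.fft.rfft(phase - phase.mean()))
    freqs = np.fft.rfftfreq(len(phase), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic chest displacement: 0.25 Hz breathing (15 breaths/min) + noise.
fs = 20.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
phase = np.sin(2 * np.pi * 0.25 * t) + 0.05 * rng.standard_normal(t.size)
rate_hz = estimate_rate(phase, fs)
```

A heart-rate estimate would follow the same pattern with a higher band (roughly 0.8-3 Hz), although real radar returns also need displacement extraction and clutter removal first.<br />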
<br />
===Prerequisites===<br />
(''not all need to be met'' by the single candidate)<br />
* Knowledge of high- and low-level programming languages (e.g. Python, embedded C)<br />
* Knowledge of embedded systems<br />
* Knowledge of machine learning and signal processing<br />
* Motivation to learn neural networks <br />
* Motivation to build and test a real system and to acquire field data<br />
<br />
===Detailed Task Description===<br />
A detailed task description will be worked out right before the project, taking the student's interests and capabilities into account.<br />
<br />
<br />
<br />
===Status: Available ===<br />
* Looking for Bachelor, Semester and Master Project Students<br />
: Supervisors: [[:User:magnom|Michele Magno]], <br />
<br />
===Character===<br />
: 35% Theory and Algorithms<br />
: 35% Implementation<br />
: 30% Data acquisition, Verification, and Testing<br />
<br />
===IIS Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/huang.en.html Qiuting Huang] ---><br />
<!-- : [http://lne.ee.ethz.ch/en/general-information/people/professor.html Vanessa Wood] ---><br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/mluisier.en.html Mathieu Luisier] ---><br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/schenk.en.html Andreas Schenk] ---><br />
<!-- : [http://www.dz.ee.ethz.ch/en/general-information/about/staff/uid/364.html Hubert Kaeslin] ---><br />
[[#top|↑ top]]<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Design Review]]'''<br />
* '''[[Coding Guidelines]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
<br />
[[Category:Digital]]<br />
[[Category:Available]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:SmartSensors]]<br />
[[Category:EmbeddedAI]]<br />
[[Category:System Design]]<br />
[[Category:Magnom]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
[[Category:Analog]]<br />
[[Category:Nano-TCAD]]<br />
[[Category:Nano Electronics]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Neural_Network_Algorithms_and_Interfaces_with_Accelerators_for_Embedded_Platforms_with_Real_World_Applications&diff=7352Neural Network Algorithms and Interfaces with Accelerators for Embedded Platforms with Real World Applications2021-12-06T15:30:47Z<p>Xiaywang: </p>
<hr />
<div>[[File:applications.png|400px|right|thumb]]<br />
==Short Description==<br />
<br />
Neural networks and other machine learning approaches accurately predict patterns in image and time-based data when large, curated historical datasets, ample compute power (including GPUs), and large amounts of time are available. For most applications these prerequisites are not given, and it is important to achieve good predictive results with models that are quickly adapted from a standard pattern to an individual, personalized pattern.<br />
<br />
IBM Research Zurich has an opening for a student project (preferably a Master Thesis, but a Semester Project can also be considered) with the objective of achieving a usable workflow for neural network personalization, to be used for monitoring the activities of daily living of the elderly and for evaluating psychological and physiological stress in firemen, using leading hardware (HW) accelerators.<br />
<br />
We propose three different HW platforms: cloud field-programmable gate arrays (FPGAs, https://www.zurich.ibm.com/cci/cloudFPGA/), tensor processing units (TPUs), and PULP-based platforms (https://pulp-platform.org). The FPGA accelerator service is accessed via an application program interface (API) and a scheduler. The front-end API receives accelerator service calls through the OpenStack dashboard or a command-line interface. The Coral Dev Board, a compact board with an edge tensor processing unit (TPU) AI accelerator chip, speeds up machine learning. A new family of classification models (Net-EdgeTPU) identifies relationships among a baseline AI model's scaling dimensions under a fixed resource constraint (https://coral.withgoogle.com/docs/edgetpu/benchmarks/). The appropriate scaling coefficients for each dimension are then applied to scale up the model to the desired size or computational budget. The PULP platform is an open-hardware multi-core platform achieving leading-edge energy efficiency and featuring widely tunable performance, enabling battery-operated artificial intelligence (AI) in Internet of Things (IoT) applications. It is an open-source platform and comes with an SDK available on GitHub (https://github.com/pulp-platform/pulp-sdk). <br />
<br />
The project (Master Thesis) will take place at IBM Research, Zurich.<br />
<br />
==Application Scenarios==<br />
One application is stress detection from smart-device data for activities in extreme environments (https://researcher.watson.ibm.com/researcher/view_group.php?id=10009).<br />
Stress is a root cause of many modern chronic diseases; thus, wearable stress monitoring has huge potential for stress prevention and management and for improving the quality of life. However, mental stress also critically affects decision-making skills, so tools that detect mental stress early can significantly contribute to work safety in extreme conditions.<br />
Traditional stress detection methods are not practical for field deployment. However, with the availability of low-cost consumer wearable devices that monitor vital signs in real time, more practical stress detection schemes have become possible. Previously, IBM Research measured heart rate variability (HRV) with wearable devices in realistic training environments for firefighters, who were subject to physical, psychological, and combined stress. Using machine learning algorithms, different stress types were identified with 88% accuracy in 1-minute time windows. These predictions are used to help firefighters train more efficiently and experience personal limits, to help coordinators put together the right team for specific missions, and finally to help mission commanders keep their teams safe. If combined with AI at the edge to additionally extract context information, real-time closed-loop risk mitigation schemes can be implemented with latencies of a few seconds. The method is not limited to stress monitoring but can be extended to monitoring the activities of daily living of the elderly (https://www.activageproject.eu/deploymentsites/Region-Emilia-Romagna/). While personalization and analysis of activities of daily living are not as time-critical, future extensions, for example fall detection, require faster response times and profit from edge acceleration.<br />
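To make the HRV-based approach concrete, the sketch below computes two standard time-domain HRV features (SDNN and RMSSD) from a short series of RR intervals. The interval values are made up for illustration, and the actual IBM pipeline may use different or additional features.<br />

```python
import numpy as np

def hrv_features(rr_ms):
    """Two standard time-domain HRV features from RR intervals (ms):
    SDNN (overall variability) and RMSSD (beat-to-beat variability)."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return sdnn, rmssd

# Made-up RR intervals (~74 bpm) purely for illustration.
rr = [812, 790, 805, 830, 798, 815, 802]
sdnn, rmssd = hrv_features(rr)
```

Feature vectors like this, computed per time window, are the typical input to the classifiers that distinguish stress types.<br />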
<br />
===Goal & Tasks===<br />
Building on the existing model and data provided by IBM, in this project you will:<br />
* Literature study of relevant background context<br />
* Achieve a usable workflow for neural network personalization<br />
* Derive new algorithms to improve performance, also in view of running them on resource-constrained embedded platforms<br />
* Help complete the current dataset acquisition for quantitative stress assessment with additional signals like motion or audio, and carry out the initial data analysis<br />
* Data curation<br />
* With the new data, assess the applicability and portability of personalized deep learning models to generic “community” models and their dependence on personal baselines, and expand the model towards federated learning schemes<br />
<br />
===Prerequisites===<br />
(''not all need to be met'' by the single candidate)<br />
* Knowledge of Python and some cloud computing<br />
* Knowledge of machine learning and signal processing<br />
* Familiarity with deep learning frameworks (Keras, TensorFlow)<br />
* Motivation to build and test a real system and to acquire field data<br />
<br />
<br />
===Detailed Task Description===<br />
A detailed task description will be worked out right before the project, taking the student's interests and capabilities into account.<br />
<br />
<br />
<br />
===Status: Completed ===<br />
* Looking for Semester and Master Project Students<br />
: Supervisors: [[:User:magnom|Michele Magno]], [[:User:xiaywang|Xiaying Wang]]<br />
: IBM contacts: Dr. Bruno Michel, Dr. Jonas Weiss<br />
<br />
<br />
===Character===<br />
: 40% Theory and Algorithms<br />
: 40% Implementation<br />
: 20% Verification and Testing<br />
<br />
===IIS Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/huang.en.html Qiuting Huang] ---><br />
<!-- : [http://lne.ee.ethz.ch/en/general-information/people/professor.html Vanessa Wood] ---><br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/mluisier.en.html Mathieu Luisier] ---><br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/schenk.en.html Andreas Schenk] ---><br />
<!-- : [http://www.dz.ee.ethz.ch/en/general-information/about/staff/uid/364.html Hubert Kaeslin] ---><br />
[[#top|↑ top]]<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Design Review]]'''<br />
* '''[[Coding Guidelines]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
<br />
[[Category:Digital]]<br />
[[Category:Completed]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:SmartSensors]]<br />
[[Category:EmbeddedAI]]<br />
[[Category:System Design]]<br />
[[Category:Magnom]]<br />
[[Category:Xiaywang]]<br />
<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
[[Category:Analog]]<br />
[[Category:Nano-TCAD]]<br />
[[Category:Nano Electronics]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Mixed-Precision_Neural_Networks_for_Brain-Computer_Interface_Applications&diff=7101Mixed-Precision Neural Networks for Brain-Computer Interface Applications2021-10-29T10:02:49Z<p>Xiaywang: /* Description */</p>
<hr />
<div>==Description==<br />
Brain-computer interfaces (BCI) are devices and applications which seek to enable direct communication between a user's brain and a computer, e.g., by means of electroencephalography (EEG). An example application that has seen intensive research is motor imagery, which has the goal of recognizing motion behavior imagined by the user by means of BCI devices. Once fully functional, such a system would be of immeasurable value in the design of, e.g., motorized prostheses.<br />
<br />
Researchers at IIS have performed in-depth studies into bringing these capabilities to resource-constrained edge devices such as microcontrollers, setting the state of the art in terms of efficiency for low-power motor imagery systems. However, the deployed networks have so far all been run at a relatively high numerical precision of 8 bits. <br />
<br />
Recent research has shown that neural networks' operands can be aggressively quantized, i.e., represented with as few as 2 bits, with only minor accuracy drops. This has the advantage of decreasing model size (as each parameter requires less storage to be represented), and, with appropriate hardware support, decreasing inference latency at comparable power consumption, leading to significantly lower energy consumption per inference. Thanks to these theoretical insights, combined with a new generation of MCU cores developed at IIS and the University of Bologna (namely, RISC-V cores of the PULP family supporting the xPULPnn ISA extension - see references), there is thus potential for improving the efficiency of these applications even further - this is where you come in!<br />
<br />
In this project, you will enhance existing neural networks for BCI applications with mixed-precision features. The goal is to decrease the energy per inference while retaining the statistical accuracy of the original network by running layers at numerical precisions lower than 8 bits. To facilitate this process, we have developed QuantLab, a framework to make training quantized neural networks easy and simplify the exploration of the design space of topology, precision and training algorithms. A network trained and quantized in QuantLab can be exported and consumed by DORY, a deployment tool which automatically generates optimized C code to run the network in question on a PULP-family microcontroller.<br />
<br />
In this project, you will perform the following steps:<br />
# Select one or multiple BCI networks to quantize and map to PULP - e.g., EEGNet or EEG-TCNet <br />
# Port this network into QuantLab and train full-precision and 8-bit baselines<br />
# Select a quantization strategy to lower selected layers' precisions, taking into account hardware constraints such as memory hierarchy<br />
# Tune the individual layers' precisions to find a low-precision network which achieves (close to) full-precision accuracy<br />
# Map the final network to PULP using DORY (either a simulation or the physical Kraken chip - see references) and evaluate performance compared to the 8-bit baseline<br />
# Determine performance bottlenecks and tune the performance - either by introducing improved kernels to DORY's mapping process, or by replacing the implementations of certain layers with hand-written kernels.<br />
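To illustrate the core idea behind step 3, the sketch below applies symmetric uniform "fake" quantization to a random weight tensor at 2 and 8 bits. This is a generic textbook scheme, not QuantLab's actual implementation, and the tensor shapes are arbitrary.<br />

```python
import numpy as np

def fake_quantize(w, n_bits):
    """Symmetric uniform 'fake' quantization: snap weights onto a signed
    integer grid with 2**(n_bits-1) - 1 positive levels, then map back to
    floats, mimicking low-precision inference during training."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.clip(np.round(w / scale), -qmax, qmax) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))
w2 = fake_quantize(w, 2)   # only levels {-s, 0, +s} remain
w8 = fake_quantize(w, 8)   # stays close to the original weights
```

The precision/accuracy trade-off explored in this project comes directly from this effect: fewer bits mean fewer representable levels, hence a larger quantization error that the training procedure must compensate for.<br />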
<br />
<br />
===Status: Available ===<br />
Looking for 1-2 students for a Semester project. If you have any questions, suggestions for a related (or even unrelated) project or are simply curious about what we do, please do not hesitate to contact us!<br />
<br />
: Supervision: [[:User:Georg|Georg Rutishauser]], [[:User:Xiaywang|Xiaying Wang]]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python<br />
* C<br />
===Character===<br />
: 20% Theory<br />
: 80% Implementation<br />
<br />
===Literature===<br />
* [http://asic.ee.ethz.ch/2021/Kraken.html] Kraken in the IIS chip gallery<br />
* [https://arxiv.org/abs/1905.13082] M. Rusci et al., Memory-Driven Mixed Low Precision Quantization For Enabling Deep Network Inference On Microcontrollers<br />
* [https://arxiv.org/abs/2011.14325] A. Garofalo et al., XpulpNN: Enabling Energy Efficient and Flexible Inference of Quantized Neural Network on RISC-V based IoT End Nodes<br />
* [https://arxiv.org/abs/2006.00622] T. Mar Ingolfsson et al., EEG-TCNet: An Accurate Temporal Convolutional Network for Embedded Motor-Imagery Brain-Machine Interfaces<br />
* [https://arxiv.org/abs/2004.00077] X. Wang et al., An Accurate EEGNet-based Motor-Imagery Brain-Computer Interface for Low-Power Edge Computing<br />
* [https://arxiv.org/abs/2004.11690] T. Schneider et al., Q-EEGNet: an Energy-Efficient 8-bit Quantized Parallel EEGNet Implementation for Edge Motor-Imagery Brain--Machine Interfaces<br />
<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
===Practical Details===<br />
<!-- * '''[http://n.ethz.ch/~georgr/project-descriptions/FS21/task_descr_tnn_on_pulp.pdf Detailed Project Description]''' --><br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---><br />
<br />
[[Category:Georg]]<br />
[[Category:Scheremo]]<br />
[[Category:Deep Learning Projects]]<br />
[[Category:Embedded Coding]]<br />
[[Category:PULP]]<br />
[[Category:Digital]][[Category:Semester Thesis]] [[Category:Master Thesis]] [[Category:Available]] [[Category:2021]][[Category:Hot]]</div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Hardware/software_codesign_neural_decoding_algorithm_for_%E2%80%9Cneural_dust%E2%80%9D&diff=6603Hardware/software codesign neural decoding algorithm for “neural dust”2021-06-07T15:19:14Z<p>Xiaywang: </p>
<hr />
<div>[[File:ReMote.png|thumb|600px]]<br />
=== Description ===<br />
A brain-machine interface (BMI) acquires brain activity and translates the information into actions to control software and hardware such as computers and prostheses. As a potential treatment for many neurological diseases, it has attracted great attention in academia and industry.<br />
<br />
Non-invasive BMIs, mostly based on EEG signals, do not require surgery to implant sensing nodes in the brain. However, they strongly suffer from low spatial resolution and a low signal-to-noise ratio, which often makes them inaccurate. Implantable BMIs, on the other hand, offer high resolution and high signal quality, making them necessary for applications where decoding accuracy is crucial.<br />
<br />
Minimizing the damage to the brain is one of the primary goals of implantable BMI systems. Most existing systems are bulky and rely on wires for communication and power transfer. Wireless, miniaturized, and implantable BMI systems (sometimes called “neural recording dust”) hold the promise of restoring motor function while reducing the damage caused by implantation [1][2]. However, such a system also poses stringent constraints on power consumption and area.<br />
<br />
Researchers have developed mm-scale neural probes [1][2] and efficient algorithms [3][4] to tackle the problem. Our goal is to co-design, across hardware and software, novel deep learning algorithms for neural decoding based on the spiking band power (SBP) information from the mm-scale neural probe.<br />
<br />
In this project, the student will:<br />
1. Study prior art<br />
2. Get familiar with the dataset and the system<br />
3. Explore ML algorithms (CNNs, RNNs, SNNs, …)<br />
4. Hardware/software co-design of efficient algorithms<br />
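As an illustration of steps 3 and 4, the pipeline from raw neural data to decoded kinematics can be sketched with a simple baseline: SBP-like band-power features followed by a ridge-regression decoder. Everything below (the sampling rate, band edges, synthetic data, and the ridge baseline) is an illustrative assumption, not the project's actual method:<br />

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 2000.0                    # assumed sampling rate (Hz)
n_ch, n_samp, win = 8, 20000, 100

def sbp(x, fs, lo=300.0, hi=1000.0, win=100):
    # Band-limit each channel with an FFT mask (roughly the 300-1000 Hz
    # spiking band described in [3]), rectify, and average over
    # non-overlapping windows of `win` samples.
    freqs = np.fft.rfftfreq(x.shape[-1], d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    band = np.fft.irfft(np.fft.rfft(x, axis=-1) * mask, n=x.shape[-1], axis=-1)
    return np.abs(band).reshape(x.shape[0], -1, win).mean(axis=-1)

# Synthetic "kinematics" (one value per window) and synthetic raw data
# whose high-frequency amplitude tracks them.
y = np.sin(np.linspace(0.0, 8.0 * np.pi, n_samp // win))
raw = rng.standard_normal((n_ch, n_samp)) * (1.0 + 0.5 * np.repeat(y, win))

X = sbp(raw, fs).T             # (windows, channels) feature matrix
X = X - X.mean(axis=0)         # remove per-channel baseline
w = np.linalg.solve(X.T @ X + 1e-2 * np.eye(n_ch), X.T @ y)  # ridge fit
corr = np.corrcoef(y, X @ w)[0, 1]
print(f"decoding correlation: {corr:.2f}")
```

Step 4 would then trade the accuracy of such a model against its memory footprint, operation count, and energy on the target hardware.<br />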
<br />
===Status: Available===<br />
:Looking for master or semester thesis students<br />
:Supervisor: [[:User:Liaoj | Jiawei Liao]], [[:User:xiaywang|Xiaying Wang]]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Deep Learning<br />
* Python<br />
* VLSI is a plus<br />
<br />
===Character===<br />
* 20% Literature review<br />
* 20% Theory<br />
* 60% Programming<br />
<br />
===Professor===<br />
Prof. Taekwang Jang <[mailto:tjang@ethz.ch tjang@ethz.ch]><br />
<br />
=== Reference===<br />
[1] J. Lim et al., “26.9 A 0.19×0.17mm² Wireless Neural Recording IC for Motor Prediction with Near-Infrared-Based Power and Data Telemetry,” in 2020 IEEE International Solid-State Circuits Conference - (ISSCC), San Francisco, CA, USA, Feb. 2020, pp. 416–418. doi: 10.1109/ISSCC19947.2020.9063005.<br />
<br />
[2] E. Moon et al., “Bridging the ‘Last Millimeter’ Gap of Brain-Machine Interfaces via Near-Infrared Wireless Power Transfer and Data Communications,” ACS Photonics, vol. 8, no. 5, pp. 1430–1438, May 2021, doi: 10.1021/acsphotonics.1c00160.<br />
<br />
[3] S. R. Nason et al., “A low-power band of neuronal spiking activity dominated by local single units improves the performance of brain–machine interfaces,” Nat Biomed Eng, vol. 4, no. 10, pp. 973–983, Oct. 2020, doi: 10.1038/s41551-020-0591-0.<br />
<br />
[4] A. K. Vaskov et al., “Cortical Decoding of Individual Finger Group Motions Using ReFIT Kalman Filter,” Front. Neurosci., vol. 12, 2018, doi: 10.3389/fnins.2018.00751.<br />
<br />
[[#top|↑ top]]<br />
[[Category:EECIS]]<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:2021]]<br />
[[Category:Liaoj]]<br />
[[Category:xiaywang]]<br />
[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:Hot]]<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''</div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Digital&diff=6602Digital2021-06-07T15:19:00Z<p>Xiaywang: /* Active Projects */</p>
<hr />
<div>__NOTOC__<br />
<imagemap><br />
Image:Project_Map_2020_11.png|780px<br />
rect 0 0 260 130 [[High Performance SoCs]]<br />
rect 520 0 780 130 [[Energy Efficient SoCs]]<br />
rect 0 130 260 260 [[Hardware Acceleration]]<br />
rect 520 130 780 260 [[Biomedical Circuits, Systems, and Applications]]<br />
rect 0 260 260 390 [[SW/HW Predictability and Security]]<br />
rect 260 260 520 390 [[Deep Learning Projects|Deep Learning Acceleration]]<br />
rect 520 260 780 390 [[Embedded Systems and autonomous UAVs]]<br />
default [[Digital]]<br />
</imagemap><br />
<br />
===Topic List===<br />
* '''[[High Performance SoCs]]'''<br />
** '''[[Heterogeneous Acceleration Systems]]'''<br />
* '''[[Energy Efficient SoCs]]'''<br />
* '''[[Hardware Acceleration]]'''<br />
* '''[[Biomedical Circuits, Systems, and Applications]]'''<br />
** '''[[Human Intranet]]'''<br />
** '''[[Digital Medical Ultrasound Imaging]]'''<br />
* '''[[SW/HW Predictability and Security]]'''<br />
** '''[[Predictable Execution]]'''<br />
** '''[[Cryptography|Cryptographic Hardware]]'''<br />
* '''[[Deep Learning Projects|Machine Learning / Deep Learning]]'''<br />
** '''[[Event-Driven Computing]]'''<br />
* '''[[Embedded Systems and autonomous UAVs]]'''<br />
** '''[[Energy Efficient Autonomous UAVs]]'''<br />
** '''[[Low Power Embedded Systems]]'''<br />
** '''[[Embedded Artificial Intelligence:Systems And Applications]]'''<br />
* '''[[ASIC Design Projects]]'''<br />
<br />
==External Collaborations==<br />
<imagemap><br />
Image:Project_Map_2020_11_external.png|520px<br />
rect 0 65 260 195 [[Biomedical System on Chips]]<br />
rect 260 65 540 195 [[Wireless Communication Systems for the IoT]]<br />
rect 0 195 260 324 [[IBM Research]]<br />
rect 260 195 540 324 [[Students' International Competitions: F1(AMZ), Swissloop, Educational Rockets]]<br />
</imagemap><br />
<br />
===Topic List===<br />
* '''[[Biomedical System on Chips]]'''<br />
* '''[[Wireless Communication Systems for the IoT]]'''<br />
* '''[[IBM Research]]'''<br />
* '''[[Huawei_Research|Huawei Research - Future Computing Laboratory (Computer Architecture and Machine Learning Acceleration)]]'''<br />
* '''[[Students' International Competitions: F1(AMZ), Swissloop, Educational Rockets]]'''<br />
* '''[[Physics is looking for PULP]]'''<br />
<br />
==Active Projects==<br />
These are the projects that are currently active:<br />
<DynamicPageList><br />
category = In progress<br />
category = Digital<br />
</DynamicPageList><br />
<br />
==Completed Projects==<br />
These are projects that were completed in the last few years:<br />
===2019===<br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = 2019<br />
suppresserrors=true<br />
</DynamicPageList><br />
===2018===<br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = 2018<br />
suppresserrors=true<br />
</DynamicPageList><br />
===2017===<br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = 2017<br />
suppresserrors=true<br />
</DynamicPageList><br />
===2016===<br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = 2016<br />
suppresserrors=true<br />
</DynamicPageList><br />
===2015===<br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = 2015<br />
suppresserrors=true<br />
</DynamicPageList><br />
===2014===<br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = 2014<br />
</DynamicPageList><br />
===2013===<br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = 2013<br />
</DynamicPageList><br />
===2012===<br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = 2012<br />
</DynamicPageList><br />
===2011===<br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = 2011<br />
</DynamicPageList><br />
<br />
===ASICs===<br />
<DynamicPageList><br />
category = ASIC<br />
category = Available<br />
</DynamicPageList><br />
<br />
[[Category:Computer Architecture]]<br />
[[Category:Acceleration and Transprecision]]<br />
[[Category:Heterogeneous Acceleration Systems]]<br />
[[Category:Event-Driven Computing]]<br />
[[Category:Predictable Execution]]<br />
[[Category:Low Power Embedded Systems]]<br />
[[Category:Embedded Artificial Intelligence:Systems And Applications]]<br />
[[Category:Transient Computing]]<br />
[[Category:System on Chips for IoTs]]<br />
[[Category:Energy Efficient Autonomous UAVs]]<br />
[[Category:Biomedical System on Chips]]<br />
[[Category:Digital Medical Ultrasound Imaging]]<br />
[[Category:Cryptography]]<br />
[[Category:Deep Learning Acceleration]]<br />
[[Category:Human Intranet]]</div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Digital&diff=6601Digital2021-06-07T15:18:48Z<p>Xiaywang: /* Active Projects */</p>
Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Graph_neural_networks_for_epileptic_seizure_detection&diff=6484Graph neural networks for epileptic seizure detection2021-03-12T11:24:48Z<p>Xiaywang: </p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]][[Category:Master Thesis]] [[Category:Available]] [[Category:2020]][[Category:Hot]][[Category:Human Intranet]][[Category:xiaywang]][[Category:Epilepsy]]<br />
[[File:Non-EEG Seizure.jpg|thumb|300px]]<br />
==Description==<br />
Epilepsy is a severe and prevalent chronic neurological disorder affecting 1–2% of the world’s population [1]. One-third of epilepsy patients continue to suffer from seizures despite best possible pharmacological treatment [2]. For these patients with so-called drug-resistant epilepsy [3], various algorithms based on intracranial electroencephalography (iEEG) recording are proposed to detect the onset of seizures [1]. Complementary to this approach, efficient and robust algorithms are required to not only detect the seizure onset but also to identify the ictogenic (i.e. seizure-generating) brain regions for possible surgical removal [4, 5]. The iEEG currently provides the best spatial resolution and the highest signal-to-noise ratio (SNR) of electrical brain activity recordings [1]. Recent studies have shown successful applications of machine learning methods [1, 6, 7, 8] using iEEG signals to detect two distinct states of brain activity in patients with epilepsy, i.e., interictal (= between seizures) and ictal (= during seizures). These methods are based on extracting useful features followed by traditional supervised machine learning methods (such as random forest [1], support vector machines [6], Bayesian analysis [8], artificial neural networks [6]), and more recently deep learning algorithms [7].<br />
<br />
Graph Neural Networks (GNNs) have gained increasing interest in the deep learning (DL) community thanks to their capacity for capturing relational information between entities [9]. Graph theory analysis has been applied to neural signals to analyze functional connectivity in the human brain [10]. However, conventional GNNs only capture static information, while dynamic graphs also take into consideration the relationship over time between different entities of the graph and their connections. Spatio-temporal graph convolutional networks (GCNs) have been proposed to model the spatial and temporal dependencies in the data, for example in traffic forecasting [11]. Continuous-time dynamic graphs have achieved impressive results in many tasks [12]. A few works have applied GNNs to EEG signals, e.g. by combining graphs with convolutional networks (GCNs), achieving state-of-the-art performance on public datasets [13]. [14] proposed a temporal GCN to tackle the task of seizure detection. However, EEG signals provide much worse temporal and spatial resolution than iEEG signals. <br />
<br />
Depending on the type of thesis, the following steps are to be accomplished:<br />
<br />
1 - Development in a high-level programming language (python) of graph neural networks and/or convolutional neural networks for seizure detection.<br />
<br />
2 - Benchmarking of these algorithms on a large-scale iEEG dataset of epilepsy patients collected at the Inselspital Bern (http://ieeg-swez.ethz.ch/).<br />
<br />
3 - Comparison with state-of-the-art methods.<br />
<br />
4 - Characterization of the algorithm on different computing platforms, from the high-level number of operations to the number of cycles and energy consumption on embedded devices (e.g. GAP8, a multi-core chip from GreenWaves Technology).<br />
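As a minimal, self-contained illustration of step 1, the sketch below builds a functional-connectivity graph from one window of (synthetic) multi-channel iEEG via Pearson correlation and applies a single graph-convolution layer, H' = ReLU(Â X W) with Â = D^(-1/2)(A + I)D^(-1/2), the symmetric normalization commonly used for GCNs. All sizes, the correlation threshold, and the random weights are illustrative assumptions rather than the thesis method:<br />

```python
import numpy as np

rng = np.random.default_rng(42)
n_ch, n_samp, n_feat = 16, 1024, 8

window = rng.standard_normal((n_ch, n_samp))   # one iEEG window (channels x samples)

# Functional connectivity: absolute Pearson correlation, thresholded
corr = np.abs(np.corrcoef(window))
adj = (corr > 0.05).astype(float)
np.fill_diagonal(adj, 0.0)

# Symmetrically normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}
a_tilde = adj + np.eye(n_ch)
d_inv_sqrt = 1.0 / np.sqrt(a_tilde.sum(axis=1))
a_hat = d_inv_sqrt[:, None] * a_tilde * d_inv_sqrt[None, :]

# One graph-convolution layer over per-channel node features
X = rng.standard_normal((n_ch, n_feat))        # placeholder node features
W = rng.standard_normal((n_feat, 4)) * 0.1     # untrained, random weights
H = np.maximum(a_hat @ X @ W, 0.0)             # ReLU(A_hat X W)

print(H.shape)
```

In practice the node features would be per-channel iEEG descriptors (e.g. band energies) and W would be trained on the labeled interictal/ictal windows; step 4 would then profile the resulting operation count and energy on the embedded target.<br />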
<br />
===Status: Available===<br />
Looking for Master's (preferred) or Semester thesis students. <br />
: Supervision: [[:User:xiaywang|Xiaying Wang]], [mailto:alessio.burrello@unibo.it Alessio Burrello]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Deep Learning<br />
* Python (and C Programming)<br />
<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
<br />
===Literature===<br />
* [1] S. N. Baldassano, B. H. Brinkmann, H. Ung, T. Blevins, E. C. Conrad, K. Leyde, M. J. Cook, A. N. Khambhati, J. B. Wagenaar, G. A. Worrell, and B. Litt, “Crowdsourcing seizure detection: algorithm development and validation on human implanted device recordings,” Brain, vol. 140, no. 6, pp. 1680–1691, 2017.<br />
* [2] D. Schmidt and M. Sillanpää, “Evidence-based review on the natural history of the epilepsies,” Current Opinion in Neurology, vol. 25, no. 2, pp. 159–163, 2012.<br />
* [3] J. F. Tellez-Zenteno, R. Dhar, L. Hernandez-Ronquillo, and S. Wiebe, “Long-term outcomes in epilepsy surgery: antiepileptic drugs, mortality, cognitive and psychosocial aspects,” Brain, vol. 130, no. Pt 2, pp. 334–345, Feb 2007.<br />
* [4] S. Wiebe, W. T. Blume, J. P. Girvin, and M. Eliasziw, “A randomized, controlled trial of surgery for temporal-lobe epilepsy,” N. Engl. J. Med., vol. 345, no. 5, pp. 311–318, Aug 2001.<br />
* [5] C. Rummel, E. Abela, R. G. Andrzejak, M. Hauf, C. Pollo, M. Muller, C. Weisstanner, R. Wiest, and K. Schindler, “Resected Brain Tissue, Seizure Onset Zone and Quantitative EEG Measures: Towards Prediction of Post-Surgical Seizure Control,” PLoS ONE, vol. 10, no. 10, p. e0141023, 2015.<br />
* [6] A. K. Jaiswal and H. Banka, “Local pattern transformation based feature extraction techniques for classification of epileptic EEG signals,” Biomedical Signal Processing and Control, vol. 34, pp. 81–92, 2017.<br />
* [7] R. Hussein, H. Palangi, Z. J. Wang, and R. Ward, “Robust detection of epileptic seizures using deep neural networks,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), April 2018, pp. 2546–2550.<br />
* [8] W. Zhou, Y. Liu, Q. Yuan, and X. Li, “Epileptic Seizure Detection Using Lacunarity and Bayesian Linear Discriminant Analysis in Intracranial EEG,” IEEE Trans Biomed Eng, vol. 60, no. 12, pp. 3375–3381, Dec 2013.<br />
* [9] J. Zhou, G. Cui, Z. Zhang, C. Yang, Z. Liu, L. Wang, C. Li, and M. Sun, “Graph neural networks: A review of methods and applications,” 2019.<br />
* [10] S. Sun, X. Li, J. Zhu, Y. Wang, R. La, X. Zhang, L. Wei, and B. Hu, “Graph theory analysis of functional connectivity in major depression disorder with high-density resting state EEG data,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 27, no. 3, pp. 429–439, 2019.<br />
* [11] B. Yu, H. Yin, and Z. Zhu, “Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting,” Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, Jul 2018. [Online]. Available: http://dx.doi.org/10.24963/ijcai.2018/505<br />
* [12] E. Rossi, B. Chamberlain, F. Frasca, D. Eynard, F. Monti, and M. Bronstein, “Temporal graph networks for deep learning on dynamic graphs,” 2020.<br />
* [13] X. Lun, S. Jia, Y. Hou, Y. Shi, Y. Li, H. Yang, S. Zhang, and J. Lv, “GCNs-Net: A graph convolutional neural network approach for decoding time-resolved EEG motor imagery signals,” 2020.<br />
* [14] I. Covert, B. Krishnan, I. Najm, J. Zhan, M. Shore, J. Hixson, and M. J. Po, “Temporal graph convolutional networks for automatic seizure detection,” 2019.<br />
<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Graph_neural_networks_for_epileptic_seizure_detection&diff=6483Graph neural networks for epileptic seizure detection2021-03-12T11:24:18Z<p>Xiaywang: </p>
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Graph_neural_networks_for_epileptic_seizure_detection&diff=6482Graph neural networks for epileptic seizure detection2021-03-12T11:24:06Z<p>Xiaywang: Created page with "Category:DigitalCategory:Semester Thesis Category:Available Category:2020Category:HotCategory:Human IntranetCategory:xiaywangCategory:Epilepsy..."</p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]] [[Category:Available]] [[Category:2020]][[Category:Hot]][[Category:Human Intranet]][[Category:xiaywang]][[Category:Epilepsy]]<br />
[[File:Non-EEG Seizure.jpg|thumb|300px]]<br />
==Description==<br />
Epilepsy is a severe and prevalent chronic neurological disorder affecting 1–2% of the world’s population [1]. One-third of epilepsy patients continue to suffer from seizures despite the best possible pharmacological treatment [2]. For these patients with so-called drug-resistant epilepsy [3], various algorithms based on intracranial electroencephalography (iEEG) recordings have been proposed to detect the onset of seizures [1]. Complementary to this approach, efficient and robust algorithms are required not only to detect the seizure onset but also to identify the ictogenic (i.e., seizure-generating) brain regions for possible surgical removal [4, 5]. iEEG currently provides the best spatial resolution and the highest signal-to-noise ratio (SNR) of all electrical brain activity recordings [1]. Recent studies have shown successful applications of machine learning methods [1, 6, 7, 8] using iEEG signals to detect two distinct states of brain activity in patients with epilepsy, i.e., interictal (between seizures) and ictal (during seizures). These methods extract useful features and feed them to traditional supervised machine learning methods (such as random forests [1], support vector machines [6], Bayesian analysis [8], and artificial neural networks [6]) or, more recently, to deep learning algorithms [7].<br />
<br />
Graph Neural Networks (GNNs) have gained increasing interest in the deep learning (DL) community thanks to their capacity to capture relational information between entities [9]. Graph theory analysis has been applied to neural signals to study functional connectivity in the human brain [10]. However, conventional GNNs only capture static information, while dynamic graphs also take into account how the entities of the graph and their connections evolve over time. Spatio-temporal Graph Convolutional Networks (GCNs) have been proposed to model the spatial and temporal dependencies in a dataset, for example in traffic forecasting [11]. Continuous-time dynamic graphs have achieved impressive results on many tasks [12]. Few works have applied GNNs to EEG signals, e.g., by combining graphs with convolutional networks (GCNs), achieving state-of-the-art performance on public datasets [13]. The authors of [14] proposed a temporal GCN to tackle the task of seizure detection. However, EEG signals provide much worse temporal and spatial resolution than iEEG signals. <br />
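A typical first step when applying GNNs to multi-channel recordings, as in the functional-connectivity analyses cited above, is to turn the channels into a graph. The snippet below is only a minimal illustrative sketch, not the project's prescribed method: it assumes Pearson correlation as the connectivity measure and an arbitrary threshold of 0.3.<br />

```python
import numpy as np

def connectivity_graph(eeg, threshold=0.3):
    """Build an electrode graph from pairwise Pearson correlations.

    eeg: array of shape (channels, samples). Returns a binary adjacency
    matrix with an edge wherever |correlation| exceeds the threshold.
    """
    corr = np.corrcoef(eeg)                      # (channels, channels)
    A = (np.abs(corr) > threshold).astype(float)
    np.fill_diagonal(A, 0.0)                     # no self-loops
    return A

rng = np.random.default_rng(1)
eeg = rng.standard_normal((6, 256))              # toy data: 6 channels, 256 samples
A = connectivity_graph(eeg)
print(A.shape)                                   # (6, 6)
```

The resulting adjacency matrix can then serve as the graph structure consumed by a graph neural network.<br />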
<br />
Depending on the type of thesis, the following steps are to be accomplished:<br />
<br />
1 - Development in a high-level programming language (Python) of graph neural networks and/or convolutional neural networks for seizure detection.<br />
<br />
2 - Benchmarking of these algorithms on a large-scale dataset collected by the Bern Inselspital about epileptic patients (http://ieeg-swez.ethz.ch/).<br />
<br />
3 - Comparison with state-of-the-art methods.<br />
<br />
4 - Characterization of the algorithm on different computing platforms, from the high-level number of operations to the number of cycles and energy consumption on embedded devices (e.g. GAP8, a multi-core chip from GreenWaves Technology).<br />
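To make step 1 concrete, the sketch below shows a single graph-convolution layer of the kind used in [13, 14]. It is an illustrative NumPy implementation under simplifying assumptions, not the project's reference code; the electrode graph, feature sizes, and weights are made-up toy values.<br />

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)                       # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # symmetric normalization
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)      # ReLU activation

# Toy example: 4 electrodes (graph nodes) with 8 features each
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)       # electrode connectivity
H = rng.standard_normal((4, 8))                 # node feature matrix
W = rng.standard_normal((8, 16))                # learnable weight matrix
out = gcn_layer(A, H, W)
print(out.shape)                                # (4, 16)
```

In a real model, several such layers would be stacked and trained end-to-end, and the features per node could be, e.g., band powers or raw iEEG windows.<br />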
<br />
===Status: Available ===<br />
Looking for Master's (preferred) or Semester thesis students. <br />
: Supervision: [[:User:xiaywang|Xiaying Wang]], [mailto:alessio.burrello@unibo.it Alessio Burrello]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Deep Learning<br />
* Python (and C Programming)<br />
<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
<br />
===Literature===<br />
* [1] S. N. Baldassano, B. H. Brinkmann, H. Ung, T. Blevins, E. C. Conrad, K. Leyde, M. J. Cook, A. N. Khambhati, J. B. Wagenaar, G. A. Worrell, and B. Litt, “Crowdsourcing seizure detection: algorithm development and validation on human implanted device recordings,” Brain, vol. 140, no. 6, pp. 1680–1691, 2017.<br />
* [2] D. Schmidt and M. Sillanpää, “Evidence-based review on the natural history of the epilepsies,” Current Opinion in Neurology, vol. 25, no. 2, pp. 159–163, 2012.<br />
* [3] J. F. Tellez-Zenteno, R. Dhar, L. Hernandez-Ronquillo, and S. Wiebe, “Long-term outcomes in epilepsy surgery: antiepileptic drugs, mortality, cognitive and psychosocial aspects,” Brain, vol. 130, no. Pt 2, pp. 334–345, Feb 2007.<br />
* [4] S. Wiebe, W. T. Blume, J. P. Girvin, and M. Eliasziw, “A randomized, controlled trial of surgery for temporal-lobe epilepsy,” N. Engl. J. Med., vol. 345, no. 5, pp. 311–318, Aug 2001.<br />
* [5] C. Rummel, E. Abela, R. G. Andrzejak, M. Hauf, C. Pollo, M. Muller, C. Weisstanner, R. Wiest, and K. Schindler, “Resected Brain Tissue, Seizure Onset Zone and Quantitative EEG Measures: Towards Prediction of Post-Surgical Seizure Control,” PLoS ONE, vol. 10, no. 10, p. e0141023, 2015.<br />
* [6] A. K. Jaiswal and H. Banka, “Local pattern transformation based feature extraction techniques for classification of epileptic EEG signals,” Biomedical Signal Processing and Control, vol. 34, pp. 81–92, 2017.<br />
* [7] R. Hussein, H. Palangi, Z. J. Wang, and R. Ward, “Robust detection of epileptic seizures using deep neural networks,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), April 2018, pp. 2546–2550.<br />
* [8] W. Zhou, Y. Liu, Q. Yuan, and X. Li, “Epileptic Seizure Detection Using Lacunarity and Bayesian Linear Discriminant Analysis in Intracranial EEG,” IEEE Trans. Biomed. Eng., vol. 60, no. 12, pp. 3375–3381, Dec 2013.<br />
* [9] J. Zhou, G. Cui, Z. Zhang, C. Yang, Z. Liu, L. Wang, C. Li, and M. Sun, “Graph neural networks: A review of methods and applications,” 2019.<br />
* [10] S. Sun, X. Li, J. Zhu, Y. Wang, R. La, X. Zhang, L. Wei, and B. Hu, “Graph theory analysis of functional connectivity in major depression disorder with high-density resting state EEG data,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 27, no. 3, pp. 429–439, 2019.<br />
* [11] B. Yu, H. Yin, and Z. Zhu, “Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting,” Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, Jul 2018. [Online]. Available: http://dx.doi.org/10.24963/ijcai.2018/505<br />
* [12] E. Rossi, B. Chamberlain, F. Frasca, D. Eynard, F. Monti, and M. Bronstein, “Temporal graph networks for deep learning on dynamic graphs,” 2020.<br />
* [13] X. Lun, S. Jia, Y. Hou, Y. Shi, Y. Li, H. Yang, S. Zhang, and J. Lv, “GCNs-Net: A graph convolutional neural network approach for decoding time-resolved EEG motor imagery signals,” 2020.<br />
* [14] I. Covert, B. Krishnan, I. Najm, J. Zhan, M. Shore, J. Hixson, and M. J. Po, “Temporal graph convolutional networks for automatic seizure detection,” 2019.<br />
<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Deep_neural_networks_for_seizure_detection&diff=6481Deep neural networks for seizure detection2021-03-12T11:21:23Z<p>Xiaywang: </p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]] [[Category:Available]] [[Category:2020]][[Category:Hot]][[Category:Human Intranet]][[Category:xiaywang]][[Category:Herschmi]][[Category:Epilepsy]]<br />
[[File:Non-EEG Seizure.jpg|thumb|300px]]<br />
==Description==<br />
Epilepsy is one of the most prevalent chronic neurological disorders. One-third of patients with epilepsy continue to suffer from seizures despite pharmacological therapy. For these patients with drug-resistant epilepsy, efficient algorithms for seizure detection are needed, in particular during pre-surgical monitoring. Many efforts have been pursued in this direction, with the fabrication of numerous ASICs and the development of advanced machine- and deep-learning methods to optimize both the energy efficiency of devices meant to operate for years and the accuracy of seizure detection.<br />
For time-series analysis, a wide variety of deep-learning approaches is emerging for efficient processing, such as InceptionTime [1], MultiScale-CNN [2], and Temporal Convolutional Networks (TCNs) [3].<br />
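The core building block of the TCNs mentioned above [3] is the causal dilated 1-D convolution, where the output at time t depends only on present and past samples. The snippet below is a minimal illustrative sketch in plain NumPy, not code from the cited papers; the identity kernel is a made-up toy value chosen only so the behaviour is easy to check.<br />

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation=1):
    """Causal dilated 1-D convolution: y[t] = sum_j w[j] * x[t - j*dilation]."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])   # left-pad so the output is causal
    return np.array([
        sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

# Toy channel: 12 samples, kernel size 3, dilation 2
x = np.arange(12, dtype=float)
w = np.array([1.0, 0.0, 0.0])    # identity kernel: output should equal input
y = causal_dilated_conv1d(x, w, dilation=2)
print(np.allclose(y, x))         # True
```

Stacking such layers with exponentially growing dilation gives the large receptive field that makes TCNs competitive with recurrent models on long biosignal windows.<br />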
<br />
The thesis is a 6-month full-time project comprising the following steps:<br />
<br />
1 - Development in a high-level programming language (Python) of different deep learning algorithms for time series classification. In particular, the initial targets are InceptionTime, MultiScale-CNN, Temporal Convolutional Networks, and a bidirectional LSTM.<br />
<br />
2 - Benchmarking of these algorithms on a large-scale dataset collected by the Bern Inselspital about epileptic patients [4].<br />
<br />
3 - Comparison with state-of-the-art methods (Local Binary Patterns + Hyperdimensional Computing [5], Short-Time Fourier Transform + Convolutional Neural Networks, and classical machine learning methods).<br />
<br />
4 - Characterization of the algorithm on different computing platforms, from the high-level number of operations to the number of cycles and energy consumption on embedded devices (e.g. GAP8, a multi-core chip from GreenWaves Technology).<br />
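As a rough illustration of step 4, the multiply-accumulate (MAC) count of a layer can be estimated analytically before moving to cycle-accurate measurements on the embedded target. The sketch below is a hypothetical first-order estimate: the layer sizes and the assumed throughput of 2 MACs/cycle are made-up values, not GAP8 specifications.<br />

```python
def conv1d_macs(ch_in, ch_out, kernel, out_len):
    """Multiply-accumulate count of one 1-D convolutional layer."""
    return ch_in * ch_out * kernel * out_len

# Hypothetical layer: 32 -> 64 channels, kernel size 3, 1000 output samples
macs = conv1d_macs(32, 64, 3, 1000)
cycles = macs // 2   # assuming an idealized 2 MACs/cycle (made-up figure)
print(macs, cycles)  # 6144000 3072000
```

Such back-of-the-envelope numbers help rank candidate models early; the actual cycle and energy figures must then be measured on the device.<br />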
<br />
<br />
===Status: Available ===<br />
Looking for one student for a Master's thesis. <br />
: Supervision: [[:User:Herschmi | Michael Hersche]], [[:User:xiaywang|Xiaying Wang]], [mailto:alessio.burrello@unibo.it Alessio Burrello]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python & C Programming<br />
<br />
<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
<br />
===Literature===<br />
* H. I. Fawaz et al., InceptionTime: Finding AlexNet for Time Series Classification [https://arxiv.org/abs/1909.04939]<br />
* Z. Cui et al., Multi-Scale Convolutional Neural Networks for Time Series Classification [https://arxiv.org/abs/1603.06995]<br />
* C. Lea et al., Temporal Convolutional Networks: A Unified Approach to Action Segmentation, [https://link.springer.com/chapter/10.1007/978-3-319-49409-8_7]<br />
* iEEG-SWEZ database [http://ieeg-swez.ethz.ch/]<br />
* A. Burrello et al., Laelaps: An Energy-Efficient Seizure Detection Algorithm from Long-term Human iEEG Recordings without False Alarms [https://ieeexplore.ieee.org/abstract/document/8715186]<br />
<br />
<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---><br />
<br />
[[Category:Herschmi]]</div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Biomedical_Circuits,_Systems,_and_Applications&diff=6090Biomedical Circuits, Systems, and Applications2020-11-16T16:38:24Z<p>Xiaywang: </p>
<hr />
<div>[[File:iis-project-image.png|thumb|right|450px]]<br />
<br />
Research on biomedical sensing systems and signal processing algorithms has been very prolific in recent years with a variety of solutions in a wide range of application scenarios, for example long-term monitoring of human vital signs for disease detection. Low-power consumption and energy efficiency are the key features of such systems starting from the sensor node for data acquisition, towards embedded systems for data handling, and accurate algorithms for data processing. <br />
<br />
Many research topics are actively ongoing around the human body, from chip design, to system development, to algorithmic investigations in various application scenarios. In the following sections you find links to past and current projects that you might find interesting.<br />
<br />
Don't hesitate to drop us an email!<br />
<br />
=[http://iis-projects.ee.ethz.ch/index.php?title=Human_Intranet Human Intranet]=<br />
<br />
Human Intranet is an open, scalable platform that seamlessly integrates an ever-increasing number of sensor, actuation, computation, storage, communication and energy nodes located on, in, or around the human body acting in symbiosis with the functions provided by the body itself. Human Intranet presents a system vision in which, for example, disease would be treated by chronically measuring biosignals deep in the body, or by providing targeted, therapeutic interventions that respond on demand and in situ.<br />
<br />
In the following, a summary of the main projects is given. More details can be found [http://iis-projects.ee.ethz.ch/index.php?title=Human_Intranet here].<br />
<br />
===Brain-Machine Interfaces===<br />
<br />
Noninvasive brain–machine interfaces (BMIs) and neuroprostheses aim to provide a communication and control channel based on the recognition of the subject’s intentions from spatiotemporal neural activity typically recorded by EEG electrodes.<br />
<br />
In this project, our goal is to develop efficient and fast learning algorithms that replace traditional signal processing and classification methods by directly operating with raw data from electrodes. Furthermore, we aim to efficiently deploy those algorithms on tightly resource-limited devices (e.g., Microcontroller units) for near sensor classification using artificial intelligence.<br />
<br />
===Epilepsy Seizure Detection Device===<br />
<br />
Seizure detection systems hold promise for improving the quality of life for patients with epilepsy that afflicts nearly 1% of the world's population. In this project, our goal is to develop efficient techniques for EEG as well as non-EEG signals to detect an upcoming seizure in an ultra-low-power device.<br />
<br />
<br />
=[http://iis-projects.ee.ethz.ch/index.php?title=Digital_Medical_Ultrasound_Imaging Digital Medical Ultrasound Imaging]=<br />
<br />
In the LightProbe project, we are exploring the next generation of medical ultrasound imaging systems: The LightProbe is a programmable ultrasound transducer head, which incorporates the entire analog frontend and directly outputs the captured digital samples. This allows the LightProbe to be directly connected to any commodity hardware (phone, tablet, workstation) for post-processing over a standard digital link as simple as a standard peripheral, like a camera. <br />
<br />
More information can be found [http://iis-projects.ee.ethz.ch/index.php?title=Digital_Medical_Ultrasound_Imaging here].<br />
<br />
=[http://iis-projects.ee.ethz.ch/index.php?title=Biomedical_System_on_Chips Biomedical System on Chips]=<br />
<br />
<div>Every human and animal body generates a large and steady amount of data as a consequence of several underlying life-long processes, e.g., respiration, vascular system dynamics, and muscle contraction. By acquiring and processing these vital signals, usually by electrical or optical means, a substantial amount of information can be extracted, enabling sense-making that can be used to take informed decisions. Successful application examples range from commercial fitness-tracker gadgets to medical-grade devices that enable tele-health remote medicine, as well as cutting-edge scientific research on living biological models. <br />
<br />
It is a joint effort between the Analog and Mixed Signal and Digital Design Groups. More info [http://iis-projects.ee.ethz.ch/index.php?title=Biomedical_System_on_Chips here].<br />
<br />
<br />
[[Category:Digital]]<br />
[[Category:Human Intranet]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]</div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Biomedical_Circuits,_Systems,_and_Applications&diff=6089Biomedical Circuits, Systems, and Applications2020-11-16T16:37:09Z<p>Xiaywang: </p>
<hr />
<div>[[File:iis-project-image.png|thumb|right|450px]]<br />
<br />
Research on biomedical sensing systems and signal processing algorithms has been very prolific in recent years with a variety of solutions in a wide range of application scenarios, for example long-term monitoring of human vital signs for disease detection. Low-power consumption and energy efficiency are the key features of such systems starting from the sensor node for data acquisition, towards embedded systems for data handling, and accurate algorithms for data processing. <br />
<br />
Many research topics are actively ongoing around the human body, from chip design, to system development, to algorithmic investigations in various application scenarios. In the following sections you find links to past and current projects that you might find interesting.<br />
<br />
Don't hesitate to drop us an email!<br />
<br />
=Human Intranet=<br />
<br />
Human Intranet is an open, scalable platform that seamlessly integrates an ever-increasing number of sensor, actuation, computation, storage, communication and energy nodes located on, in, or around the human body acting in symbiosis with the functions provided by the body itself. Human Intranet presents a system vision in which, for example, disease would be treated by chronically measuring biosignals deep in the body, or by providing targeted, therapeutic interventions that respond on demand and in situ.<br />
<br />
In the following, a summary of the main projects is given. More details can be found [http://iis-projects.ee.ethz.ch/index.php?title=Human_Intranet here].<br />
<br />
===Brain-Machine Interfaces===<br />
<br />
Noninvasive brain–machine interfaces (BMIs) and neuroprostheses aim to provide a communication and control channel based on the recognition of the subject’s intentions from spatiotemporal neural activity typically recorded by EEG electrodes.<br />
<br />
In this project, our goal is to develop efficient and fast learning algorithms that replace traditional signal processing and classification methods by directly operating with raw data from electrodes. Furthermore, we aim to efficiently deploy those algorithms on tightly resource-limited devices (e.g., Microcontroller units) for near sensor classification using artificial intelligence.<br />
<br />
===Epilepsy Seizure Detection Device===<br />
<br />
Seizure detection systems hold promise for improving the quality of life for patients with epilepsy that afflicts nearly 1% of the world's population. In this project, our goal is to develop efficient techniques for EEG as well as non-EEG signals to detect an upcoming seizure in an ultra-low-power device.<br />
<br />
<br />
=Digital Medical Ultrasound Imaging=<br />
<br />
In the LightProbe project, we are exploring the next generation of medical ultrasound imaging systems: The LightProbe is a programmable ultrasound transducer head, which incorporates the entire analog frontend and directly outputs the captured digital samples. This allows the LightProbe to be directly connected to any commodity hardware (phone, tablet, workstation) for post-processing over a standard digital link as simple as a standard peripheral, like a camera. <br />
<br />
More information can be found [http://iis-projects.ee.ethz.ch/index.php?title=Digital_Medical_Ultrasound_Imaging here].<br />
<br />
=Biomedical System on Chips=<br />
<br />
Every human and animal body generates a large and steady amount of data as a consequence of several underlying life-long processes, e.g., respiration, vascular system dynamics, and muscle contraction. By acquiring and processing these vital signals, usually by electrical or optical means, a substantial amount of information can be extracted, enabling sense-making that can be used to take informed decisions. Successful application examples range from commercial fitness-tracker gadgets to medical-grade devices that enable tele-health remote medicine, as well as cutting-edge scientific research on living biological models. <br />
<br />
It is a joint effort between the Analog and Mixed Signal and Digital Design Groups. More info [http://iis-projects.ee.ethz.ch/index.php?title=Biomedical_System_on_Chips here].<br />
<br />
<br />
[[Category:Digital]]<br />
[[Category:Human Intranet]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]</div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Biomedical_Circuits,_Systems,_and_Applications&diff=6088Biomedical Circuits, Systems, and Applications2020-11-16T16:33:54Z<p>Xiaywang: </p>
<hr />
<div>[[File:iis-project-image.png|thumb|right|450px]]<br />
<br />
Research on biomedical sensing systems and signal processing algorithms has been very prolific in recent years with a variety of solutions in a wide range of application scenarios, for example long-term monitoring of human vital signs for disease detection. Low-power consumption and energy efficiency are the key features of such systems starting from the sensor node for data acquisition, towards embedded systems for data handling, and accurate algorithms for data processing. <br />
<br />
Many research topics are actively ongoing around the human body, from chip design, to system development, to algorithmic investigations in various application scenarios. In the following sections you find links to past and current projects that you might find interesting.<br />
<br />
Don't hesitate to drop us an email!<br />
<br />
=Human Intranet=<br />
<br />
Human Intranet is an open, scalable platform that seamlessly integrates an ever-increasing number of sensor, actuation, computation, storage, communication and energy nodes located on, in, or around the human body acting in symbiosis with the functions provided by the body itself. Human Intranet presents a system vision in which, for example, disease would be treated by chronically measuring biosignals deep in the body, or by providing targeted, therapeutic interventions that respond on demand and in situ.<br />
<br />
In the following, a summary of the main projects is given. More details can be found [http://iis-projects.ee.ethz.ch/index.php?title=Human_Intranet here]<br />
<br />
===Brain-Machine Interfaces===<br />
<br />
Noninvasive brain–machine interfaces (BMIs) and neuroprostheses aim to provide a communication and control channel based on the recognition of the subject’s intentions from spatiotemporal neural activity typically recorded by EEG electrodes.<br />
<br />
In this project, our goal is to develop efficient and fast learning algorithms that replace traditional signal processing and classification methods by directly operating with raw data from electrodes. Furthermore, we aim to efficiently deploy those algorithms on tightly resource-limited devices (e.g., Microcontroller units) for near sensor classification using artificial intelligence.<br />
<br />
===Epilepsy Seizure Detection Device===<br />
<br />
Seizure detection systems hold promise for improving the quality of life for patients with epilepsy that afflicts nearly 1% of the world's population. In this project, our goal is to develop efficient techniques for EEG as well as non-EEG signals to detect an upcoming seizure in an ultra-low-power device.<br />
<br />
<br />
<br />
<br />
[[Category:Digital]]<br />
[[Category:Human Intranet]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]</div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Biomedical_Circuits,_Systems,_and_Applications&diff=5883Biomedical Circuits, Systems, and Applications2020-11-11T11:29:19Z<p>Xiaywang: </p>
<hr />
<div>[[File:iis-project-image.png|thumb|right|450px]]<br />
<br />
Research on biomedical sensing systems and signal processing algorithms has been very prolific in recent years, with a variety of solutions in a wide range of application scenarios, for example long-term monitoring of human vital signs for disease detection. Low power consumption and energy efficiency are the key features of such systems, from the sensor node for data acquisition, through embedded systems for data handling, to accurate algorithms for data processing. <br />
<br />
<!-- <br />
<br />
There are many examples of implemented and deployed wearable devices that attempt to exploit intelligent sensing to monitor human vital signs and activities. The main challenges of wearable design are to prolong the operating lifetime and to enhance usability, maintenance, and mobility, while keeping a small and unobtrusive form factor.<br />
Unlike other sensing systems, mobile and wearable sensing systems have to provide continuous data monitoring, acquisition, processing, and classification. Supporting such continuous operation using only ultra-small batteries poses unique challenges in energy efficiency. The other major challenge for mobile sensor systems is to be able to understand the world in a similar way as humans do. Perceptive low-power sensor devices should be able to interpret the context around their users and allow context-aware multi-agent interaction. Machine learning approaches, in particular deep learning techniques, show great promise toward achieving this goal.<br />
<br />
Wearable devices for biomedical applications are becoming more and more pervasive. The push towards the wearable Internet of Things (IoT) requires devices that are flexible enough to be adapted for different applications while being limited by available power for the overall system. A new generation of IoT and smart sensing systems should not only provide continuous data monitoring and acquisition, but are also expected to process and make sense of the acquired data in similar ways as human experts do. Supporting continuous data analysis capabilities on ultra-small batteries poses unique challenges in energy efficiency on both hardware and software.<br />
<br />
Data analytics and classification problems are increasingly tackled with machine learning approaches, featuring many stages of feature extractors and classifiers with lots of parameters that are optimized using the unprecedented wealth of data that has recently become available. Advanced ML techniques, such as deep learning, are achieving impressive results, starting to outperform humans on very challenging problems and datasets. However, most of these approaches are still not suitable for low-power battery operated devices (and even less for wearable, ultra-miniaturized devices) because, in their current embodiment, they require massive amounts of computational power.<br />
<br />
The present research project aims to push beyond the current power walls for machine learning and bio-signal processing and move toward milli-watt (or even micro-watt) continuously active, long-term wearable biomedical systems. This requires working on algorithms, architecture, circuits as well as designing methods for these new promising embedded and wearable devices with artificial intelligence capability under extreme constraints: tiny energy buffers (batteries), miniature energy harvesters providing tiny amounts of energy, low-power sensors, and interfaces. Specifically the project aims at studying and developing ML-based bio-signal data analysis algorithms suitable for ultra-low power wearable devices and to demonstrate hardware-software integration in a prototype smart patch with<br />
multi-lead ExG advanced embedded analytics and classification capabilities within a power envelope of<br />
a few mW.<br />
<br />
<br />
[[Category:Digital]]<br />
[[Category:Human Intranet]]<br />
[[Category:ASIC]]<br />
[[Category:FPGA]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
<br />
<br />
<br />
---></div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=File:Iis-project-image.png&diff=5878File:Iis-project-image.png2020-11-11T09:55:35Z<p>Xiaywang: Xiaywang uploaded a new version of File:Iis-project-image.png</p>
<hr />
<div></div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Biomedical_Circuits,_Systems,_and_Applications&diff=5877Biomedical Circuits, Systems, and Applications2020-11-11T09:54:29Z<p>Xiaywang: Replaced content with "450px"</p>
<hr />
<div>[[File:iis-project-image.png|thumb|right|450px]]</div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=File:Iis-project-image.png&diff=5876File:Iis-project-image.png2020-11-11T09:53:30Z<p>Xiaywang: </p>
<hr />
<div></div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Biomedical_Circuits,_Systems,_and_Applications&diff=5875Biomedical Circuits, Systems, and Applications2020-11-11T09:53:01Z<p>Xiaywang: Created page with "=Biomedical Engineering= 450px The world around us is getting a lot smarter quickly: virtually every single component of our daily l..."</p>
<hr />
<div>=Biomedical Engineering=<br />
[[File:iis-project-image.png|thumb|right|450px]]<br />
The world around us is quickly getting a lot smarter: virtually every component of our daily living environment is being equipped with sensors, actuators, processing, and a connection into a network that will soon count billions of nodes and trillions of sensors. These devices interact with humans only through traditional input and output channels. Hence, they communicate with our brain only indirectly, through our five sense modalities, forming two separate computing systems: biological versus physical. This interaction could be made far more effective if a direct, high-bandwidth link existed between the two systems, allowing them to truly collaborate with each other and offering opportunities for enhanced functionality that would otherwise be hard to accomplish. The emergence of miniaturized sense, compute, and actuate devices, as well as interfaces that are form-fitted to the human body, opens the door to a symbiotic convergence between biological function and physical computing.<br />
<br />
Human Intranet is an open, scalable platform that seamlessly integrates an ever-increasing number of sensor, actuation, computation, storage, communication, and energy nodes located on, in, or around the human body, acting in symbiosis with the functions provided by the body itself. Human Intranet presents a system vision in which, for example, disease would be treated by chronically measuring biosignals deep in the body, or by providing targeted therapeutic interventions that respond on demand and in situ. To gain a holistic view of a person’s health, these sensors and actuators must communicate and collaborate with each other. Most such systems prototyped or envisioned today serve to address deficiencies in the human sensory or motor control systems due to birth defects, illnesses, or accidents (e.g., invasive brain-machine interfaces, cochlear implants, artificial retinas). While all these systems target defects, one can easily imagine that this could lead to many types of enhancement and/or enable direct interaction with the environment: to make us humans smarter!<br />
<br />
Here, in our projects, we mainly focus on '''sensor, computation, communication, and emerging storage''' aspects to develop very efficient closed-loop sense-interpret-actuate systems, enabling distributed autonomous behavior. <br />
<br />
<!--For example, to design the ''brain'' of our physical computing (i.e., the compute/interpret component), we rely on computing with ultra-wide words (e.g., 10,000 bits) that eases interfacing with various sensor modalities and actuators. This novel computing paradigm is called hyperdimensional (HD) computing that is inspired from the very size of the biological brain’s circuits: assuming 1 bit per synapse, the brain is made up of more than 24 billion of such ultra-wide words. You can watch some of our demos:<br />
* [https://bwrc.eecs.berkeley.edu/sites/default/files/files/u2630/flexemg_v2_lq.mp4#t=2 Video1]<br />
* [https://www.youtube.com/watch?time_continue=9&v=vTQGMQ6QaJE Video2]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISSCC18-Demo.pdf PDF] <br />
<br />
You can also find a collection of complemented projects with source codes/datasets here:<br />
* [https://github.com/HyperdimensionalComputing/collection Github link]<br />
--><br />
==Prerequisites and Focus==<br />
If you are a B.S. or M.S. student at ETH Zurich, there are typically no prerequisites. Come and talk to us, and we will adapt the project to your skills. The scope and focus of the projects are wide. You can choose to work on:<br />
<br />
<!-- * '''Efficient hardware architectures in emerging technologies''' (e.g., [https://www.zurich.ibm.com/sto/memory/ the IBM computational memory])--><br />
* '''Exploring new Human Intranet/IoT applications''' <br />
* '''Algorithm design and optimizations''' (Python)<br />
* '''System-level design and testing''' (Altium, C-programming)<br />
* '''Sensory interfaces''' (analog and digital)<br />
<br />
<br />
<!-- <br />
* '''Theory''' of learning systems including HD computing, Hidden Markov Model (HMM), and clustering algorithms<br />
Overall, our projects cover '''algorithmic, hardware/software, and system level''' design and developments. <br />
However, if you have background in signal processing, VLSI or linear algebra is a super plus! --><br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/7030200/ The Human Intranet--Where Swarms and Humans Meet]<br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors]<br />
*[https://ieeexplore.ieee.org/document/8422472/ Hyperdimensional Modulation for Robust Low-Power Communications]<br />
*[https://iis-people.ee.ethz.ch/~arahimi/papers/TCAS17.pdf High-dimensional Computing as a Nanoscalable Paradigm]<br />
*[http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199794546.001.0001/acprof-9780199794546 How to Build a Brain]<br />
*[https://mitpress.mit.edu/books/sparse-distributed-memory Pentti Kanerva. 1988. Sparse Distributed Memory. MIT Press, Cambridge, MA, USA]</div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Knowledge_Distillation_for_Embedded_Machine_Learning&diff=5441Knowledge Distillation for Embedded Machine Learning2020-09-01T14:27:24Z<p>Xiaywang: </p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]] [[Category:Available]] [[Category:2020]][[Category:Hot]][[Category:EmbeddedAI]][[Category:Xiaywang]]<br />
==Description==<br />
The vast majority of high-performance neural networks used on datasets like ImageNet have millions or billions of parameters and are trained and executed on several GPUs at once. Such networks can never be deployed to devices like microcontrollers. <br />
However, using novel training techniques, we can leverage these well-trained networks to transfer their knowledge to smaller networks that can be deployed to embedded devices.<br />
<br />
Knowledge distillation is a training approach for deep neural networks that uses large, well-trained networks or ensembles of specialized models to train smaller, more efficient networks. This technique shows a lot of potential for deploying models to embedded devices when used in conjunction with well-established quantization techniques. The goal of this thesis is to develop a knowledge distillation algorithm and evaluate it for training networks for embedded devices, comparing it to traditional training methods.<br />
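As a framework-agnostic sketch of the distillation loss from Hinton et al. (see Literature below), the following NumPy code combines the temperature-softened teacher/student term with the usual hard-label cross-entropy; the values of `T` and `alpha` are illustrative choices of ours, not values from the cited papers:<br />

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax with numerical stabilization."""
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Hinton-style distillation: KL between softened distributions plus hard CE."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # KL(teacher || student), mean over batch; T**2 restores the gradient scale
    kd = np.mean(np.sum(p_teacher * (np.log(p_teacher + 1e-12)
                                     - np.log(p_student + 1e-12)), axis=1)) * T * T
    hard = softmax(student_logits)  # T = 1 for the hard-label term
    ce = -np.mean(np.log(hard[np.arange(len(labels)), labels] + 1e-12))
    return alpha * kd + (1.0 - alpha) * ce
```

A PyTorch version for the planned framework would compute the same two terms with `F.kl_div` on temperature-softened log-probabilities and `F.cross_entropy` on the hard labels.<br />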
<br />
<br />
The main goals (not all have to be met in a single semester project) of the project are:<br />
* Develop framework for distillation-based training in PyTorch<br />
<br />
* Combine knowledge distillation with quantization to optimize model size<br />
<br />
* Evaluate knowledge distillation as a method for deployment of networks to embedded devices<br />
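For the quantization goal above, a minimal post-training sketch (symmetric, per-tensor int8; the function names are hypothetical, not taken from any particular framework) could look like:<br />

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor post-training quantization of a weight tensor to int8."""
    max_abs = float(np.abs(w).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor for accuracy evaluation."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).standard_normal((16, 16)).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

Evaluating the distilled student with such dequantized weights gives a quick estimate of the accuracy cost of 8-bit deployment before committing to a full quantization-aware training flow.<br />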
<br />
<br />
===Status: Available ===<br />
Looking for a student for a semester project. <br />
<br />
: Supervision: [[:User:xiaywang|Xiaying Wang]], [[:User:Scheremo|Moritz Scherer]]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Implementation<br />
<br />
===Literature===<br />
* [https://arxiv.org/abs/1503.02531] G. Hinton, et. al., Distilling the Knowledge in a Neural Network<br />
* [https://arxiv.org/abs/1805.04770] T. Furlanello, et. al., Born Again Neural Networks<br />
<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---><br />
<br />
[[Category:Xiaywang]]</div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Human_Intranet&diff=5440Human Intranet2020-09-01T14:26:55Z<p>Xiaywang: </p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Human Intranet]]<br />
[[Category:ASIC]]<br />
[[Category:FPGA]]<br />
[[Category:In progress]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
<br />
__NOTOC__<br />
=What is Human Intranet?=<br />
[[File:HI.png|thumb|right|450px]]<br />
The world around us is quickly getting a lot smarter: virtually every component of our daily living environment is being equipped with sensors, actuators, processing, and a connection into a network that will soon count billions of nodes and trillions of sensors. These devices interact with humans only through traditional input and output channels. Hence, they communicate with our brain only indirectly, through our five sense modalities, forming two separate computing systems: biological versus physical. This interaction could be made far more effective if a direct, high-bandwidth link existed between the two systems, allowing them to truly collaborate with each other and offering opportunities for enhanced functionality that would otherwise be hard to accomplish. The emergence of miniaturized sense, compute, and actuate devices, as well as interfaces that are form-fitted to the human body, opens the door to a symbiotic convergence between biological function and physical computing.<br />
<br />
Human Intranet is an open, scalable platform that seamlessly integrates an ever-increasing number of sensor, actuation, computation, storage, communication, and energy nodes located on, in, or around the human body, acting in symbiosis with the functions provided by the body itself. Human Intranet presents a system vision in which, for example, disease would be treated by chronically measuring biosignals deep in the body, or by providing targeted therapeutic interventions that respond on demand and in situ. To gain a holistic view of a person’s health, these sensors and actuators must communicate and collaborate with each other. Most such systems prototyped or envisioned today serve to address deficiencies in the human sensory or motor control systems due to birth defects, illnesses, or accidents (e.g., invasive brain-machine interfaces, cochlear implants, artificial retinas). While all these systems target defects, one can easily imagine that this could lead to many types of enhancement and/or enable direct interaction with the environment: to make us humans smarter!<br />
<br />
Here, in our projects, we mainly focus on '''sensor, computation, communication, and emerging storage''' aspects to develop very efficient closed-loop sense-interpret-actuate systems, enabling distributed autonomous behavior. For example, to design the ''brain'' of our physical computing (i.e., the compute/interpret component), we rely on computing with ultra-wide words (e.g., 10,000 bits), which eases interfacing with various sensor modalities and actuators. This novel computing paradigm, hyperdimensional (HD) computing, is inspired by the very size of the biological brain’s circuits: assuming 1 bit per synapse, the brain is made up of more than 24 billion of such ultra-wide words. You can watch some of our demos:<br />
* [https://bwrc.eecs.berkeley.edu/sites/default/files/files/u2630/flexemg_v2_lq.mp4#t=2 Video1]<br />
* [https://www.youtube.com/watch?time_continue=9&v=vTQGMQ6QaJE Video2]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISSCC18-Demo.pdf PDF] <br />
<br />
You can also find a collection of complemented projects with source codes/datasets here:<br />
* [https://github.com/HyperdimensionalComputing/collection Github link]<br />
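As a toy illustration of the core HD operations described above (the dimensionality of 10,000 matches the text; the seed and thresholds are arbitrary choices of ours):<br />

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality

def random_hv():
    """Random dense binary hypervector; any two are quasi-orthogonal in high D."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    """Binding (XOR): the result is dissimilar to both inputs and invertible."""
    return a ^ b

def bundle(*hvs):
    """Bundling (bitwise majority; use an odd count): similar to every input."""
    return (np.sum(hvs, axis=0) * 2 > len(hvs)).astype(np.uint8)

def similarity(a, b):
    """Normalized Hamming similarity: about 0.5 for unrelated vectors, 1.0 if equal."""
    return 1.0 - float(np.mean(a != b))
```

Bundling bound key-value pairs yields a distributed record from which each value can be approximately recovered by binding the record with its key again.<br />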
<br />
==Prerequisites and Focus==<br />
If you are a B.S. or M.S. student at ETH Zurich, there are typically no prerequisites. Come and talk to us, and we will adapt the project to your skills. The scope and focus of the projects are wide. You can choose to work on:<br />
<br />
* '''Efficient hardware architectures in emerging technologies''' (e.g., [https://www.zurich.ibm.com/sto/memory/ the IBM computational memory])<br />
* '''Exploring new Human Intranet/IoT applications''' (High-level Embedded Programming) <br />
* '''Algorithm design and optimizations''' (Python)<br />
* '''System-level design and testing''' <br />
* '''Sensory interfaces''' (analog and digital)<br />
* '''FPGA prototyping, ASIC, and accelerators''' (SystemVerilog/ VHDL)<br />
<br />
<br />
<br />
<!-- <br />
* '''Theory''' of learning systems including HD computing, Hidden Markov Model (HMM), and clustering algorithms<br />
Overall, our projects cover '''algorithmic, hardware/software, and system level''' design and developments. <br />
However, if you have background in signal processing, VLSI or linear algebra is a super plus! --><br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/7030200/ The Human Intranet--Where Swarms and Humans Meet]<br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors]<br />
*[https://ieeexplore.ieee.org/document/8422472/ Hyperdimensional Modulation for Robust Low-Power Communications]<br />
*[https://iis-people.ee.ethz.ch/~arahimi/papers/TCAS17.pdf High-dimensional Computing as a Nanoscalable Paradigm]<br />
*[http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199794546.001.0001/acprof-9780199794546 How to Build a Brain]<br />
*[https://mitpress.mit.edu/books/sparse-distributed-memory Pentti Kanerva. 1988. Sparse Distributed Memory. MIT Press, Cambridge, MA, USA]<br />
<br />
=Available Projects=<br />
Here, we provide a short description of the related projects for you to see the scope of our work. The directions and details of the projects can be adapted based on your interests and skills. Please do not hesitate to come and talk to us for more details.<br />
<br />
=Brain-Machine Interfaces=<br />
<!-- [[File:BCI.png|thumb|center]] <br />
[[File:BCI-dryEEG.jpg|thumb|right]] --><br />
[[File:Emotiv-epoc-14-channel-mobile-eeg.jpg|thumb|right|200px]]<br />
<br />
<br />
===Short Description===<br />
Noninvasive brain–machine interfaces (BMIs) and neuroprostheses aim to provide a communication and control channel based on the recognition of the subject’s intentions from spatiotemporal neural activity typically recorded by EEG electrodes. What makes it particularly challenging, however, is its susceptibility to errors over time in the recognition of human intentions.<br />
<br />
In this project, our goal is to develop efficient and fast learning algorithms that replace traditional signal processing and classification methods by operating directly on raw data from the electrodes. Furthermore, we aim to deploy those algorithms efficiently on tightly resource-limited devices (e.g., microcontroller units) for near-sensor classification using artificial intelligence.<br />
<br />
===Links===<br />
* [https://iis-people.ee.ethz.ch/~herschmi/EdgeDL20.pdf Q-EEGNet: an Energy-Efficient 8-bit Quantized Parallel EEGNet Implementation for Edge Motor-Imagery Brain–Machine Interfaces]<br />
* [https://iis-people.ee.ethz.ch/~herschmi/MEMEA20.pdf An Accurate EEGNet-based Motor-Imagery Brain–Computer Interface for Low-Power Edge Computing]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/EUSIPCO18.pdf Fast and Accurate Multiclass Inference for Motor Imagery BCIs Using Large Multiscale Temporal and Spectral Features]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/MONET17.pdf Hyperdimensional Computing for Blind and One-Shot Classification of EEG Error-Related Potentials]<br />
* [https://arxiv.org/abs/1812.05705 Exploring Embedding Methods in Binary Hyperdimensional Computing: A Case Study for Motor-Imagery based Brain-Computer Interfaces]<br />
<br />
===Available Projects===<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = BCI<br />
<br />
</DynamicPageList><br />
<br />
=Epilepsy Seizure Detection Device=<br />
[[File:Non-EEG Seizure.jpg|border|text-top|300px]]<br />
[[File:NeuroPace.jpg|border|text-top|400px]]<br />
<!-- Seizure-prediction.png --><br />
===Short Description===<br />
Seizure detection systems hold promise for improving the quality of life for patients with epilepsy, which afflicts nearly 1% of the world's population.<br />
In this project, our goal is to develop efficient techniques for EEG as well as non-EEG signals to detect an upcoming seizure on an ultra-low-power device. This covers a wide range of analog and digital techniques, and the one-shot and online learning abilities of hyperdimensional computing can come to the rescue.<br />
<br />
===Links===<br />
* [http://ieeg-swez.ethz.ch/ The SWEC-ETHZ iEEG Database and Algorithms]<br />
* [https://www.wysscenter.ch/project/epilepsy-monitoring-seizure-forecasts/ Epilepsy monitoring and seizure forecasts at Wyss Center]<br />
* [https://www.youtube.com/watch?time_continue=87&v=ouyPXkEud40 Controlling tinnitus with neurofeedback]<br />
<br />
===Available Projects===<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = Epilepsy<br />
<br />
</DynamicPageList><br />
<br />
<br />
<!--<br />
=Extremely Resilient Hyperdimensional Processor=<br />
[[File:BrainChip.jpg|thumb|left]]<br />
<br />
===Short Description===<br />
The most important aspect of hyperdimensional (HD) computing, for hardware realization, is its robustness against noise and variations in the computing platforms. Principles of HD computing allows to implement resilient controllers and state machines for extreme noisy conditions. Its tolerance in operating with faulty components and low signal-to-noise ratio (SNR) conditions is achieved by brain-inspired properties of hypervectors: (pseudo)randomness, high-dimensionality, and fully distributed holographic representations.<br />
<br />
In this project, your goal would be to design and develop an end-to-end robust HD processor with extremely resilient controller based on principles of HD computing, and measure its resiliency against noisy environment and faulty components.<br />
<br />
===Links===<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISLPED16.pdf A Robust and Energy-Efficient Classifier Using Brain-Inspired Hyperdimensional Computing]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/DAC18.pdf PULP-HD: Accelerating Brain-Inspired High-Dimensional Computing on a Parallel Ultra-Low Power Platform]<br />
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8216554 Associative Synthesis of Finite State Automata Model of a Controlled Object with Hyperdimensional Computing]<br />
<br />
=Flexible High-Density Sensors for Hand Gesture Recognition=<br />
[[File:Hyperdimensional_EMG.png|thumb|center]]<br />
[[File:FlexEMG.png|thumb|right|500px]]<br />
<br />
===Short Description===<br />
The surface electromyography (EMG) signals are the superposition of the electrical activity of underneath muscles when contractions occur.<br />
Wearable surface EMG devices have a wide range of applications in controlling the upper limb prostheses and hand gesture recognition systems intended for consumer human-machine interaction. High-density EMG electrode array covering the whole arm can ease targeting the most desired muscle locations and cope the issues with sensors misplacement.<br />
For robust gesture recognition from such EMG sensors, we rely on brain-inspired HD computing.<br />
<br />
In this project, your goal would be to develop new sensors and RTL implementation of HD computing for one-shot gesture learning in an ultra low-power device.<br />
<br />
===Links===<br />
* [https://bwrc.eecs.berkeley.edu/sites/default/files/files/u2630/flexemg_v2_lq.mp4#t=2 Flexible EMG Demo]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISCAS2018.pdf Gesture Recognition System with Flexible High-Density Sensors] <br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/papers/ICRC16.pdf Hyperdimensional Biosignal Processing: A Case Study for EMG-based Hand Gesture Recognition (paper)]<br />
* [https://arxiv.org/abs/1901.00234 Adaptive EMG-based hand gesture recognition using hyperdimensional computing (paper)]<br />
* [https://github.com/abbas-rahimi/HDC-EMG Related Matlab code]<br />
<br />
==Robot Learning by Demonstration==<br />
[[File:Robot-VSA.png|thumb|left|Image source: Neubert et al, IROS 2016]]<br />
===Short Description===<br />
Robot learning from demonstration is a paradigm for enabling robots to autonomously perform new tasks. <br />
HD computing is a nice fit in this area since it naturally enables modeling relation between sensory inputs and actuator outputs of a robot by learning from few demonstrations. <br />
In this project, your goal would be to develop algorithms and implementations based on HD computing to enhance a robot to learn from online demonstrations. <br />
Further, such HD computing-based paradigm can be coupled to a brain-computer interface device enabling to control a robot by EEG signals from the brain. It has a wonderful application in neuroprosthetics to learn from a patient (see [https://www.youtube.com/watch?time_continue=26&v=jAtcVlTqxeA this] demonstration at EPFL).<br />
<br />
===Links===<br />
* [https://actu.epfl.ch/news/when-the-neuroprosthetics-learn-from-the-patient-5/ When the neuroprosthetics learn from the patient]<br />
* [https://www.tu-chemnitz.de/etit/proaut/publications/IROS2016_neubert.pdf Learning Vector Symbolic Architectures for Reactive Robot Behaviours] <br />
* [https://www.aaai.org/ocs/index.php/WS/AAAIW13/paper/download/7075/6578 Learning Behavior Hierarchies via High-Dimensional Sensor Projection (paper)]<br />
---><br />
<br />
<br />
=Completed Projects=<br />
These are projects that were recently completed: <br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = Human Intranet<br />
<br />
</DynamicPageList><br />
<br />
=Where to find us=<br />
* [https://iis-people.ee.ethz.ch/~herschmi/ Michael Hersche]<br />
** '''e-mail''': [mailto:herschmi@iis.ee.ethz.ch herschmi@iis.ee.ethz.ch]<br />
** ETZ J76.2<br />
* [https://iis-people.ee.ethz.ch/~xiaywang/ Xiaying Wang]<br />
** '''e-mail''': [mailto:xiaywang@iis.ee.ethz.ch xiaywang@iis.ee.ethz.ch]<br />
** ETZ J68.2<br />
* [https://iis-people.ee.ethz.ch/~arahimi/ Dr. Abbas Rahimi]<br />
** '''e-mail''': [mailto:abbas@iis.ee.ethz.ch abbas@iis.ee.ethz.ch]<br />
** ETZ J85<br />
* [http://www.iis.ee.ethz.ch/people/person-detail.html?persid=194234 Prof. Luca Benini]<br />
** '''e-mail''': [mailto:lbenini@iis.ee.ethz.ch lbenini@iis.ee.ethz.ch]<br />
** ETZ J84</div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Knowledge_Distillation_for_Embedded_Machine_Learning&diff=5429Knowledge Distillation for Embedded Machine Learning2020-08-24T17:07:58Z<p>Xiaywang: </p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]] [[Category:Available]] [[Category:2020]][[Category:Hot]][[Category:EmbeddedAI]][[Category:Xiaywang]][[Category:Herschmi]][[Category:BCI]]<br />
==Description==<br />
The vast majority of high-performance neural networks used on datasets like ImageNet have millions or billions of parameters and are trained and executed on several GPUs at once. Such networks can never be deployed on devices like microcontrollers. <br />
However, using novel training techniques, we can leverage these well-trained networks to transfer their knowledge to smaller networks that can be deployed to embedded devices.<br />
<br />
Knowledge Distillation is a novel training approach for deep neural networks, which uses well-trained large networks or ensembles of specialized models to train smaller, more efficient networks. This technique shows a lot of potential for deploying models to embedded devices when used in conjunction with well-established quantization techniques. The goal of this thesis is to develop a knowledge distillation algorithm and evaluate it for the training of networks for embedded devices, comparing it to traditional training methods.<br />
<br />
<br />
The main goals (not all have to be met in a single semester project) of the project are:<br />
* Develop framework for distillation-based training in PyTorch<br />
<br />
* Combine knowledge distillation with quantization to optimize model size<br />
<br />
* Evaluate knowledge distillation as a method for deployment of networks to embedded devices<br />
<br />
<br />
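To make the training objective concrete, the distillation loss can be sketched in a few lines of PyTorch (an illustrative sketch following Hinton et al.; the tensor shapes and hyperparameter values are placeholders, not a fixed specification of this project):<br />

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    """Hinton-style distillation: soften both logit sets with a
    temperature, match them with KL divergence, and mix in the
    ordinary cross-entropy on the hard labels."""
    soft_student = F.log_softmax(student_logits / temperature, dim=1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    # The temperature**2 rescaling keeps the soft-target gradients
    # comparable in magnitude to the hard-label gradients.
    kd = F.kl_div(soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```

Here <code>alpha</code> trades off the soft-target term against the ordinary cross-entropy; when the student matches the teacher exactly, the KD term vanishes and only the scaled cross-entropy remains.<br />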
===Status: Available ===<br />
Looking for a student for a Semester project. <br />
<br />
: Supervision: [[:User:xiaywang|Xiaying Wang]], [[:User:Scheremo|Moritz Scherer]]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Implementation<br />
<br />
===Literature===<br />
* [https://arxiv.org/abs/1503.02531 G. Hinton et al., Distilling the Knowledge in a Neural Network]<br />
* [https://arxiv.org/abs/1805.04770 T. Furlanello et al., Born Again Neural Networks]<br />
<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---><br />
<br />
[[Category:Xiaywang]]</div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Knowledge_Distillation_for_Embedded_Machine_Learning&diff=5428Knowledge Distillation for Embedded Machine Learning2020-08-24T17:06:30Z<p>Xiaywang: </p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]] [[Category:Available]] [[Category:2020]][[Category:Hot]][[Category:EmbeddedAI]][[Category:Xiaywang]][[Category:Herschmi]][[Category:BCI]]<br />
==Description==<br />
The vast majority of high-performance neural networks used on datasets like ImageNet have millions or billions of parameters and are trained and executed on several GPUs at once. Such networks can never be deployed on devices like microcontrollers. <br />
However, using novel training techniques, we can leverage these well-trained networks to transfer their knowledge to smaller networks that can be deployed to embedded devices.<br />
<br />
Knowledge Distillation is a novel training approach for deep neural networks, which uses well-trained large networks or ensembles of specialized models to train smaller, more efficient networks. This technique shows a lot of potential for deploying models to embedded devices when used in conjunction with well-established quantization techniques. The goal of this thesis is to develop a knowledge distillation algorithm and evaluate it for the training of networks for embedded devices, comparing it to traditional training methods.<br />
<br />
<br />
The main goals (not all have to be met in a single semester project) of the project are:<br />
* Develop framework for distillation-based training in PyTorch<br />
<br />
* Combine knowledge distillation with quantization to optimize model size<br />
<br />
* Evaluate knowledge distillation as a method for deployment of networks to embedded devices<br />
<br />
<br />
===Status: Available ===<br />
Looking for a student for a Semester project. <br />
<br />
: Supervision: [[:User:xiaywang|Xiaying Wang]], [[:User:scheremo|Moritz Scherer]]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Implementation<br />
<br />
===Literature===<br />
* [https://arxiv.org/abs/1503.02531 G. Hinton et al., Distilling the Knowledge in a Neural Network]<br />
* [https://arxiv.org/abs/1805.04770 T. Furlanello et al., Born Again Neural Networks]<br />
<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---><br />
<br />
[[Category:Xiaywang]]</div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Knowledge_Distillation_for_Embedded_Machine_Learning&diff=5427Knowledge Distillation for Embedded Machine Learning2020-08-24T16:20:00Z<p>Xiaywang: Created page with "Category:DigitalCategory:Semester Thesis Category:Available Category:2020Category:HotCategory:EmbeddedAICategory:XiaywangCategory:HerschmiCat..."</p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]] [[Category:Available]] [[Category:2020]][[Category:Hot]][[Category:EmbeddedAI]][[Category:Xiaywang]][[Category:Herschmi]][[Category:BCI]]<br />
==Description==<br />
The vast majority of high-performance neural networks used on datasets like ImageNet have millions or billions of parameters and are trained and executed on several GPUs at once. Such networks can never be deployed on devices like microcontrollers. <br />
However, using novel training techniques, we can leverage these well-trained networks to transfer their knowledge to smaller networks that can be deployed to embedded devices.<br />
<br />
Knowledge Distillation is a novel training approach for deep neural networks, which uses well-trained large networks or ensembles of specialized models to train smaller, more efficient networks. This technique shows a lot of potential for deploying models to embedded devices when used in conjunction with well-established quantization techniques. The goal of this thesis is to develop a knowledge distillation algorithm and evaluate it for the training of networks for embedded devices, comparing it to traditional training methods.<br />
<br />
<br />
The main goals (not all have to be met in a single semester project) of the project are:<br />
* Develop framework for distillation-based training in PyTorch<br />
<br />
* Combine knowledge distillation with quantization to optimize model size<br />
<br />
* Evaluate knowledge distillation as a method for deployment of networks to embedded devices<br />
<br />
<br />
===Status: Available ===<br />
Looking for a student for a Semester project. <br />
<br />
: Supervision: Moritz Scherer, [[:User:xiaywang|Xiaying Wang]]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Implementation<br />
<br />
===Literature===<br />
* [https://arxiv.org/abs/1503.02531 G. Hinton et al., Distilling the Knowledge in a Neural Network]<br />
* [https://arxiv.org/abs/1805.04770 T. Furlanello et al., Born Again Neural Networks]<br />
<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---><br />
<br />
[[Category:Xiaywang]]</div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Human_Intranet&diff=5206Human Intranet2020-06-22T15:32:06Z<p>Xiaywang: /* More Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Human Intranet]]<br />
[[Category:ASIC]]<br />
[[Category:FPGA]]<br />
[[Category:In progress]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
<br />
__NOTOC__<br />
=What is Human Intranet?=<br />
[[File:HI.png|thumb|right|450px]]<br />
The world around us is getting a lot smarter quickly: virtually every single component of our daily living environment is being equipped with sensors, actuators, processing, and a connection into a network that will soon count billions of nodes and trillions of sensors. These devices interact with humans only through the traditional input and output channels. Hence, they communicate with our brain only indirectly—through our five sense modalities—forming two separate computing systems: biological versus physical. This interaction could be made a lot more effective if a direct, high-bandwidth link existed between the two systems, allowing them to truly collaborate with each other and to offer enhanced functionality that would otherwise be hard to accomplish. The emergence of miniaturized sense, compute, and actuate devices as well as interfaces that are form-fitted to the human body opens the door to a symbiotic convergence between biological function and physical computing.<br />
<br />
Human Intranet is an open, scalable platform that seamlessly integrates an ever-increasing number of sensor, actuation, computation, storage, communication, and energy nodes located on, in, or around the human body, acting in symbiosis with the functions provided by the body itself. Human Intranet presents a system vision in which, for example, disease would be treated by chronically measuring biosignals deep in the body, or by providing targeted, therapeutic interventions that respond on demand and in situ. To gain a holistic view of a person’s health, these sensors and actuators must communicate and collaborate with each other. Most such systems prototyped or envisioned today address deficiencies in the human sensory or motor control systems due to birth defects, illnesses, or accidents (e.g., invasive brain-machine interfaces, cochlear implants, artificial retinas, etc.). While all these systems target defects, one can easily imagine that this could lead to many types of enhancement and/or enable direct interaction with the environment: to make us humans smarter!<br />
<br />
Here, in our projects, we mainly focus on '''sensor, computation, communication, and emerging storage''' aspects to develop very efficient closed-loop sense-interpret-actuate systems, enabling distributed autonomous behavior. For example, to design the ''brain'' of our physical computing (i.e., the compute/interpret component), we rely on computing with ultra-wide words (e.g., 10,000 bits), which eases interfacing with various sensor modalities and actuators. This novel computing paradigm, called hyperdimensional (HD) computing, is inspired by the very size of the biological brain’s circuits: assuming 1 bit per synapse, the brain is made up of more than 24 billion of such ultra-wide words. You can watch some of our demos:<br />
* [https://bwrc.eecs.berkeley.edu/sites/default/files/files/u2630/flexemg_v2_lq.mp4#t=2 Video1]<br />
* [https://www.youtube.com/watch?time_continue=9&v=vTQGMQ6QaJE Video2]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISSCC18-Demo.pdf PDF] <br />
<br />
You can also find a collection of related projects with source code and datasets here:<br />
* [https://github.com/HyperdimensionalComputing/collection Github link]<br />
<br />
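To make these ultra-wide words concrete, the core HD operations (random item memory, binding via XOR, bundling via majority vote, and Hamming-distance similarity) can be sketched in a few lines of NumPy. This is purely illustrative code with placeholder names, not taken from any of the linked codebases:<br />

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality

def random_hv():
    """Random dense binary hypervector; any two independent draws are
    quasi-orthogonal (normalized Hamming distance close to 0.5)."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    """Bind two hypervectors with XOR; the result is dissimilar to both inputs."""
    return a ^ b

def bundle(hvs):
    """Bundle hypervectors by bitwise majority vote; the result stays
    similar to every input."""
    return (np.sum(hvs, axis=0) > len(hvs) / 2).astype(np.uint8)

def hamming(a, b):
    """Normalized Hamming distance in [0, 1]; ~0.5 means unrelated."""
    return float(np.mean(a != b))

# Encode a tiny record of three key-value pairs and query it back.
k1, v1, k2, v2, k3, v3 = (random_hv() for _ in range(6))
record = bundle([bind(k1, v1), bind(k2, v2), bind(k3, v3)])
# Unbinding with k1 yields a vector much closer to v1 than to v2 or v3.
probe = bind(record, k1)
```

With D = 10,000, two unrelated hypervectors differ in roughly half of their bits, so the value recovered by unbinding is easily distinguished from everything else stored in the record.<br />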
==Prerequisites and Focus==<br />
If you are an M.Sc. student at ETH Zurich, there is typically no prerequisite. You can come and talk to us, and we will adapt the projects based on your skills. The scope and focus of the projects are wide. You can choose to work on:<br />
<br />
* '''Efficient hardware architectures in emerging technologies''' (e.g., [https://www.zurich.ibm.com/sto/memory/ the IBM computational memory])<br />
* '''System-level design and testing''' <br />
* '''Sensory interfaces''' (analog and digital)<br />
* '''FPGA prototyping, ASIC, and accelerators''' (SystemVerilog/ VHDL)<br />
* '''Exploring new Human Intranet/IoT applications''' (High-level Embedded Programming) <br />
* '''Algorithm design and optimizations''' (Matlab/ Python)<br />
<br />
<br />
<!-- <br />
* '''Theory''' of learning systems including HD computing, Hidden Markov Model (HMM), and clustering algorithms<br />
Overall, our projects cover '''algorithmic, hardware/software, and system level''' design and developments. <br />
However, if you have background in signal processing, VLSI or linear algebra is a super plus! --><br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/7030200/ The Human Intranet--Where Swarms and Humans Meet]<br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors]<br />
*[https://ieeexplore.ieee.org/document/8422472/ Hyperdimensional Modulation for Robust Low-Power Communications]<br />
*[https://iis-people.ee.ethz.ch/~arahimi/papers/TCAS17.pdf High-dimensional Computing as a Nanoscalable Paradigm]<br />
*[http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199794546.001.0001/acprof-9780199794546 How to Build a Brain]<br />
*[https://mitpress.mit.edu/books/sparse-distributed-memory Pentti Kanerva. 1988. Sparse Distributed Memory. MIT Press, Cambridge, MA, USA]<br />
<br />
=Available Projects=<br />
Here, we provide a short description of the related projects for you to see the scope of our work. The directions and details of the projects can be adapted based on your interests and skills. Please do not hesitate to come and talk to us for more details.<br />
<br />
=Online Brain-Computer Interfaces=<br />
<!-- [[File:BCI.png|thumb|center]] <br />
[[File:BCI-dryEEG.jpg|thumb|right]] --><br />
[[File:Emotiv-epoc-14-channel-mobile-eeg.jpg|thumb|right|200px]]<br />
<br />
<br />
===Short Description===<br />
Noninvasive brain–computer interfaces (BCIs) and neuroprostheses aim to provide a communication and control channel based on the recognition of the subject’s intentions from spatiotemporal neural activity typically recorded by EEG electrodes. What makes this particularly challenging, however, is the susceptibility to errors in the recognition of human intentions over time.<br />
<br />
In this project, your goal would be to develop an efficient and fast learning hardware device that replaces the traditional signal processing and classification methods by directly operating with raw data from electrodes in an online fashion. <br />
<br />
===Links===<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/EUSIPCO18.pdf Fast and Accurate Multiclass Inference for Motor Imagery BCIs Using Large Multiscale Temporal and Spectral Features]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/MONET17.pdf Hyperdimensional Computing for Blind and One-Shot Classification of EEG Error-Related Potentials]<br />
* [https://arxiv.org/abs/1812.05705 Exploring Embedding Methods in Binary Hyperdimensional Computing: A Case Study for Motor-Imagery based Brain-Computer Interfaces]<br />
<br />
<br />
===Related Projects===<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = BCI<br />
<br />
</DynamicPageList><br />
<br />
=Epilepsy Seizure Detection Device=<br />
[[File:Non-EEG Seizure.jpg|border|text-top|300px]]<br />
[[File:NeuroPace.jpg|border|text-top|400px]]<br />
<!-- Seizure-prediction.png --><br />
===Short Description===<br />
Seizure detection systems hold promise for improving the quality of life of patients with epilepsy, a condition that afflicts nearly 1% of the world's population.<br />
In this project, your goal would be to develop efficient techniques for EEG as well as non-EEG signals to detect an upcoming seizure on an ultra-low-power device. This covers a wide range of analog and digital techniques. The one-shot and online learning abilities of hyperdimensional computing can come to the rescue.<br />
<br />
===Links===<br />
* [http://ieeg-swez.ethz.ch/ The SWEC-ETHZ iEEG Database and Algorithms]<br />
* [https://www.wysscenter.ch/project/epilepsy-monitoring-seizure-forecasts/ Epilepsy monitoring and seizure forecasts at Wyss Center]<br />
* [https://www.youtube.com/watch?time_continue=87&v=ouyPXkEud40 Controlling tinnitus with neurofeedback]<br />
<br />
===Related Projects===<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = Epilepsy<br />
<br />
</DynamicPageList><br />
<br />
=Extremely Resilient Hyperdimensional Processor=<br />
[[File:BrainChip.jpg|thumb|left]]<br />
<br />
===Short Description===<br />
The most important aspect of hyperdimensional (HD) computing for hardware realization is its robustness against noise and variations in the computing platform. The principles of HD computing allow the implementation of resilient controllers and state machines under extremely noisy conditions. This tolerance of faulty components and low signal-to-noise-ratio (SNR) conditions is achieved through brain-inspired properties of hypervectors: (pseudo)randomness, high dimensionality, and fully distributed holographic representations.<br />
<br />
In this project, your goal would be to design and develop an end-to-end robust HD processor with an extremely resilient controller based on the principles of HD computing, and to measure its resilience against noisy environments and faulty components.<br />
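The robustness claim can be illustrated in a few lines (a sketch with an assumed fault model, not a hardware result): flipping a fraction ''f'' of a bipolar hypervector's components moves it a normalized Hamming distance of exactly ''f'' away, while unrelated hypervectors sit near distance 0.5, so even a heavily corrupted vector remains unambiguously closer to its original than to anything else in memory.<br />

```python
import random

D = 10_000
rng = random.Random(1)

def rand_hv():
    return [rng.choice((-1, 1)) for _ in range(D)]

def flip(v, frac):
    """Simulate faults: negate a random fraction of the components."""
    out = list(v)
    for i in rng.sample(range(D), int(frac * D)):
        out[i] = -out[i]
    return out

def hamming(a, b):
    """Normalized Hamming distance between bipolar hypervectors."""
    return sum(x != y for x, y in zip(a, b)) / D

hv = rand_hv()
for frac in (0.1, 0.2, 0.3):
    print(frac, hamming(hv, flip(hv, frac)))  # distance grows exactly with frac

# An unrelated hypervector sits near distance 0.5 (quasi-orthogonality),
# so even a 30%-corrupted vector is still far closer to its original.
print(round(hamming(hv, rand_hv()), 1))       # prints: 0.5
```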
<br />
===Links===<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISLPED16.pdf A Robust and Energy-Efficient Classifier Using Brain-Inspired Hyperdimensional Computing]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/DAC18.pdf PULP-HD: Accelerating Brain-Inspired High-Dimensional Computing on a Parallel Ultra-Low Power Platform]<br />
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8216554 Associative Synthesis of Finite State Automata Model of a Controlled Object with Hyperdimensional Computing]<br />
<br />
=Flexible High-Density Sensors for Hand Gesture Recognition=<br />
<!-- [[File:Hyperdimensional_EMG.png|thumb|center]] --><br />
[[File:FlexEMG.png|thumb|right|500px]]<br />
<br />
<br />
===Short Description===<br />
Surface electromyography (EMG) signals are the superposition of the electrical activity of the underlying muscles during contractions.<br />
Wearable surface EMG devices have a wide range of applications in controlling upper-limb prostheses and in hand gesture recognition systems intended for consumer human-machine interaction. A high-density EMG electrode array covering the whole arm eases targeting the most desirable muscle locations and copes with sensor misplacement.<br />
For robust gesture recognition from such EMG sensors, we rely on brain-inspired HD computing.<br />
<br />
In this project, your goal would be to develop new sensors and an RTL implementation of HD computing for one-shot gesture learning on an ultra-low-power device.<br />
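One common HD encoding for such biosignal streams can be sketched as follows (illustrative Python; the pattern names and window length are assumptions): each discrete activation pattern gets a random hypervector from an item memory, position in time is encoded by rotating components, and the snapshots of a window are bound into a single ''n''-gram hypervector that is exactly reproducible yet quasi-orthogonal to differently ordered windows.<br />

```python
import random

D = 10_000
rng = random.Random(2)

def rand_hv():
    return [rng.choice((-1, 1)) for _ in range(D)]

def bind(a, b):
    return [x * y for x, y in zip(a, b)]

def permute(v, k):
    """k-fold rotation of the components encodes position in time."""
    k %= D
    return v[-k:] + v[:-k]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Item memory: one random HV per discrete EMG activation pattern.
item = {p: rand_hv() for p in ("rest", "flex", "extend")}

def encode_ngram(window):
    """Bind time-permuted snapshots of the window into one hypervector."""
    acc = [1] * D
    for t, pattern in enumerate(window):
        acc = bind(acc, permute(item[pattern], len(window) - 1 - t))
    return acc

g_a = encode_ngram(["rest", "flex", "flex"])
g_b = encode_ngram(["rest", "flex", "flex"])
g_c = encode_ngram(["flex", "flex", "rest"])

print(dot(g_a, g_b) == D)           # identical windows match exactly: True
print(abs(dot(g_a, g_c)) < D // 5)  # reordered window is quasi-orthogonal: True
```

Because binding and rotation are componentwise, an RTL realization of this encoder reduces to wide XOR/shift datapaths, which is what makes it a good target for ultra-low-power hardware.<br />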
<br />
===Links===<br />
* [https://bwrc.eecs.berkeley.edu/sites/default/files/files/u2630/flexemg_v2_lq.mp4#t=2 Flexible EMG Demo]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISCAS2018.pdf Gesture Recognition System with Flexible High-Density Sensors] <br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/papers/ICRC16.pdf Hyperdimensional Biosignal Processing: A Case Study for EMG-based Hand Gesture Recognition (paper)]<br />
* [https://arxiv.org/abs/1901.00234 Adaptive EMG-based hand gesture recognition using hyperdimensional computing (paper)]<br />
* [https://github.com/abbas-rahimi/HDC-EMG Related Matlab code]<br />
<br />
=Robot Learning by Demonstration=<br />
[[File:Robot-VSA.png|thumb|left|Image source: Neubert et al, IROS 2016]]<br />
===Short Description===<br />
Robot learning from demonstration is a paradigm for enabling robots to autonomously perform new tasks.<br />
HD computing is a natural fit in this area, since it can model the relation between a robot's sensory inputs and actuator outputs by learning from a few demonstrations.<br />
In this project, your goal would be to develop algorithms and implementations based on HD computing that enable a robot to learn from online demonstrations.<br />
Further, such an HD-computing-based paradigm can be coupled with a brain-computer interface device, enabling control of a robot through EEG signals from the brain. This has a compelling application in neuroprosthetics, where the system learns from the patient (see [https://www.youtube.com/watch?time_continue=26&v=jAtcVlTqxeA this] demonstration at EPFL).<br />
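The sensor-to-actuator modeling described above can be sketched in a toy example (the sensor and action names are invented for illustration): each demonstrated sensor/action pair is bound into one hypervector, the pairs are bundled by componentwise majority into a single "program" vector, and at run time the current sensor reading is unbound from the program to retrieve the nearest known action.<br />

```python
import random

D = 10_000
rng = random.Random(3)

def rand_hv():
    return [rng.choice((-1, 1)) for _ in range(D)]

def bind(a, b):
    """Elementwise product: binds a pair, and also unbinds it (self-inverse)."""
    return [x * y for x, y in zip(a, b)]

def sim(a, b):
    return sum(x * y for x, y in zip(a, b)) / D

sensors = {s: rand_hv() for s in ("wall_left", "wall_right", "clear")}
actions = {a: rand_hv() for a in ("turn_right", "turn_left", "forward")}

# Each demonstration is a bound sensor/action pair; the whole reactive
# behavior "program" is their componentwise-majority bundle: one hypervector.
pairs = [
    bind(sensors["wall_left"], actions["turn_right"]),
    bind(sensors["wall_right"], actions["turn_left"]),
    bind(sensors["clear"], actions["forward"]),
]
program = [1 if sum(t) > 0 else -1 for t in zip(*pairs)]

def act(sensor_state):
    """Unbind the sensor reading, then pick the most similar known action."""
    query = bind(sensors[sensor_state], program)
    return max(actions, key=lambda a: sim(query, actions[a]))

print(act("wall_left"), act("clear"))  # prints: turn_right forward
```

Adding a new demonstration is just another bound pair folded into the bundle, which is why a few demonstrations suffice to extend the behavior.<br />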
<br />
===Links===<br />
* [https://actu.epfl.ch/news/when-the-neuroprosthetics-learn-from-the-patient-5/ When the neuroprosthetics learn from the patient]<br />
* [https://www.tu-chemnitz.de/etit/proaut/publications/IROS2016_neubert.pdf Learning Vector Symbolic Architectures for Reactive Robot Behaviours] <br />
* [https://www.aaai.org/ocs/index.php/WS/AAAIW13/paper/download/7075/6578 Learning Behavior Hierarchies via High-Dimensional Sensor Projection (paper)]<br />
<br />
=Smart Eyeglass for Drones=<br />
[[File:Jins_6axis.png|thumb|text-top|right|250px]]<br />
[[File:Jins_EOG.png|thumb|text-top|right|250px]]<br />
<br />
===Short Description===<br />
This project plans to deploy and build upon a new breed of eyewear that lets you look inside yourself instead of just at what is in front of you, revealing shifts in your emotional state, your activity logs, and your health. The device currently provides 6-axis motion sensors as well as EOG sensors, which together allow recognition of body movements, eye movements, and state of mind. In this project, your goal is to interface with this device and extend it with other sensors to create novel machine learning applications, e.g., controlling the movements of our [http://iis-projects.ee.ethz.ch/index.php/Energy_Efficient_Autonomous_UAVs nano-size quadrotor].<br />
<br />
===Links===<br />
* [https://jins-meme.com/en/concept/ JINS Meme Smartglass]<br />
* [https://www.youtube.com/watch?v=Om_F0uyfjyc A game application of smartglass]<br />
<br />
=Hyperdimensional Affective Computing=<br />
[[File:Emotion-recognition.jpg|border|super|200px]]<br />
[[File:Emotions-on-arousal-valence-space.jpg|border|super|300px]]<br />
<br />
===Short Description===<br />
Affective computing (sometimes called artificial emotional intelligence) is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. We focus on emotion recognition and interpretation. Emotion is a subjective mental state caused by specific events, usually accompanied by characteristic behaviors and involuntary physiological changes. Multi-channel physiological signals (e.g., GSR, ECG, EEG, EOG) are therefore good inputs for emotion analysis, and they can be collected easily and continuously by wearable sensors. However, because high-quality machine learning models need a huge amount of training data, energy-efficiency constraints and robustness issues become major performance bottlenecks, especially for wearable devices. To overcome this, HD computing can come to the rescue by providing a low-energy, robust, and fast-learning computational paradigm.<br />
<br />
In this project, your goal would be to develop an efficient and robust learning method based on hyperdimensional spaces to improve accuracy and reduce energy consumption.<br />
<br />
===Links===<br />
* [https://www.research-collection.ethz.ch/handle/20.500.11850/315807 Hyperdimensional Computing-based Multimodality Emotion Recognition with Physiological Signals]<br />
<br />
=More Projects=<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = Hyper-dimensional Computing<br />
</DynamicPageList><br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = Human Intranet<br />
</DynamicPageList><br />
<br />
=Completed Projects=<br />
These are projects that were recently completed: <br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = Human Intranet<br />
<br />
</DynamicPageList><br />
<br />
=Where to find us=<br />
* [https://iis-people.ee.ethz.ch/~herschmi/ Michael Hersche]<br />
** '''e-mail''': [mailto:herschmi@iis.ee.ethz.ch herschmi@iis.ee.ethz.ch]<br />
** ETZ J76.2<br />
* [https://iis-people.ee.ethz.ch/~xiaywang/ Xiaying Wang]<br />
** '''e-mail''': [mailto:xiaywang@iis.ee.ethz.ch xiaywang@iis.ee.ethz.ch]<br />
** ETZ J68.2<br />
* [https://iis-people.ee.ethz.ch/~arahimi/ Dr. Abbas Rahimi]<br />
** '''e-mail''': [mailto:abbas@iis.ee.ethz.ch abbas@iis.ee.ethz.ch]<br />
** ETZ J85<br />
* [http://www.iis.ee.ethz.ch/people/person-detail.html?persid=194234 Prof. Luca Benini]<br />
** '''e-mail''': [mailto:lbenini@iis.ee.ethz.ch lbenini@iis.ee.ethz.ch]<br />
** ETZ J84</div>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Human Intranet]]<br />
[[Category:ASIC]]<br />
[[Category:FPGA]]<br />
[[Category:In progress]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
<br />
__NOTOC__<br />
=What is Human Intranet?=<br />
[[File:HI.png|thumb|right|450px]]<br />
The world around us is getting a lot smarter quickly: virtually every single component of our daily living environment is being equipped with sensors, actuators, processing, and connection into a network that will soon count billions of nodes and trillions of sensors. These devices only interact with the human through the traditional input and output channels. Hence, they only indirectly communicate with our brain—through our five sense modalities—forming two separate computing systems: biological versus physical. It could be made a lot more effective if a direct high bandwidth link existed between the two systems, allowing them to truly collaborate with each other and to offer opportunities for enhanced functionality that would otherwise be hard to accomplish. The emergence of miniaturized sense, compute and actuate devices as well as interfaces that are form-fitted to the human body opens the door for a symbiotic convergence between biological function and physical computing.<br />
<br />
Human Intranet is an open, scalable platform that seamlessly integrates an ever-increasing number of sensor, actuation, computation, storage, communication and energy nodes located on, in, or around the human body acting in symbiosis with the functions provided by the body itself. Human Intranet presents a system vision in which, for example, disease would be treated by chronically measuring biosignals deep in the body, or by providing targeted, therapeutic interventions that respond on demand and in situ. To gain a holistic view of a person’s health, these sensors and actuators must communicate and collaborate with each other. Most of such systems prototyped or envisioned today serve to address deficiencies in the human sensory or motor control systems due to birth defects, illnesses, or accidents (e.g., invasive brain-machine interfaces, cochlear implants, artificial retinas, etc.). While all these systems target defects, one can easily imagine that this could lead to many types of enhancement and/or enable direct interaction with the environment: to make us humans smarter!<br />
<br />
Here, in our projects, we mainly focus on '''sensor, computation, communication, and emerging storage''' aspects to develop very efficient closed-loop sense-interpret-actuate systems, enabling distributed autonomous behavior. For example, to design the ''brain'' of our physical computing (i.e., the compute/interpret component), we rely on computing with ultra-wide words (e.g., 10,000 bits) that eases interfacing with various sensor modalities and actuators. This novel computing paradigm is called hyperdimensional (HD) computing that is inspired from the very size of the biological brain’s circuits: assuming 1 bit per synapse, the brain is made up of more than 24 billion of such ultra-wide words. You can watch some of our demos:<br />
* [https://bwrc.eecs.berkeley.edu/sites/default/files/files/u2630/flexemg_v2_lq.mp4#t=2 Video1]<br />
* [https://www.youtube.com/watch?time_continue=9&v=vTQGMQ6QaJE Video2]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISSCC18-Demo.pdf PDF] <br />
<br />
You can also find a collection of complemented projects with source codes/datasets here:<br />
* [https://github.com/HyperdimensionalComputing/collection Github link]<br />
<br />
==Prerequisites and Focus==<br />
If you are an M.S. student at the ETHZ, typically there is no prerequisite. You can come and talk to us and we adapt the projects based on your skills. The scope and focus of projects are wide. You can choose to work on:<br />
<br />
* '''Efficient hardware architectures in emerging technologies''' (e.g., [https://www.zurich.ibm.com/sto/memory/ the IBM computational memory])<br />
* '''System-level design and testing''' <br />
* '''Sensory interfaces''' (analog and digital)<br />
* '''FPGA prototyping, ASIC, and accelerators''' (SystemVerilog/ VHDL)<br />
* '''Exploring new Human Intranet/IoT applications''' (High-level Embedded Programming) <br />
* '''Algorithm design and optimizations''' (Matlab/ Python)<br />
<br />
<br />
<!-- <br />
* '''Theory''' of learning systems including HD computing, Hidden Markov Model (HMM), and clustering algorithms<br />
Overall, our projects cover '''algorithmic, hardware/software, and system level''' design and developments. <br />
However, if you have background in signal processing, VLSI or linear algebra is a super plus! --><br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/7030200/ The Human Intranet--Where Swarms and Humans Meet]<br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors]<br />
*[https://ieeexplore.ieee.org/document/8422472/ Hyperdimensional Modulation for Robust Low-Power Communications]<br />
*[https://iis-people.ee.ethz.ch/~arahimi/papers/TCAS17.pdf High-dimensional Computing as a Nanoscalable Paradigm]<br />
*[http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199794546.001.0001/acprof-9780199794546 How to Build a Brain]<br />
*[https://mitpress.mit.edu/books/sparse-distributed-memory Pentti Kanerva. 1988. Sparse Distributed Memory. MIT Press, Cambridge, MA, USA]<br />
<br />
=Available Projects=<br />
Here, we provide short descriptions of the related projects so you can see the scope of our work. The directions and details of the projects can be adapted to your interests and skills. Please do not hesitate to come and talk to us for more details.<br />
<br />
=Online Brain-Computer Interfaces=<br />
<!-- [[File:BCI.png|thumb|center]] <br />
[[File:BCI-dryEEG.jpg|thumb|right]] --><br />
[[File:Emotiv-epoc-14-channel-mobile-eeg.jpg|thumb|right|200px]]<br />
<br />
<br />
===Short Description===<br />
Noninvasive brain–computer interfaces (BCI) and neuroprostheses aim to provide a communication and control channel based on the recognition of the subject’s intentions from spatiotemporal neural activity typically recorded by EEG electrodes. What makes this particularly challenging, however, is the susceptibility to errors over time in the recognition of human intentions.<br />
<br />
In this project, your goal would be to develop an efficient, fast-learning hardware device that replaces traditional signal processing and classification methods by operating directly on raw electrode data in an online fashion. <br />
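As an illustration of the online-learning flavor described above (a sketch under simplifying assumptions: synthetic data instead of real EEG, and a plain nearest-prototype rule), class prototypes can be kept as integer accumulators and updated one trial at a time:<br />

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(1)

class OnlineHDClassifier:
    """Minimal nearest-prototype classifier over binary hypervectors."""
    def __init__(self, n_classes):
        # integer accumulators allow cheap incremental (online) updates
        self.acc = np.zeros((n_classes, D), dtype=np.int32)
        self.count = np.zeros(n_classes, dtype=np.int32)

    def update(self, hv, label):
        self.acc[label] += hv          # one trial = one O(D) addition
        self.count[label] += 1

    def prototype(self, label):
        # binarize by bitwise majority over the accumulated trials
        return (self.acc[label] * 2 > self.count[label]).astype(np.uint8)

    def predict(self, hv):
        dists = [np.count_nonzero(hv ^ self.prototype(c))
                 for c in range(len(self.count))]
        return int(np.argmin(dists))

# toy demo: two synthetic "classes" = noisy copies of two random hypervectors
proto = [rng.integers(0, 2, D, dtype=np.uint8) for _ in range(2)]
def noisy(hv, p=0.2):
    """Flip each bit with probability p, mimicking trial-to-trial variability."""
    return hv ^ (rng.random(D) < p).astype(np.uint8)

clf = OnlineHDClassifier(2)
for _ in range(10):
    for c in (0, 1):
        clf.update(noisy(proto[c]), c)
assert clf.predict(noisy(proto[0])) == 0
assert clf.predict(noisy(proto[1])) == 1
```

The update step is a single vector addition, which is what makes online, on-device learning plausible; a real EEG pipeline would first need an encoder from raw electrode samples to hypervectors, which is deliberately omitted here.<br />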
<br />
===Links===<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/EUSIPCO18.pdf Fast and Accurate Multiclass Inference for Motor Imagery BCIs Using Large Multiscale Temporal and Spectral Features]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/MONET17.pdf Hyperdimensional Computing for Blind and One-Shot Classification of EEG Error-Related Potentials]<br />
* [https://arxiv.org/abs/1812.05705 Exploring Embedding Methods in Binary Hyperdimensional Computing: A Case Study for Motor-Imagery based Brain-Computer Interfaces]<br />
<br />
<br />
===Related Projects===<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = BCI<br />
<br />
</DynamicPageList><br />
<br />
=Epilepsy Seizure Detection Device=<br />
[[File:Non-EEG Seizure.jpg|border|text-top|300px]]<br />
[[File:NeuroPace.jpg|border|text-top|400px]]<br />
<!-- Seizure-prediction.png --><br />
===Short Description===<br />
Seizure detection systems hold promise for improving the quality of life for patients with epilepsy, which afflicts nearly 1% of the world's population.<br />
In this project, your goal would be to develop efficient techniques for EEG as well as non-EEG signals to detect an upcoming seizure in an ultra-low-power device. This covers a wide range of analog and digital techniques. The one-shot and online learning abilities of hyperdimensional computing can come to the rescue.<br />
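One HD primitive that is relevant for detecting events that unfold over time is sequence encoding by permutation: rotating a hypervector by its position in a window makes the encoding order-sensitive. The sketch below is illustrative only; in particular, turning real (i)EEG samples into hypervectors first is an assumption not covered here:<br />

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(4)

def rotate(hv, k=1):
    """Permutation (cyclic shift) encodes the position of an item in a sequence."""
    return np.roll(hv, k)

def encode_window(symbol_hvs):
    """XOR-bind successive samples after position-dependent rotation:
    the result depends on temporal order, not just on the set of samples."""
    out = np.zeros(D, dtype=np.uint8)
    for t, hv in enumerate(symbol_hvs):
        out = out ^ rotate(hv, t)
    return out

a, b, c = (rng.integers(0, 2, D, dtype=np.uint8) for _ in range(3))

abc = encode_window([a, b, c])
acb = encode_window([a, c, b])
# swapping two samples makes the window encodings nearly orthogonal:
assert np.count_nonzero(abc ^ acb) / D > 0.4
# the encoding itself is deterministic and cheap (rotations and XORs only):
assert np.array_equal(abc, encode_window([a, b, c]))
```

Because the whole window collapses into one D-bit word, comparing a live window against stored "pre-seizure" window prototypes reduces to Hamming distances, which suits an ultra-low-power digital implementation.<br />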
<br />
===Links===<br />
* [http://ieeg-swez.ethz.ch/ The SWEC-ETHZ iEEG Database and Algorithms]<br />
* [https://www.wysscenter.ch/project/epilepsy-monitoring-seizure-forecasts/ Epilepsy monitoring and seizure forecasts at Wyss Center]<br />
* [https://www.youtube.com/watch?time_continue=87&v=ouyPXkEud40 Controlling tinnitus with neurofeedback]<br />
<br />
===Related Projects===<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = Epilepsy<br />
<br />
</DynamicPageList><br />
<br />
=Extremely Resilient Hyperdimensional Processor=<br />
[[File:BrainChip.jpg|thumb|left]]<br />
<br />
===Short Description===<br />
The most important aspect of hyperdimensional (HD) computing for hardware realization is its robustness against noise and variations in the computing platform. The principles of HD computing allow implementing resilient controllers and state machines for extremely noisy conditions. Its tolerance of faulty components and low signal-to-noise ratio (SNR) conditions is achieved by brain-inspired properties of hypervectors: (pseudo)randomness, high dimensionality, and fully distributed holographic representations.<br />
<br />
In this project, your goal would be to design and develop an end-to-end robust HD processor with an extremely resilient controller based on the principles of HD computing, and to measure its resiliency against noisy environments and faulty components.<br />
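The robustness claim can be checked with a small, illustrative experiment: corrupt a large fraction of a hypervector's bits (as faulty components or a noisy channel would) and verify that it remains far closer to its original than to an unrelated vector:<br />

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(2)

def flip_bits(hv, fraction):
    """Simulate faulty storage/transmission by flipping a random subset of bits."""
    idx = rng.choice(D, size=int(fraction * D), replace=False)
    out = hv.copy()
    out[idx] ^= 1
    return out

def hamming(a, b):
    """Normalized Hamming distance: 0.0 identical, ~0.5 unrelated."""
    return np.count_nonzero(a ^ b) / D

x = rng.integers(0, 2, D, dtype=np.uint8)
other = rng.integers(0, 2, D, dtype=np.uint8)

# even with 30% of the bits corrupted, x is still easily recognized:
corrupted = flip_bits(x, 0.30)
assert hamming(corrupted, x) < 0.35        # distance ~0.30 to the original
assert hamming(corrupted, other) > 0.45    # distance ~0.50 to an unrelated vector
```

The information is spread holographically over all D bits, so no single bit (or even a large minority of bits) is critical; this is the property a resilient HD controller exploits.<br />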
<br />
===Links===<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISLPED16.pdf A Robust and Energy-Efficient Classifier Using Brain-Inspired Hyperdimensional Computing]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/DAC18.pdf PULP-HD: Accelerating Brain-Inspired High-Dimensional Computing on a Parallel Ultra-Low Power Platform]<br />
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8216554 Associative Synthesis of Finite State Automata Model of a Controlled Object with Hyperdimensional Computing]<br />
<br />
=Flexible High-Density Sensors for Hand Gesture Recognition=<br />
<!-- [[File:Hyperdimensional_EMG.png|thumb|center]] --><br />
[[File:FlexEMG.png|thumb|right|500px]]<br />
<br />
<br />
===Short Description===<br />
Surface electromyography (EMG) signals are the superposition of the electrical activity of the underlying muscles when contractions occur.<br />
Wearable surface EMG devices have a wide range of applications in controlling upper-limb prostheses and in hand gesture recognition systems intended for consumer human-machine interaction. A high-density EMG electrode array covering the whole arm eases targeting the most desirable muscle locations and copes with sensor-misplacement issues.<br />
For robust gesture recognition from such EMG sensors, we rely on brain-inspired HD computing.<br />
<br />
In this project, your goal would be to develop new sensors and an RTL implementation of HD computing for one-shot gesture learning in an ultra-low-power device.<br />
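To picture how a multi-channel sample might be encoded into a single hypervector (the channel count, quantization, and encoder below are illustrative assumptions, not the project's actual design): give each electrode a fixed random ''item'' hypervector and each quantized amplitude a ''level'' hypervector, bind channel with level, and bundle across channels:<br />

```python
import numpy as np

D = 10_000
N_CHANNELS = 64       # illustrative high-density array size
N_LEVELS = 8          # illustrative amplitude quantization
rng = np.random.default_rng(3)

# item memory: one fixed random hypervector per electrode and per amplitude level
channel_hv = rng.integers(0, 2, (N_CHANNELS, D), dtype=np.uint8)
level_hv = rng.integers(0, 2, (N_LEVELS, D), dtype=np.uint8)

def encode_sample(levels):
    """Bind each channel with its amplitude level, then bundle across channels
    by bitwise majority (ties broken toward 0)."""
    bound = channel_hv ^ level_hv[levels]          # (N_CHANNELS, D) XOR bindings
    votes = bound.sum(axis=0)                      # per-bit vote across channels
    return (votes * 2 > N_CHANNELS).astype(np.uint8)

def sim(x, y):
    return 1 - np.count_nonzero(x ^ y) / D

a = rng.integers(0, N_LEVELS, N_CHANNELS)          # one multi-channel sample
b = a.copy(); b[:4] = (b[:4] + 1) % N_LEVELS       # perturb 4 of 64 channels
unrelated = rng.integers(0, N_LEVELS, N_CHANNELS)

assert sim(encode_sample(a), encode_sample(b)) > 0.7   # near-duplicates stay close
assert abs(sim(encode_sample(a), encode_sample(unrelated)) - 0.5) < 0.1
```

Because electrode misplacement or a few noisy channels perturb only a few summands of the majority vote, the encoded sample degrades gracefully rather than abruptly, which matches the robustness argument above.<br />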
<br />
===Links===<br />
* [https://bwrc.eecs.berkeley.edu/sites/default/files/files/u2630/flexemg_v2_lq.mp4#t=2 Flexible EMG Demo]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISCAS2018.pdf Gesture Recognition System with Flexible High-Density Sensors] <br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/papers/ICRC16.pdf Hyperdimensional Biosignal Processing: A Case Study for EMG-based Hand Gesture Recognition (paper)]<br />
* [https://arxiv.org/abs/1901.00234 Adaptive EMG-based hand gesture recognition using hyperdimensional computing (paper)]<br />
* [https://github.com/abbas-rahimi/HDC-EMG Related Matlab code]<br />
<br />
==Robot Learning by Demonstration==<br />
[[File:Robot-VSA.png|thumb|left|Image source: Neubert et al, IROS 2016]]<br />
===Short Description===<br />
Robot learning from demonstration is a paradigm for enabling robots to autonomously perform new tasks. <br />
HD computing is a natural fit in this area since it models the relation between a robot's sensory inputs and actuator outputs by learning from a few demonstrations. <br />
In this project, your goal would be to develop algorithms and implementations based on HD computing that enable a robot to learn from online demonstrations. <br />
Furthermore, such an HD computing-based paradigm can be coupled to a brain-computer interface device, enabling control of a robot by EEG signals from the brain. This has a wonderful application in neuroprosthetics, where the system learns from the patient (see [https://www.youtube.com/watch?time_continue=26&v=jAtcVlTqxeA this] demonstration at EPFL).<br />
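A minimal sketch of the learning-from-demonstration idea (the context and action names are invented for illustration): bind each observed sensory context to the demonstrated action, bundle the pairs into one ''program'' hypervector, and recover an action later by unbinding the current context:<br />

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(5)
hv = lambda: rng.integers(0, 2, D, dtype=np.uint8)

# illustrative item memory: sensory contexts and actuator commands
contexts = {name: hv() for name in ("wall_left", "wall_right", "open")}
actions = {name: hv() for name in ("turn_right", "turn_left", "forward")}

# one "demonstration" = bind each observed context to the action taken,
# then bundle the pairs (bitwise majority) into a single program hypervector
pairs = [contexts["wall_left"] ^ actions["turn_right"],
         contexts["wall_right"] ^ actions["turn_left"],
         contexts["open"] ^ actions["forward"]]
program = (np.sum(pairs, axis=0) * 2 > len(pairs)).astype(np.uint8)

def recall_action(context):
    """Unbind: XOR the query context back out of the program, then pick the
    nearest stored action by Hamming distance (cleanup memory)."""
    noisy_action = program ^ context
    return min(actions, key=lambda n: np.count_nonzero(noisy_action ^ actions[n]))

assert recall_action(contexts["wall_left"]) == "turn_right"
assert recall_action(contexts["open"]) == "forward"
```

The entire behavior fits in one D-bit word, and adding a new demonstrated pair is just another vector in the bundle, which is why few-shot, incremental teaching is natural in this representation.<br />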
<br />
===Links===<br />
* [https://actu.epfl.ch/news/when-the-neuroprosthetics-learn-from-the-patient-5/ When the neuroprosthetics learn from the patient]<br />
* [https://www.tu-chemnitz.de/etit/proaut/publications/IROS2016_neubert.pdf Learning Vector Symbolic Architectures for Reactive Robot Behaviours] <br />
* [https://www.aaai.org/ocs/index.php/WS/AAAIW13/paper/download/7075/6578 Learning Behavior Hierarchies via High-Dimensional Sensor Projection (paper)]<br />
<br />
=Smart Eyeglass for Drones=<br />
[[File:Jins_6axis.png|thumb|text-top|right|250px]]<br />
[[File:Jins_EOG.png|thumb|text-top|right|250px]]<br />
<br />
===Short Description===<br />
This project plans to deploy and build upon a new breed of eyewear that allows you to look inside yourself, instead of just at what is in front of you. These insights help you see deep within yourself by showing shifts in your emotional state, your activity logs, as well as your health. The device currently provides 6-axis motion sensors as well as EOG sensors, which collectively allow recognizing body movements, eye movements, and state of mind. In this project, your goal is to interface with this device and extend it with other sensors to create novel machine learning applications, e.g., controlling the movements of our [http://iis-projects.ee.ethz.ch/index.php/Energy_Efficient_Autonomous_UAVs nano-size quadrotor].<br />
<br />
===Links===<br />
* [https://jins-meme.com/en/concept/ JINS Meme Smartglass]<br />
* [https://www.youtube.com/watch?v=Om_F0uyfjyc A game application of smartglass]<br />
<br />
=Hyperdimensional Affective Computing=<br />
[[File:Emotion-recognition.jpg|border|super|200px]]<br />
[[File:Emotions-on-arousal-valence-space.jpg|border|super|300px]]<br />
<br />
===Short Description===<br />
Affective computing (sometimes called artificial emotional intelligence) is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. We focus on emotion recognition and interpretation. Emotion is a subjective mental state caused by specific events and is usually accompanied by characteristic behaviors and involuntary physiological changes. Multi-channel physiological signals (e.g., GSR, ECG, EEG, EOG) are therefore good inputs for emotion analysis, and they can be collected easily and continuously by wearable sensors. However, because high-quality machine learning models need a huge amount of training data, energy-efficiency constraints and robustness issues become major performance bottlenecks, especially for wearable devices. To overcome this, HD computing can come to the rescue by providing a low-energy, robust, and fast-learning computational paradigm.<br />
<br />
In this project, your goal would be to develop an efficient and robust learning method based on hyperdimensional spaces to improve accuracy and reduce energy consumption.<br />
<br />
===Links===<br />
* [https://www.research-collection.ethz.ch/handle/20.500.11850/315807 Hyperdimensional Computing-based Multimodality Emotion Recognition with Physiological Signals]<br />
<br />
=More Projects=<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = Human Intranet<br />
<br />
</DynamicPageList><br />
<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = Hyper-dimensional Computing<br />
<br />
</DynamicPageList><br />
<br />
=Completed Projects=<br />
These are projects that were recently completed: <br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = Human Intranet<br />
<br />
</DynamicPageList><br />
<br />
=Where to find us=<br />
* [https://iis-people.ee.ethz.ch/~herschmi/ Michael Hersche]<br />
** '''e-mail''': [mailto:herschmi@iis.ee.ethz.ch herschmi@iis.ee.ethz.ch]<br />
** ETZ J76.2<br />
* [https://iis-people.ee.ethz.ch/~xiaywang/ Xiaying Wang]<br />
** '''e-mail''': [mailto:xiaywang@iis.ee.ethz.ch xiaywang@iis.ee.ethz.ch]<br />
** ETZ J68.2<br />
* [https://iis-people.ee.ethz.ch/~arahimi/ Dr. Abbas Rahimi]<br />
** '''e-mail''': [mailto:abbas@iis.ee.ethz.ch abbas@iis.ee.ethz.ch]<br />
** ETZ J85<br />
* [http://www.iis.ee.ethz.ch/people/person-detail.html?persid=194234 Prof. Luca Benini]<br />
** '''e-mail''': [mailto:lbenini@iis.ee.ethz.ch lbenini@iis.ee.ethz.ch]<br />
** ETZ J84</div>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Human Intranet]]<br />
[[Category:ASIC]]<br />
[[Category:FPGA]]<br />
[[Category:In progress]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
<br />
__NOTOC__<br />
=What is Human Intranet?=<br />
[[File:HI.png|thumb|right|450px]]<br />
The world around us is getting a lot smarter quickly: virtually every single component of our daily living environment is being equipped with sensors, actuators, processing, and connection into a network that will soon count billions of nodes and trillions of sensors. These devices only interact with the human through the traditional input and output channels. Hence, they only indirectly communicate with our brain—through our five sense modalities—forming two separate computing systems: biological versus physical. It could be made a lot more effective if a direct high bandwidth link existed between the two systems, allowing them to truly collaborate with each other and to offer opportunities for enhanced functionality that would otherwise be hard to accomplish. The emergence of miniaturized sense, compute and actuate devices as well as interfaces that are form-fitted to the human body opens the door for a symbiotic convergence between biological function and physical computing.<br />
<br />
Human Intranet is an open, scalable platform that seamlessly integrates an ever-increasing number of sensor, actuation, computation, storage, communication and energy nodes located on, in, or around the human body acting in symbiosis with the functions provided by the body itself. Human Intranet presents a system vision in which, for example, disease would be treated by chronically measuring biosignals deep in the body, or by providing targeted, therapeutic interventions that respond on demand and in situ. To gain a holistic view of a person’s health, these sensors and actuators must communicate and collaborate with each other. Most of such systems prototyped or envisioned today serve to address deficiencies in the human sensory or motor control systems due to birth defects, illnesses, or accidents (e.g., invasive brain-machine interfaces, cochlear implants, artificial retinas, etc.). While all these systems target defects, one can easily imagine that this could lead to many types of enhancement and/or enable direct interaction with the environment: to make us humans smarter!<br />
<br />
Here, in our projects, we mainly focus on '''sensor, computation, communication, and emerging storage''' aspects to develop very efficient closed-loop sense-interpret-actuate systems, enabling distributed autonomous behavior. For example, to design the ''brain'' of our physical computing (i.e., the compute/interpret component), we rely on computing with ultra-wide words (e.g., 10,000 bits) that eases interfacing with various sensor modalities and actuators. This novel computing paradigm is called hyperdimensional (HD) computing that is inspired from the very size of the biological brain’s circuits: assuming 1 bit per synapse, the brain is made up of more than 24 billion of such ultra-wide words. You can watch some of our demos:<br />
* [https://bwrc.eecs.berkeley.edu/sites/default/files/files/u2630/flexemg_v2_lq.mp4#t=2 Video1]<br />
* [https://www.youtube.com/watch?time_continue=9&v=vTQGMQ6QaJE Video2]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISSCC18-Demo.pdf PDF] <br />
<br />
You can also find a collection of related projects with source code and datasets here:<br />
* [https://github.com/HyperdimensionalComputing/collection Github link]<br />
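To make the ultra-wide-word idea concrete, the sketch below (plain Python with illustrative names, not code from any particular project above) shows the three core HD operations on 10,000-dimensional bipolar hypervectors: binding, bundling, and similarity.<br />

```python
import random

random.seed(0)
D = 10_000  # dimensionality of the ultra-wide words (hypervectors)

def rand_hv():
    """Random bipolar hypervector; any two of these are quasi-orthogonal."""
    return [random.choice((-1, 1)) for _ in range(D)]

def bind(a, b):
    """Element-wise multiplication associates two hypervectors."""
    return [x * y for x, y in zip(a, b)]

def bundle(hvs):
    """Element-wise majority superimposes several hypervectors into one."""
    return [1 if sum(col) >= 0 else -1 for col in zip(*hvs)]

def sim(a, b):
    """Normalized dot product in [-1, 1]; near 0 means unrelated."""
    return sum(x * y for x, y in zip(a, b)) / D

a, b, c = rand_hv(), rand_hv(), rand_hv()
s = bundle([a, b, c])
print(round(sim(s, a), 2))  # the bundle stays similar to each of its parts
print(round(sim(a, b), 2))  # while independent hypervectors are near zero
```

With D this large, these similarity gaps are reliable despite the randomness, which is what the projects below build on.<br />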
<br />
==Prerequisites and Focus==<br />
If you are an M.S. student at ETH Zurich, there are typically no prerequisites. Come and talk to us, and we will adapt the project to your skills. The scope and focus of the projects are broad. You can choose to work on:<br />
<br />
* '''Efficient hardware architectures in emerging technologies''' (e.g., [https://www.zurich.ibm.com/sto/memory/ the IBM computational memory])<br />
* '''System-level design and testing''' <br />
* '''Sensory interfaces''' (analog and digital)<br />
* '''FPGA prototyping, ASIC, and accelerators''' (SystemVerilog/ VHDL)<br />
* '''Exploring new Human Intranet/IoT applications''' (High-level Embedded Programming) <br />
* '''Algorithm design and optimizations''' (Matlab/ Python)<br />
<br />
<br />
<!-- <br />
* '''Theory''' of learning systems including HD computing, Hidden Markov Model (HMM), and clustering algorithms<br />
Overall, our projects cover '''algorithmic, hardware/software, and system level''' design and developments. <br />
However, if you have background in signal processing, VLSI or linear algebra is a super plus! --><br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/7030200/ The Human Intranet--Where Swarms and Humans Meet]<br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors]<br />
*[https://ieeexplore.ieee.org/document/8422472/ Hyperdimensional Modulation for Robust Low-Power Communications]<br />
*[https://iis-people.ee.ethz.ch/~arahimi/papers/TCAS17.pdf High-dimensional Computing as a Nanoscalable Paradigm]<br />
*[http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199794546.001.0001/acprof-9780199794546 How to Build a Brain]<br />
*[https://mitpress.mit.edu/books/sparse-distributed-memory Pentti Kanerva. 1988. Sparse Distributed Memory. MIT Press, Cambridge, MA, USA]<br />
<br />
=Available Projects=<br />
Here, we provide a short description of the related projects for you to see the scope of our work. The directions and details of the projects can be adapted based on your interests and skills. Please do not hesitate to come and talk to us for more details.<br />
<br />
=Online Brain-Computer Interfaces=<br />
<!-- [[File:BCI.png|thumb|center]] <br />
[[File:BCI-dryEEG.jpg|thumb|right]] --><br />
[[File:Emotiv-epoc-14-channel-mobile-eeg.jpg|thumb|right|200px]]<br />
<br />
<br />
===Short Description===<br />
Noninvasive brain–computer interfaces (BCI) and neuroprostheses aim to provide a communication and control channel based on recognizing the subject’s intentions from spatiotemporal neural activity, typically recorded by EEG electrodes. What makes this particularly challenging, however, is its susceptibility to errors in recognizing human intentions over time.<br />
<br />
In this project, your goal would be to develop an efficient, fast-learning hardware device that replaces traditional signal processing and classification methods by operating directly on raw electrode data in an online fashion. <br />
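A minimal sketch of what such online learning could look like with HD computing (illustrative Python; the real EEG encoder is replaced here by a synthetic per-intention signature, and the class labels are hypothetical):<br />

```python
import random

random.seed(1)
D = 10_000

def rand_hv():
    return [random.choice((-1, 1)) for _ in range(D)]

def sim(a, b):
    return sum(x * y for x, y in zip(a, b)) / D

# One integer accumulator per intention class: folding in a new encoded
# trial costs O(D), so the classifier keeps learning online, trial by trial.
prototypes = {"left": [0] * D, "right": [0] * D}

def update(label, trial_hv):
    acc = prototypes[label]
    for i, v in enumerate(trial_hv):
        acc[i] += v

def classify(trial_hv):
    return max(prototypes, key=lambda c: sim(prototypes[c], trial_hv))

# Stand-in for a real EEG encoder: each intention has a noisy signature.
left_sig, right_sig = rand_hv(), rand_hv()

def noisy(hv, p=0.2):
    return [-v if random.random() < p else v for v in hv]

for _ in range(5):  # a handful of online updates already suffices
    update("left", noisy(left_sig))
    update("right", noisy(right_sig))

print(classify(noisy(left_sig)))
```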
<br />
===Links===<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/EUSIPCO18.pdf Fast and Accurate Multiclass Inference for Motor Imagery BCIs Using Large Multiscale Temporal and Spectral Features]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/MONET17.pdf Hyperdimensional Computing for Blind and One-Shot Classification of EEG Error-Related Potentials]<br />
* [https://arxiv.org/abs/1812.05705 Exploring Embedding Methods in Binary Hyperdimensional Computing: A Case Study for Motor-Imagery based Brain-Computer Interfaces]<br />
<br />
<br />
===Related Projects===<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = BCI<br />
<br />
</DynamicPageList><br />
<br />
=Epilepsy Seizure Detection Device=<br />
[[File:Non-EEG Seizure.jpg|border|text-top|300px]]<br />
[[File:NeuroPace.jpg|border|text-top|400px]]<br />
<!-- Seizure-prediction.png --><br />
===Short Description===<br />
Seizure detection systems hold promise for improving the quality of life of patients with epilepsy, a condition that afflicts nearly 1% of the world's population.<br />
In this project, your goal would be to develop efficient techniques for EEG as well as non-EEG signals to detect an upcoming seizure on an ultra-low-power device. This covers a wide range of analog and digital techniques. The abilities of hyperdimensional computing for one-shot and online learning can come to the rescue.<br />
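The one-shot aspect can be sketched as follows (illustrative Python: the quantized windows below are synthetic stand-ins for real EEG features, and the trigram encoding is just one common HD temporal-encoding choice):<br />

```python
import random

random.seed(2)
D = 10_000

def rand_hv():
    return [random.choice((-1, 1)) for _ in range(D)]

def rho(hv, k):
    """Cyclic shift (permutation) encodes the position within a sequence."""
    return hv[-k:] + hv[:-k]

def bind(a, b):
    return [x * y for x, y in zip(a, b)]

def sim(a, b):
    return sum(x * y for x, y in zip(a, b)) / D

levels = {q: rand_hv() for q in range(4)}  # quantized signal amplitudes

def encode(window):
    """Sum of trigrams: bind three successive, position-permuted levels."""
    acc = [0] * D
    for t in range(len(window) - 2):
        g = bind(rho(levels[window[t]], 2),
                 bind(rho(levels[window[t + 1]], 1), levels[window[t + 2]]))
        acc = [s + x for s, x in zip(acc, g)]
    return [1 if x >= 0 else -1 for x in acc]

# One-shot learning: a single labeled window per class forms the prototype.
protos = {"seizure": encode([3, 3, 2, 3, 3, 2, 3, 3, 2, 3]),
          "normal":  encode([0, 1, 0, 0, 1, 0, 0, 1, 0, 0])}

query = encode([3, 3, 2, 3, 3, 2, 3, 2, 3, 3])  # unseen, seizure-like
print(max(protos, key=lambda c: sim(protos[c], query)))
```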
<br />
===Links===<br />
* [http://ieeg-swez.ethz.ch/ The SWEC-ETHZ iEEG Database and Algorithms]<br />
* [https://www.wysscenter.ch/project/epilepsy-monitoring-seizure-forecasts/ Epilepsy monitoring and seizure forecasts at Wyss Center]<br />
* [https://www.youtube.com/watch?time_continue=87&v=ouyPXkEud40 Controlling tinnitus with neurofeedback]<br />
<br />
===Related Projects===<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = Epilepsy<br />
<br />
</DynamicPageList><br />
<br />
=Extremely Resilient Hyperdimensional Processor=<br />
[[File:BrainChip.jpg|thumb|left]]<br />
<br />
===Short Description===<br />
The most important aspect of hyperdimensional (HD) computing for hardware realization is its robustness against noise and variations in the computing platform. The principles of HD computing allow the implementation of resilient controllers and state machines under extremely noisy conditions. This tolerance of faulty components and low signal-to-noise ratio (SNR) conditions is achieved by the brain-inspired properties of hypervectors: (pseudo)randomness, high dimensionality, and fully distributed holographic representations.<br />
<br />
In this project, your goal would be to design and develop an end-to-end robust HD processor with an extremely resilient controller based on the principles of HD computing, and to measure its resilience against noisy environments and faulty components.<br />
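The resilience claim is easy to illustrate numerically (a sketch, not a hardware model): flipping a large fraction of a hypervector's components degrades its similarity gracefully instead of destroying it.<br />

```python
import random

random.seed(3)
D = 10_000

def rand_hv():
    return [random.choice((-1, 1)) for _ in range(D)]

def sim(a, b):
    return sum(x * y for x, y in zip(a, b)) / D

def corrupt(hv, frac):
    """Flip a random fraction of the components (faulty cells, noise)."""
    out = hv[:]
    for i in random.sample(range(D), round(frac * D)):
        out[i] = -out[i]
    return out

a = rand_hv()
# Flipping 30% of all bits yields similarity 1 - 2*0.3 = 0.4: still far
# closer to the original than any unrelated hypervector (similarity ~0).
print(sim(a, corrupt(a, 0.30)))
print(sim(a, rand_hv()))
```

Because every component carries an equal, tiny share of the information, there is no single point of failure; this is the property a resilient HD controller exploits.<br />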
<br />
===Links===<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISLPED16.pdf A Robust and Energy-Efficient Classifier Using Brain-Inspired Hyperdimensional Computing]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/DAC18.pdf PULP-HD: Accelerating Brain-Inspired High-Dimensional Computing on a Parallel Ultra-Low Power Platform]<br />
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8216554 Associative Synthesis of Finite State Automata Model of a Controlled Object with Hyperdimensional Computing]<br />
<br />
=Flexible High-Density Sensors for Hand Gesture Recognition=<br />
<!-- [[File:Hyperdimensional_EMG.png|thumb|center]] --><br />
[[File:FlexEMG.png|thumb|right|500px]]<br />
<br />
<br />
===Short Description===<br />
Surface electromyography (EMG) signals are the superposition of the electrical activity of the underlying muscles during contractions.<br />
Wearable surface EMG devices have a wide range of applications in controlling upper-limb prostheses and in hand gesture recognition systems intended for consumer human-machine interaction. A high-density EMG electrode array covering the whole arm eases targeting the most desired muscle locations and copes with sensor misplacement.<br />
For robust gesture recognition from such EMG sensors, we rely on brain-inspired HD computing.<br />
<br />
In this project, your goal would be to develop new sensors and an RTL implementation of HD computing for one-shot gesture learning on an ultra-low-power device.<br />
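A sketch of how a multi-channel EMG sample could be encoded (illustrative Python; the 64-channel count, the 8 amplitude levels, and the gesture labels are hypothetical, and random patterns stand in for real recordings): each electrode ID is bound to its quantized amplitude, and all pairs are bundled into one record.<br />

```python
import random

random.seed(4)
D = 10_000
N_CH = 64  # hypothetical electrode count for a high-density array

def rand_hv():
    return [random.choice((-1, 1)) for _ in range(D)]

def bind(a, b):
    return [x * y for x, y in zip(a, b)]

def sim(a, b):
    return sum(x * y for x, y in zip(a, b)) / D

channel = [rand_hv() for _ in range(N_CH)]  # one ID vector per electrode
level = [rand_hv() for _ in range(8)]       # quantized EMG amplitudes

def encode(sample):
    """Bind each channel ID with its amplitude level, then bundle all."""
    acc = [0] * D
    for ch, q in enumerate(sample):
        for i, v in enumerate(bind(channel[ch], level[q])):
            acc[i] += v
    return [1 if x >= 0 else -1 for x in acc]

fist = [random.randrange(8) for _ in range(N_CH)]
open_hand = [random.randrange(8) for _ in range(N_CH)]
protos = {"fist": encode(fist), "open": encode(open_hand)}

# A reading with ten misread electrodes still matches the right gesture:
shifted = fist[:]
for ch in random.sample(range(N_CH), 10):
    shifted[ch] = random.randrange(8)
q_hv = encode(shifted)
print(max(protos, key=lambda g: sim(protos[g], q_hv)))
```

The same holographic distribution of information that tolerates misread electrodes here is what makes the scheme forgiving of sensor misplacement.<br />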
<br />
===Links===<br />
* [https://bwrc.eecs.berkeley.edu/sites/default/files/files/u2630/flexemg_v2_lq.mp4#t=2 Flexible EMG Demo]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISCAS2018.pdf Gesture Recognition System with Flexible High-Density Sensors] <br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/papers/ICRC16.pdf Hyperdimensional Biosignal Processing: A Case Study for EMG-based Hand Gesture Recognition (paper)]<br />
* [https://arxiv.org/abs/1901.00234 Adaptive EMG-based hand gesture recognition using hyperdimensional computing (paper)]<br />
* [https://github.com/abbas-rahimi/HDC-EMG Related Matlab code]<br />
<br />
==Robot Learning by Demonstration==<br />
[[File:Robot-VSA.png|thumb|left|Image source: Neubert et al, IROS 2016]]<br />
===Short Description===<br />
Robot learning from demonstration is a paradigm for enabling robots to autonomously perform new tasks. <br />
HD computing is a natural fit in this area, since it enables modeling the relation between a robot's sensory inputs and actuator outputs by learning from a few demonstrations. <br />
In this project, your goal would be to develop algorithms and implementations based on HD computing that enable a robot to learn from online demonstrations. <br />
Further, such an HD computing-based paradigm can be coupled to a brain-computer interface device, enabling control of a robot via EEG signals from the brain. This has a compelling application in neuroprosthetics that learn from a patient (see [https://www.youtube.com/watch?time_continue=26&v=jAtcVlTqxeA this] demonstration at EPFL).<br />
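The sensor-to-actuator mapping can be sketched as follows (illustrative Python in the spirit of Neubert et al.; the percept and action names are made up): each demonstration is a sensor-action binding, all demonstrations are bundled into a single "program" hypervector, and unbinding a percept recalls the demonstrated action.<br />

```python
import random

random.seed(5)
D = 10_000

def rand_hv():
    return [random.choice((-1, 1)) for _ in range(D)]

def bind(a, b):
    """Self-inverse for bipolar vectors: bind(bind(s, a), s) == a."""
    return [x * y for x, y in zip(a, b)]

def sim(a, b):
    return sum(x * y for x, y in zip(a, b)) / D

sensors = {"wall_left": rand_hv(), "wall_right": rand_hv(), "clear": rand_hv()}
actions = {"turn_right": rand_hv(), "turn_left": rand_hv(), "forward": rand_hv()}

# Each demonstration is one sensor-action binding; the whole reactive
# behavior is a single bundled "program" hypervector.
demos = [("wall_left", "turn_right"), ("wall_right", "turn_left"),
         ("clear", "forward")]
acc = [0] * D
for s, a in demos:
    for i, v in enumerate(bind(sensors[s], actions[a])):
        acc[i] += v
program = [1 if x >= 0 else -1 for x in acc]

def act(percept):
    """Unbind the percept to get a noisy action, then clean it up."""
    noisy_action = bind(program, sensors[percept])
    return max(actions, key=lambda a: sim(actions[a], noisy_action))

print(act("wall_left"))
```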
<br />
===Links===<br />
* [https://actu.epfl.ch/news/when-the-neuroprosthetics-learn-from-the-patient-5/ When the neuroprosthetics learn from the patient]<br />
* [https://www.tu-chemnitz.de/etit/proaut/publications/IROS2016_neubert.pdf Learning Vector Symbolic Architectures for Reactive Robot Behaviours] <br />
* [https://www.aaai.org/ocs/index.php/WS/AAAIW13/paper/download/7075/6578 Learning Behavior Hierarchies via High-Dimensional Sensor Projection (paper)]<br />
<br />
=Smart Eyeglass for Drones=<br />
[[File:Jins_6axis.png|thumb|text-top|right|250px]]<br />
[[File:Jins_EOG.png|thumb|text-top|right|250px]]<br />
<br />
===Short Description===<br />
This project plans to deploy and build upon a new breed of eyewear that lets you look inside yourself, instead of just at what is in front of you. These insights help you see deep within yourself by showing shifts in your emotional state, your activity logs, and your health. The device currently provides 6-axis motion sensors as well as EOG sensors, which collectively allow the recognition of body movements, eye movements, and mental state. In this project, your goal is to interface with this device and extend it with other sensors to create novel machine learning applications, e.g., controlling the movements of our [http://iis-projects.ee.ethz.ch/index.php/Energy_Efficient_Autonomous_UAVs nano-size quadrotor].<br />
<br />
===Links===<br />
* [https://jins-meme.com/en/concept/ JINS Meme Smartglass]<br />
* [https://www.youtube.com/watch?v=Om_F0uyfjyc A game application of smartglass]<br />
<br />
=Hyperdimensional Affective Computing=<br />
[[File:Emotion-recognition.jpg|border|super|200px]]<br />
[[File:Emotions-on-arousal-valence-space.jpg|border|super|300px]]<br />
<br />
===Short Description===<br />
Affective computing (sometimes called artificial emotional intelligence) is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. We focus on emotion recognition and interpretation. Emotion is a subjective mental state caused by specific events, usually accompanied by characteristic behaviors and involuntary physiological changes. Therefore, multi-channel physiological signals (e.g., GSR, ECG, EEG, EOG), which can be collected easily and continuously by wearable sensors, are good inputs for emotion analysis. However, due to the huge amount of training data needed for a high-quality machine learning model, energy-efficiency constraints and robustness issues become major performance bottlenecks, especially for wearable devices. To overcome this, HD computing can come to the rescue by providing a low-energy, robust, and fast-learning computational paradigm.<br />
<br />
In this project, your goal would be to develop an efficient and robust learning method based on hyperdimensional spaces to improve accuracy and reduce energy consumption.<br />
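Multimodal fusion in HD computing can be sketched like this (illustrative Python; the per-modality feature encoders are replaced by synthetic signatures, and the two emotion labels are hypothetical): each modality's features are bound to a modality key and bundled into one record, which degrades gracefully when a modality is missing.<br />

```python
import random

random.seed(6)
D = 10_000

def rand_hv():
    return [random.choice((-1, 1)) for _ in range(D)]

def bind(a, b):
    return [x * y for x, y in zip(a, b)]

def bundle(hvs):
    return [1 if sum(col) >= 0 else -1 for col in zip(*hvs)]

def sim(a, b):
    return sum(x * y for x, y in zip(a, b)) / D

modality = {m: rand_hv() for m in ("GSR", "ECG", "EEG")}
# Stand-ins for per-modality feature encoders (synthetic signatures).
feat = {(e, m): rand_hv() for e in ("calm", "stressed") for m in modality}

def encode(emotion, present=("GSR", "ECG", "EEG")):
    """Fuse whichever modalities are available into one record."""
    return bundle([bind(modality[m], feat[(emotion, m)]) for m in present])

protos = {e: encode(e) for e in ("calm", "stressed")}

# Even with the EEG channel missing, fusing the remaining modalities
# still matches the right emotion prototype:
q_hv = encode("stressed", present=("GSR", "ECG"))
print(max(protos, key=lambda e: sim(protos[e], q_hv)))
```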
<br />
===Links===<br />
* [https://www.research-collection.ethz.ch/handle/20.500.11850/315807 Hyperdimensional Computing-based Multimodality Emotion Recognition with Physiological Signals]<br />
<br />
=More Projects=<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = Human Intranet<br />
category = Hyper-dimensional Computing<br />
<br />
</DynamicPageList><br />
<br />
=Completed Projects=<br />
These are projects that were recently completed: <br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = Human Intranet<br />
<br />
</DynamicPageList><br />
<br />
=Where to find us=<br />
* [https://iis-people.ee.ethz.ch/~herschmi/ Michael Hersche]<br />
** '''e-mail''': [mailto:herschmi@iis.ee.ethz.ch herschmi@iis.ee.ethz.ch]<br />
** ETZ J76.2<br />
* [https://iis-people.ee.ethz.ch/~xiaywang/ Xiaying Wang]<br />
** '''e-mail''': [mailto:xiaywang@iis.ee.ethz.ch xiaywang@iis.ee.ethz.ch]<br />
** ETZ J68.2<br />
* [https://iis-people.ee.ethz.ch/~arahimi/ Dr. Abbas Rahimi]<br />
** '''e-mail''': [mailto:abbas@iis.ee.ethz.ch abbas@iis.ee.ethz.ch]<br />
** ETZ J85<br />
* [http://www.iis.ee.ethz.ch/people/person-detail.html?persid=194234 Prof. Luca Benini]<br />
** '''e-mail''': [mailto:lbenini@iis.ee.ethz.ch lbenini@iis.ee.ethz.ch]<br />
** ETZ J84</div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Classification_of_Evoked_Local-Field_Potentials_in_Rat_Barrel_Cortex_using_Hyper-dimensional_Computing&diff=5201Classification of Evoked Local-Field Potentials in Rat Barrel Cortex using Hyper-dimensional Computing2020-06-22T15:30:01Z<p>Xiaywang: </p>
<hr />
<div>[[File:iis-project-description.jpg|400px|right|thumb]]<br />
<br />
==Description==<br />
One of the most ambitious goals of neuroscience and its neuroprosthetic applications is to interface intelligent electronic devices with the biological brain to cure neurological diseases. Neural coding is the branch of neuroscience that investigates the relationship between stimuli and neuronal responses. This emerging research field builds on our growing understanding of brain circuits and on recent technological advances in the miniaturization of implantable multielectrode arrays (MEAs) that record brain signals at high spatio-temporal resolution. Data processing is needed to decode useful information from the recorded neural activity, to better understand the function of the underlying neural circuits and, in perspective, to operate neuroprosthetic devices. In this context, artificial intelligence running on low-power embedded digital processors is a very promising starting point towards real-time decoding of cerebral activity for brain-machine interfacing and neuroprosthetic applications [1].<br />
<br />
Brain-inspired hyperdimensional computing (HDC) explores the emulation of cognition by computing with hypervectors as an alternative to computing with numbers. HDC has proven to be promising for energy-efficient computing applied to biosignal classification [2].<br />
<br />
This project focuses on processing data of evoked Local Field Potentials (LFPs) recorded from the rat barrel cortex using a miniaturized 16-by-16 MEA while stimulating the principal whisker. The sensor has been implanted in vivo, and 2D images have been acquired from different cortical depths. The whisker is deflected by means of a piezoelectric bender using various stimulation amplitudes. The aim of the project is to assess the performance of HDC in classifying the different external stimuli applied to the animal.<br />
<br />
<br />
The task includes the following main sub-points:<br />
<ul><li> Understand the LFP basics and interpret the dataset.</li><br />
<li> Develop (in high-level Python or Matlab) machine learning or deep learning algorithms to classify the stimulation amplitudes or to detect the signal onset. </li><br />
<li> Map the algorithm onto the hardware (C programming on PULP, parallel computing).</li><br />
<li> Conduct in-vivo experiments to validate the method in a realistic setting.</li></ul><br />
<br />
The task is flexible and will be adapted to the student's skills and interests.<br />
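As a toy illustration of the amplitude-classification sub-point (a sketch, not the project's method; the number of stimulation levels is hypothetical): HDC commonly represents ordered quantities with ''level hypervectors'', where each amplitude's vector is derived from its neighbor's by flipping a fixed slice of components, so nearby amplitudes get similar codes.<br />

```python
import random

random.seed(7)
D = 10_000
N_LEVELS = 5  # hypothetical number of stimulation amplitudes

def rand_hv():
    return [random.choice((-1, 1)) for _ in range(D)]

def sim(a, b):
    return sum(x * y for x, y in zip(a, b)) / D

# Level hypervectors: each amplitude's vector is obtained from its
# neighbor by flipping a fixed number of components, so the code
# inherits the ordering of the amplitudes.
flip_per_step = D // (2 * (N_LEVELS - 1))  # 1250 components per step
levels = [rand_hv()]
for _ in range(N_LEVELS - 1):
    nxt = levels[-1][:]
    for i in random.sample(range(D), flip_per_step):
        nxt[i] = -nxt[i]
    levels.append(nxt)

# Similarity decays smoothly with the distance between amplitudes:
print(sim(levels[0], levels[1]))  # adjacent amplitudes stay close
print(sim(levels[0], levels[4]))  # distant amplitudes drift apart
```

A nearest-prototype classifier built on such codes then gets confusions mostly between neighboring amplitudes, which is the desired behavior for graded stimuli.<br />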
<br />
<br />
===Status: Available ===<br />
* Semester project<br />
: Supervisors: [[:User:xiaywang|Xiaying Wang]], [[:User:herschmi|Michael Hersche]]<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Prerequisites===<br />
: Knowledge in Machine Learning (preprocessing, feature extraction, classifier, supervised-learning)<br />
: Embedded system programming<br />
: Python, C/C++, Matlab<br />
<br />
===Literature===<br />
* [https://ieeexplore.ieee.org/abstract/document/8584830] X. Wang, et al., Embedded Classification of Local Field Potentials Recorded from Rat Barrel Cortex with Implanted Multi-Electrode Array, 2018<br />
* [https://ieeexplore.ieee.org/document/8450511] A. Rahimi, et al., Hyperdimensional biosignal processing: A case study for EMG-based hand gesture recognition, 2016<br />
<br />
<br />
===IIS Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/huang.en.html Qiuting Huang] ---><br />
<!-- : [http://lne.ee.ethz.ch/en/general-information/people/professor.html Vanessa Wood] ---><br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/mluisier.en.html Mathieu Luisier] ---><br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/schenk.en.html Andreas Schenk] ---><br />
<!-- : [http://www.dz.ee.ethz.ch/en/general-information/about/staff/uid/364.html Hubert Kaeslin] ---><br />
[[#top|↑ top]]<br />
<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Design Review]]'''<br />
* '''[[Coding Guidelines]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
<br />
[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Xiaywang]]<br />
[[Category:Herschmi]]<br />
[[Category:Available]]<br />
[[Category:Hyper-dimensional Computing]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
[[Category:Analog]]<br />
[[Category:Nano-TCAD]]<br />
[[Category:Nano Electronics]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Human_Intranet&diff=5200Human Intranet2020-06-22T15:28:09Z<p>Xiaywang: /* More Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Human Intranet]]<br />
[[Category:ASIC]]<br />
[[Category:FPGA]]<br />
[[Category:In progress]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
<br />
__NOTOC__<br />
=What is Human Intranet?=<br />
[[File:HI.png|thumb|right|450px]]<br />
The world around us is getting a lot smarter quickly: virtually every single component of our daily living environment is being equipped with sensors, actuators, processing, and connection into a network that will soon count billions of nodes and trillions of sensors. These devices only interact with the human through the traditional input and output channels. Hence, they only indirectly communicate with our brain—through our five sense modalities—forming two separate computing systems: biological versus physical. It could be made a lot more effective if a direct high bandwidth link existed between the two systems, allowing them to truly collaborate with each other and to offer opportunities for enhanced functionality that would otherwise be hard to accomplish. The emergence of miniaturized sense, compute and actuate devices as well as interfaces that are form-fitted to the human body opens the door for a symbiotic convergence between biological function and physical computing.<br />
<br />
Human Intranet is an open, scalable platform that seamlessly integrates an ever-increasing number of sensor, actuation, computation, storage, communication and energy nodes located on, in, or around the human body acting in symbiosis with the functions provided by the body itself. Human Intranet presents a system vision in which, for example, disease would be treated by chronically measuring biosignals deep in the body, or by providing targeted, therapeutic interventions that respond on demand and in situ. To gain a holistic view of a person’s health, these sensors and actuators must communicate and collaborate with each other. Most of such systems prototyped or envisioned today serve to address deficiencies in the human sensory or motor control systems due to birth defects, illnesses, or accidents (e.g., invasive brain-machine interfaces, cochlear implants, artificial retinas, etc.). While all these systems target defects, one can easily imagine that this could lead to many types of enhancement and/or enable direct interaction with the environment: to make us humans smarter!<br />
<br />
Here, in our projects, we mainly focus on '''sensor, computation, communication, and emerging storage''' aspects to develop very efficient closed-loop sense-interpret-actuate systems, enabling distributed autonomous behavior. For example, to design the ''brain'' of our physical computing (i.e., the compute/interpret component), we rely on computing with ultra-wide words (e.g., 10,000 bits) that eases interfacing with various sensor modalities and actuators. This novel computing paradigm is called hyperdimensional (HD) computing that is inspired from the very size of the biological brain’s circuits: assuming 1 bit per synapse, the brain is made up of more than 24 billion of such ultra-wide words. You can watch some of our demos:<br />
* [https://bwrc.eecs.berkeley.edu/sites/default/files/files/u2630/flexemg_v2_lq.mp4#t=2 Video1]<br />
* [https://www.youtube.com/watch?time_continue=9&v=vTQGMQ6QaJE Video2]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISSCC18-Demo.pdf PDF] <br />
<br />
You can also find a collection of complemented projects with source codes/datasets here:<br />
* [https://github.com/HyperdimensionalComputing/collection Github link]<br />
<br />
==Prerequisites and Focus==<br />
If you are an M.S. student at the ETHZ, typically there is no prerequisite. You can come and talk to us and we adapt the projects based on your skills. The scope and focus of projects are wide. You can choose to work on:<br />
<br />
* '''Efficient hardware architectures in emerging technologies''' (e.g., [https://www.zurich.ibm.com/sto/memory/ the IBM computational memory])<br />
* '''System-level design and testing''' <br />
* '''Sensory interfaces''' (analog and digital)<br />
* '''FPGA prototyping, ASIC, and accelerators''' (SystemVerilog/ VHDL)<br />
* '''Exploring new Human Intranet/IoT applications''' (High-level Embedded Programming) <br />
* '''Algorithm design and optimizations''' (Matlab/ Python)<br />
<br />
<br />
<!-- <br />
* '''Theory''' of learning systems including HD computing, Hidden Markov Model (HMM), and clustering algorithms<br />
Overall, our projects cover '''algorithmic, hardware/software, and system level''' design and developments. <br />
However, if you have background in signal processing, VLSI or linear algebra is a super plus! --><br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/7030200/ The Human Intranet--Where Swarms and Humans Meet]<br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors]<br />
*[https://ieeexplore.ieee.org/document/8422472/ Hyperdimensional Modulation for Robust Low-Power Communications]<br />
*[https://iis-people.ee.ethz.ch/~arahimi/papers/TCAS17.pdf High-dimensional Computing as a Nanoscalable Paradigm]<br />
*[http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199794546.001.0001/acprof-9780199794546 How to Build a Brain]<br />
*[https://mitpress.mit.edu/books/sparse-distributed-memory Pentti Kanerva. 1988. Sparse Distributed Memory. MIT Press, Cambridge, MA, USA]<br />
<br />
=Available Projects=<br />
Here, we provide a short description of the related projects for you to see the scope of our work. The directions and details of the projects can be adapted based on your interests and skills. Please do not hesitate to come and talk to us for more details.<br />
<br />
=Online Brain-Computer Interfaces=<br />
<!-- [[File:BCI.png|thumb|center]] <br />
[[File:BCI-dryEEG.jpg|thumb|right]] --><br />
[[File:Emotiv-epoc-14-channel-mobile-eeg.jpg|thumb|right|200px]]<br />
<br />
<br />
===Short Description===<br />
Noninvasive brain–computer interfaces (BCI) and neuroprostheses aim to provide a communication and control channel based on the recognition of the subject’s intentions from spatiotemporal neural activity typically recorded by EEG electrodes. What makes it particularly challenging, however, is its susceptibility to errors over time in the recognition of human intentions.<br />
<br />
In this project, your goal would be to develop an efficient and fast learning hardware device that replaces the traditional signal processing and classification methods by directly operating with raw data from electrodes in an online fashion. <br />
<br />
===Links===<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/EUSIPCO18.pdf Fast and Accurate Multiclass Inference for Motor Imagery BCIs Using Large Multiscale Temporal and Spectral Features]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/MONET17.pdf Hyperdimensional Computing for Blind and One-Shot Classification of EEG Error-Related Potentials]<br />
* [https://arxiv.org/abs/1812.05705 Exploring Embedding Methods in Binary Hyperdimensional Computing: A Case Study for Motor-Imagery based Brain-Computer Interfaces]<br />
<br />
<br />
===Related Projects===<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = BCI<br />
<br />
</DynamicPageList><br />
<br />
=Epilepsy Seizure Detection Device=<br />
[[File:Non-EEG Seizure.jpg|border|text-top|300px]]<br />
[[File:NeuroPace.jpg|border|text-top|400px]]<br />
<!-- Seizure-prediction.png --><br />
===Short Description===<br />
Seizure detection systems hold promise for improving the quality of life for patients with epilepsy that afflicts nearly 1% of the world's population.<br />
In this project, your goal would be to develop efficient techniques for EEG as well as non-EEG signals to detect an upcoming seizure in an ultra-low-power device. This covers a wide range of analog and digital techniques. The abilities of hyperdimensional computing for one-shot and online learning can come to rescue.<br />
<br />
===Links===<br />
* [http://ieeg-swez.ethz.ch/ The SWEC-ETHZ iEEG Database and Algorithms]<br />
* [https://www.wysscenter.ch/project/epilepsy-monitoring-seizure-forecasts/ Epilepsy monitoring and seizure forecasts at Wyss Center]<br />
* [https://www.youtube.com/watch?time_continue=87&v=ouyPXkEud40 Controlling tinnitus with neurofeedback]<br />
<br />
===Related Projects===<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = Epilepsy<br />
<br />
</DynamicPageList><br />
<br />
=Extremely Resilient Hyperdimensional Processor=<br />
[[File:BrainChip.jpg|thumb|left]]<br />
<br />
===Short Description===<br />
The most important aspect of hyperdimensional (HD) computing, for hardware realization, is its robustness against noise and variations in the computing platforms. Principles of HD computing allows to implement resilient controllers and state machines for extreme noisy conditions. Its tolerance in operating with faulty components and low signal-to-noise ratio (SNR) conditions is achieved by brain-inspired properties of hypervectors: (pseudo)randomness, high-dimensionality, and fully distributed holographic representations.<br />
<br />
In this project, your goal would be to design and develop an end-to-end robust HD processor with extremely resilient controller based on principles of HD computing, and measure its resiliency against noisy environment and faulty components.<br />
<br />
===Links===<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISLPED16.pdf A Robust and Energy-Efficient Classifier Using Brain-Inspired Hyperdimensional Computing]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/DAC18.pdf PULP-HD: Accelerating Brain-Inspired High-Dimensional Computing on a Parallel Ultra-Low Power Platform]<br />
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8216554 Associative Synthesis of Finite State Automata Model of a Controlled Object with Hyperdimensional Computing]<br />
<br />
=Flexible High-Density Sensors for Hand Gesture Recognition=<br />
<!-- [[File:Hyperdimensional_EMG.png|thumb|center]] --><br />
[[File:FlexEMG.png|thumb|right|500px]]<br />
<br />
<br />
===Short Description===<br />
The surface electromyography (EMG) signals are the superposition of the electrical activity of underneath muscles when contractions occur.<br />
Wearable surface EMG devices have a wide range of applications in controlling the upper limb prostheses and hand gesture recognition systems intended for consumer human-machine interaction. High-density EMG electrode array covering the whole arm can ease targeting the most desired muscle locations and cope the issues with sensors misplacement.<br />
For robust gesture recognition from such EMG sensors, we rely on brain-inspired HD computing.<br />
<br />
In this project, your goal would be to develop new sensors and RTL implementation of HD computing for one-shot gesture learning in an ultra low-power device.<br />
<br />
===Links===<br />
* [https://bwrc.eecs.berkeley.edu/sites/default/files/files/u2630/flexemg_v2_lq.mp4#t=2 Flexible EMG Demo]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISCAS2018.pdf Gesture Recognition System with Flexible High-Density Sensors] <br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/papers/ICRC16.pdf Hyperdimensional Biosignal Processing: A Case Study for EMG-based Hand Gesture Recognition (paper)]<br />
* [https://arxiv.org/abs/1901.00234 Adaptive EMG-based hand gesture recognition using hyperdimensional computing (paper)]<br />
* [https://github.com/abbas-rahimi/HDC-EMG Related Matlab code]<br />
<br />
==Robot Learning by Demonstration==<br />
[[File:Robot-VSA.png|thumb|left|Image source: Neubert et al, IROS 2016]]<br />
===Short Description===<br />
Robot learning from demonstration is a paradigm for enabling robots to autonomously perform new tasks. <br />
HD computing is a natural fit in this area since it enables modeling the relation between the sensory inputs and actuator outputs of a robot by learning from a few demonstrations. <br />
In this project, your goal would be to develop algorithms and implementations based on HD computing that enable a robot to learn from online demonstrations. <br />
Further, such an HD computing-based paradigm can be coupled to a brain-computer interface device, enabling control of a robot via EEG signals from the brain. This has a compelling application in neuroprosthetics, where the system learns from the patient (see [https://www.youtube.com/watch?time_continue=26&v=jAtcVlTqxeA this] demonstration at EPFL).<br />
<br />
===Links===<br />
* [https://actu.epfl.ch/news/when-the-neuroprosthetics-learn-from-the-patient-5/ When the neuroprosthetics learn from the patient]<br />
* [https://www.tu-chemnitz.de/etit/proaut/publications/IROS2016_neubert.pdf Learning Vector Symbolic Architectures for Reactive Robot Behaviours] <br />
* [https://www.aaai.org/ocs/index.php/WS/AAAIW13/paper/download/7075/6578 Learning Behavior Hierarchies via High-Dimensional Sensor Projection (paper)]<br />
<br />
=Smart Eyeglass for Drones=<br />
[[File:Jins_6axis.png|thumb|text-top|right|250px]]<br />
[[File:Jins_EOG.png|thumb|text-top|right|250px]]<br />
<br />
===Short Description===<br />
This project plans to deploy and build upon a new breed of eyewear that lets you look inside yourself, instead of just at what is in front of you. These insights help you see deep within yourself by showing shifts in your emotional state, your activity logs, and your health. The device currently provides 6-axis motion sensors as well as EOG sensors, which collectively allow recognizing body movements, eye movements, and mental state. In this project, your goal is to interface with this device and extend it with other sensors to create novel machine learning applications, e.g., controlling the movements of our [http://iis-projects.ee.ethz.ch/index.php/Energy_Efficient_Autonomous_UAVs nano-size quadrotor].<br />
<br />
===Links===<br />
* [https://jins-meme.com/en/concept/ JINS Meme Smartglass]<br />
* [https://www.youtube.com/watch?v=Om_F0uyfjyc A game application of smartglass]<br />
<br />
=Hyperdimensional Affective Computing=<br />
[[File:Emotion-recognition.jpg|border|super|200px]]<br />
[[File:Emotions-on-arousal-valence-space.jpg|border|super|300px]]<br />
<br />
===Short Description===<br />
Affective computing (sometimes called artificial emotional intelligence) is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. We focus on emotion recognition and interpretation. Emotion is a subjective mental state caused by specific events, usually accompanied by characteristic behaviors and involuntary physiological changes. Multi-channel physiological signals (e.g., GSR, ECG, EEG, EOG), which can be collected easily and continuously by wearable sensors, are therefore good inputs for emotion analysis. However, because high-quality machine learning models need a huge amount of training data, energy-efficiency constraints and robustness issues become major performance bottlenecks, especially for wearable devices. Here, HD computing can come to the rescue by providing a low-energy, robust, and fast-learning computational paradigm.<br />
<br />
In this project, your goal would be to develop an efficient and robust learning method based on hyperdimensional spaces to improve accuracy and reduce energy consumption.<br />
<br />
===Links===<br />
* [https://www.research-collection.ethz.ch/handle/20.500.11850/315807 Hyperdimensional Computing-based Multimodality Emotion Recognition with Physiological Signals]<br />
<br />
=More Projects=<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = Human Intranet<br />
<br />
</DynamicPageList><br />
<br />
=Completed Projects=<br />
These are projects that were recently completed: <br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = Human Intranet<br />
<br />
</DynamicPageList><br />
<br />
=Where to find us=<br />
* [https://iis-people.ee.ethz.ch/~herschmi/ Michael Hersche]<br />
** '''e-mail''': [mailto:herschmi@iis.ee.ethz.ch herschmi@iis.ee.ethz.ch]<br />
** ETZ J76.2<br />
* [https://iis-people.ee.ethz.ch/~xiaywang/ Xiaying Wang]<br />
** '''e-mail''': [mailto:xiaywang@iis.ee.ethz.ch xiaywang@iis.ee.ethz.ch]<br />
** ETZ J68.2<br />
* [https://iis-people.ee.ethz.ch/~arahimi/ Dr. Abbas Rahimi]<br />
** '''e-mail''': [mailto:abbas@iis.ee.ethz.ch abbas@iis.ee.ethz.ch]<br />
** ETZ J85<br />
* [http://www.iis.ee.ethz.ch/people/person-detail.html?persid=194234 Prof. Luca Benini]<br />
** '''e-mail''': [mailto:lbenini@iis.ee.ethz.ch lbenini@iis.ee.ethz.ch]<br />
** ETZ J84</div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Human_Intranet&diff=5199Human Intranet2020-06-22T15:27:55Z<p>Xiaywang: /* More Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Human Intranet]]<br />
[[Category:ASIC]]<br />
[[Category:FPGA]]<br />
[[Category:In progress]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
<br />
__NOTOC__<br />
=What is Human Intranet?=<br />
[[File:HI.png|thumb|right|450px]]<br />
The world around us is quickly getting a lot smarter: virtually every component of our daily living environment is being equipped with sensors, actuators, processing, and a connection into a network that will soon count billions of nodes and trillions of sensors. These devices interact with the human only through the traditional input and output channels. Hence, they communicate with our brain only indirectly, through our five sense modalities, forming two separate computing systems: biological versus physical. This interaction could be made far more effective if a direct, high-bandwidth link existed between the two systems, allowing them to truly collaborate with each other and offering opportunities for enhanced functionality that would otherwise be hard to accomplish. The emergence of miniaturized sense, compute, and actuate devices, as well as interfaces that are form-fitted to the human body, opens the door for a symbiotic convergence between biological function and physical computing.<br />
<br />
The Human Intranet is an open, scalable platform that seamlessly integrates an ever-increasing number of sensor, actuation, computation, storage, communication, and energy nodes located on, in, or around the human body, acting in symbiosis with the functions provided by the body itself. The Human Intranet presents a system vision in which, for example, disease would be treated by chronically measuring biosignals deep in the body, or by providing targeted, therapeutic interventions that respond on demand and in situ. To gain a holistic view of a person's health, these sensors and actuators must communicate and collaborate with each other. Most such systems prototyped or envisioned today serve to address deficiencies in the human sensory or motor control systems due to birth defects, illnesses, or accidents (e.g., invasive brain-machine interfaces, cochlear implants, artificial retinas, etc.). While all these systems target defects, one can easily imagine that this line of work could lead to many types of enhancement and/or enable direct interaction with the environment: to make us humans smarter!<br />
<br />
Here, in our projects, we mainly focus on the '''sensor, computation, communication, and emerging storage''' aspects to develop very efficient closed-loop sense-interpret-actuate systems, enabling distributed autonomous behavior. For example, to design the ''brain'' of our physical computing (i.e., the compute/interpret component), we rely on computing with ultra-wide words (e.g., 10,000 bits), which eases interfacing with various sensor modalities and actuators. This novel computing paradigm, called hyperdimensional (HD) computing, is inspired by the very size of the biological brain's circuits: assuming 1 bit per synapse, the brain is made up of more than 24 billion such ultra-wide words. You can watch some of our demos:<br />
* [https://bwrc.eecs.berkeley.edu/sites/default/files/files/u2630/flexemg_v2_lq.mp4#t=2 Video1]<br />
* [https://www.youtube.com/watch?time_continue=9&v=vTQGMQ6QaJE Video2]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISSCC18-Demo.pdf PDF] <br />
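As a concrete illustration of the paradigm described above, the core HD operations (drawing random hypervectors, binding, bundling, and similarity search) fit in a few lines. The sketch below is illustrative Python using the binary/XOR variant of HD computing; the 10,000-bit dimensionality follows the text, while the seed and the number of vectors are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality, as in the text

def random_hv():
    """Draw a dense binary hypervector uniformly at random."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    """Bind two hypervectors (XOR); the result is dissimilar to both inputs."""
    return a ^ b

def bundle(hvs):
    """Bundle hypervectors by bitwise majority; the result stays similar to each input."""
    return (np.sum(hvs, axis=0) * 2 > len(hvs)).astype(np.uint8)

def hamming(a, b):
    """Normalized Hamming distance: about 0.5 for unrelated hypervectors."""
    return np.count_nonzero(a != b) / D

x, y = random_hv(), random_hv()
m = bundle([x, y, random_hv()])
print(f"unrelated:   {hamming(x, y):.3f}")           # close to 0.5 (quasi-orthogonal)
print(f"bound vs x:  {hamming(bind(x, y), x):.3f}")  # close to 0.5
print(f"bundle vs x: {hamming(m, x):.3f}")           # well below 0.5
```

Quasi-orthogonality of random hypervectors is what makes the representation holographic: any two independently drawn 10,000-bit vectors differ in roughly half their positions, so a bundled vector remains recognizably close to its constituents while staying far from everything else.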
<br />
You can also find a collection of complemented projects with source codes/datasets here:<br />
* [https://github.com/HyperdimensionalComputing/collection Github link]<br />
<br />
==Prerequisites and Focus==<br />
If you are an M.S. student at ETH Zurich, there is typically no prerequisite: come and talk to us, and we will adapt the project to your skills. The scope and focus of the projects are wide. You can choose to work on:<br />
<br />
* '''Efficient hardware architectures in emerging technologies''' (e.g., [https://www.zurich.ibm.com/sto/memory/ the IBM computational memory])<br />
* '''System-level design and testing''' <br />
* '''Sensory interfaces''' (analog and digital)<br />
* '''FPGA prototyping, ASIC, and accelerators''' (SystemVerilog/ VHDL)<br />
* '''Exploring new Human Intranet/IoT applications''' (High-level Embedded Programming) <br />
* '''Algorithm design and optimizations''' (Matlab/ Python)<br />
<br />
<br />
<!-- <br />
* '''Theory''' of learning systems including HD computing, Hidden Markov Model (HMM), and clustering algorithms<br />
Overall, our projects cover '''algorithmic, hardware/software, and system level''' design and developments. <br />
However, if you have background in signal processing, VLSI or linear algebra is a super plus! --><br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/7030200/ The Human Intranet--Where Swarms and Humans Meet]<br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors]<br />
*[https://ieeexplore.ieee.org/document/8422472/ Hyperdimensional Modulation for Robust Low-Power Communications]<br />
*[https://iis-people.ee.ethz.ch/~arahimi/papers/TCAS17.pdf High-dimensional Computing as a Nanoscalable Paradigm]<br />
*[http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199794546.001.0001/acprof-9780199794546 How to Build a Brain]<br />
*[https://mitpress.mit.edu/books/sparse-distributed-memory Pentti Kanerva. 1988. Sparse Distributed Memory. MIT Press, Cambridge, MA, USA]<br />
<br />
=Available Projects=<br />
Here, we provide short descriptions of the related projects so that you can see the scope of our work. The directions and details of the projects can be adapted based on your interests and skills. Please do not hesitate to come and talk to us for more details.<br />
<br />
=Online Brain-Computer Interfaces=<br />
<!-- [[File:BCI.png|thumb|center]] <br />
[[File:BCI-dryEEG.jpg|thumb|right]] --><br />
[[File:Emotiv-epoc-14-channel-mobile-eeg.jpg|thumb|right|200px]]<br />
<br />
<br />
===Short Description===<br />
Noninvasive brain–computer interfaces (BCI) and neuroprostheses aim to provide a communication and control channel based on the recognition of the subject's intentions from spatiotemporal neural activity, typically recorded by EEG electrodes. What makes BCIs particularly challenging, however, is their susceptibility to errors over time in the recognition of human intentions.<br />
<br />
In this project, your goal would be to develop an efficient and fast learning hardware device that replaces the traditional signal processing and classification methods by directly operating with raw data from electrodes in an online fashion. <br />
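To make the fast-learning flow concrete, here is a minimal sketch of an HD nearest-prototype classifier, assuming trials have already been encoded into binary hypervectors. The three motor-imagery classes, the 30% bit-flip noise model, and all sizes are synthetic stand-ins, not real EEG:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000  # hypervector dimensionality

def flip(hv, p):
    """Simulate trial-to-trial variability by flipping a fraction p of the bits."""
    return hv ^ (rng.random(D) < p).astype(np.uint8)

# Hypothetical stand-ins for encoded EEG trials of three motor-imagery classes.
templates = rng.integers(0, 2, (3, D), dtype=np.uint8)
train = [[flip(t, 0.3) for _ in range(10)] for t in templates]

# "Training" is just bundling each class's trials into a prototype (bitwise majority).
prototypes = np.array([(np.sum(trials, axis=0) * 2 > len(trials)).astype(np.uint8)
                       for trials in train])

def classify(hv):
    """Associative-memory lookup: return the nearest prototype in Hamming distance."""
    return int(np.argmin(np.count_nonzero(prototypes != hv, axis=1)))

print([classify(flip(t, 0.3)) for t in templates])  # → [0, 1, 2]
```

Even with 30% of the bits corrupted per trial, ten training trials per class suffice here, because disagreement between unrelated 10,000-bit vectors concentrates tightly around 50%.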
<br />
===Links===<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/EUSIPCO18.pdf Fast and Accurate Multiclass Inference for Motor Imagery BCIs Using Large Multiscale Temporal and Spectral Features]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/MONET17.pdf Hyperdimensional Computing for Blind and One-Shot Classification of EEG Error-Related Potentials]<br />
* [https://arxiv.org/abs/1812.05705 Exploring Embedding Methods in Binary Hyperdimensional Computing: A Case Study for Motor-Imagery based Brain-Computer Interfaces]<br />
<br />
<br />
===Related Projects===<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = BCI<br />
<br />
</DynamicPageList><br />
<br />
=Epilepsy Seizure Detection Device=<br />
[[File:Non-EEG Seizure.jpg|border|text-top|300px]]<br />
[[File:NeuroPace.jpg|border|text-top|400px]]<br />
<!-- Seizure-prediction.png --><br />
===Short Description===<br />
Seizure detection systems hold promise for improving the quality of life of patients with epilepsy, which afflicts nearly 1% of the world's population.<br />
In this project, your goal would be to develop efficient techniques for EEG as well as non-EEG signals to detect an upcoming seizure on an ultra-low-power device. This covers a wide range of analog and digital techniques. The one-shot and online learning abilities of hyperdimensional computing can come to the rescue.<br />
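One-shot and online learning in HD computing amounts to keeping an integer accumulator per class and binarizing it on demand. The sketch below is a toy illustration on synthetic hypervectors; the "ictal"/"interictal" templates and the 30% noise level are assumptions, not real EEG encodings:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10_000

def flip(hv, p):
    """Model window-to-window variability by flipping a fraction p of the bits."""
    return hv ^ (rng.random(D) < p).astype(np.uint8)

# Hypothetical encoded EEG windows: one template per brain state.
ictal, interictal = (rng.integers(0, 2, D, dtype=np.uint8) for _ in range(2))

# One-shot learning: a single labeled window per class initializes the model.
acc = {"ictal": flip(ictal, 0.3).astype(np.int32),
       "interictal": flip(interictal, 0.3).astype(np.int32)}
counts = {"ictal": 1, "interictal": 1}

def prototype(label):
    """Binarize the running accumulator by majority."""
    return (acc[label] * 2 > counts[label]).astype(np.uint8)

def predict(hv):
    return min(acc, key=lambda k: np.count_nonzero(prototype(k) != hv))

def update(label, hv):
    """Online learning: fold each newly labeled window into the accumulator."""
    acc[label] += hv
    counts[label] += 1

update("ictal", flip(ictal, 0.3))   # the model keeps improving as windows stream in
print(predict(flip(ictal, 0.3)))    # → ictal
```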
<br />
===Links===<br />
* [http://ieeg-swez.ethz.ch/ The SWEC-ETHZ iEEG Database and Algorithms]<br />
* [https://www.wysscenter.ch/project/epilepsy-monitoring-seizure-forecasts/ Epilepsy monitoring and seizure forecasts at Wyss Center]<br />
* [https://www.youtube.com/watch?time_continue=87&v=ouyPXkEud40 Controlling tinnitus with neurofeedback]<br />
<br />
===Related Projects===<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = Epilepsy<br />
<br />
</DynamicPageList><br />
<br />
=Extremely Resilient Hyperdimensional Processor=<br />
[[File:BrainChip.jpg|thumb|left]]<br />
<br />
===Short Description===<br />
The most important aspect of hyperdimensional (HD) computing for hardware realization is its robustness against noise and variations in the computing platform. The principles of HD computing allow implementing resilient controllers and state machines for extremely noisy conditions. Tolerance of faulty components and low signal-to-noise-ratio (SNR) conditions is achieved by the brain-inspired properties of hypervectors: (pseudo)randomness, high dimensionality, and fully distributed holographic representations.<br />
<br />
In this project, your goal would be to design and develop an end-to-end robust HD processor with an extremely resilient controller based on the principles of HD computing, and to measure its resiliency against noisy environments and faulty components.<br />
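The robustness property can be checked numerically: corrupt a stored hypervector heavily and it remains closer to its intact original than to any unrelated vector. Below is a small, self-contained Python experiment; the 30% corruption rate and the number of distractors are chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
D = 10_000
item = rng.integers(0, 2, D, dtype=np.uint8)                # a stored hypervector
distractors = rng.integers(0, 2, (100, D), dtype=np.uint8)  # unrelated memory contents

# Corrupt 30% of the bits, emulating faulty cells or low-SNR reads.
noisy = item ^ (rng.random(D) < 0.30).astype(np.uint8)

d_self = np.count_nonzero(noisy != item) / D                   # about 0.30
d_others = np.count_nonzero(distractors != noisy, axis=1) / D  # about 0.50 each
print(f"to original: {d_self:.2f}, nearest distractor: {d_others.min():.2f}")
```

Because disagreement with an unrelated vector concentrates sharply around 50% at D = 10,000, even a 30% bit-error rate leaves a wide, reliable margin for recovery; this is the distributed holographic property a resilient controller exploits.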
<br />
===Links===<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISLPED16.pdf A Robust and Energy-Efficient Classifier Using Brain-Inspired Hyperdimensional Computing]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/DAC18.pdf PULP-HD: Accelerating Brain-Inspired High-Dimensional Computing on a Parallel Ultra-Low Power Platform]<br />
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8216554 Associative Synthesis of Finite State Automata Model of a Controlled Object with Hyperdimensional Computing]<br />
<br />
=Flexible High-Density Sensors for Hand Gesture Recognition=<br />
<!-- [[File:Hyperdimensional_EMG.png|thumb|center]] --><br />
[[File:FlexEMG.png|thumb|right|500px]]<br />
<br />
<br />
===Short Description===<br />
Surface electromyography (EMG) signals are the superposition of the electrical activity of the underlying muscles during contractions.<br />
Wearable surface EMG devices have a wide range of applications in controlling upper-limb prostheses and in hand gesture recognition systems intended for consumer human-machine interaction. A high-density EMG electrode array covering the whole arm eases targeting the most desired muscle locations and copes with sensor misplacement.<br />
For robust gesture recognition from such EMG sensors, we rely on brain-inspired HD computing.<br />
<br />
In this project, your goal would be to develop new sensors and an RTL implementation of HD computing for one-shot gesture learning in an ultra-low-power device.<br />
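One common way to encode a multi-channel EMG frame in HD computing is to bind each electrode's ID hypervector with a hypervector for its quantized amplitude and bundle the results. The sketch below is illustrative Python; the 64-electrode array and 8 quantization levels are assumed parameters, not the ones used in the referenced papers:

```python
import numpy as np

rng = np.random.default_rng(4)
D, N_CH, LEVELS = 10_000, 64, 8  # dimensionality, electrodes, levels (assumed)

channel_hv = rng.integers(0, 2, (N_CH, D), dtype=np.uint8)  # one ID per electrode
level_hv = rng.integers(0, 2, (LEVELS, D), dtype=np.uint8)  # one HV per amplitude level

def encode(frame):
    """Spatial encoder: bind each channel ID to its quantized amplitude, then bundle."""
    bound = channel_hv ^ level_hv[frame]                  # (N_CH, D) XOR-bound pairs
    return (bound.sum(axis=0) * 2 > N_CH).astype(np.uint8)

frame = rng.integers(0, LEVELS, N_CH)  # one quantized EMG frame across all channels
hv = encode(frame)

# Similar frames map to similar hypervectors: change one channel, measure the shift.
frame2 = frame.copy()
frame2[0] = (frame2[0] + 1) % LEVELS
print(np.count_nonzero(encode(frame2) != hv) / D)  # small fraction, far below 0.5
```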
<br />
===Links===<br />
* [https://bwrc.eecs.berkeley.edu/sites/default/files/files/u2630/flexemg_v2_lq.mp4#t=2 Flexible EMG Demo]<br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/ISCAS2018.pdf Gesture Recognition System with Flexible High-Density Sensors] <br />
* [https://iis-people.ee.ethz.ch/~arahimi/papers/papers/ICRC16.pdf Hyperdimensional Biosignal Processing: A Case Study for EMG-based Hand Gesture Recognition (paper)]<br />
* [https://arxiv.org/abs/1901.00234 Adaptive EMG-based hand gesture recognition using hyperdimensional computing (paper)]<br />
* [https://github.com/abbas-rahimi/HDC-EMG Related Matlab code]<br />
<br />
==Robot Learning by Demonstration==<br />
[[File:Robot-VSA.png|thumb|left|Image source: Neubert et al, IROS 2016]]<br />
===Short Description===<br />
Robot learning from demonstration is a paradigm for enabling robots to autonomously perform new tasks. <br />
HD computing is a natural fit in this area since it enables modeling the relation between the sensory inputs and actuator outputs of a robot by learning from a few demonstrations. <br />
In this project, your goal would be to develop algorithms and implementations based on HD computing that enable a robot to learn from online demonstrations. <br />
Further, such an HD computing-based paradigm can be coupled to a brain-computer interface device, enabling control of a robot via EEG signals from the brain. This has a compelling application in neuroprosthetics, where the system learns from the patient (see [https://www.youtube.com/watch?time_continue=26&v=jAtcVlTqxeA this] demonstration at EPFL).<br />
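The sensory-to-actuator mapping mentioned above has a direct HD formulation: bind each demonstrated (situation, action) pair and bundle all pairs into a single "program" hypervector; recall is unbinding followed by a clean-up search. Here is a toy Python sketch with made-up situation and action names:

```python
import numpy as np

rng = np.random.default_rng(5)
D = 10_000

def hv():
    return rng.integers(0, 2, D, dtype=np.uint8)

# Hypothetical sensory situations and the actions demonstrated in them.
situations = {"obstacle_left": hv(), "obstacle_right": hv(), "clear": hv()}
actions = {"turn_right": hv(), "turn_left": hv(), "go_straight": hv()}
demos = [("obstacle_left", "turn_right"),
         ("obstacle_right", "turn_left"),
         ("clear", "go_straight")]

# Learning from demonstration: bundle the bound pairs into one program hypervector.
pairs = [situations[s] ^ actions[a] for s, a in demos]
program = (np.sum(pairs, axis=0) * 2 > len(pairs)).astype(np.uint8)

def act(situation):
    """Recall: unbind the situation, then clean up to the nearest known action."""
    noisy_action = program ^ situations[situation]
    return min(actions, key=lambda a: np.count_nonzero(actions[a] != noisy_action))

print(act("obstacle_left"))  # → turn_right
```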
<br />
===Links===<br />
* [https://actu.epfl.ch/news/when-the-neuroprosthetics-learn-from-the-patient-5/ When the neuroprosthetics learn from the patient]<br />
* [https://www.tu-chemnitz.de/etit/proaut/publications/IROS2016_neubert.pdf Learning Vector Symbolic Architectures for Reactive Robot Behaviours] <br />
* [https://www.aaai.org/ocs/index.php/WS/AAAIW13/paper/download/7075/6578 Learning Behavior Hierarchies via High-Dimensional Sensor Projection (paper)]<br />
<br />
=Smart Eyeglass for Drones=<br />
[[File:Jins_6axis.png|thumb|text-top|right|250px]]<br />
[[File:Jins_EOG.png|thumb|text-top|right|250px]]<br />
<br />
===Short Description===<br />
This project plans to deploy and build upon a new breed of eyewear that lets you look inside yourself, instead of just at what is in front of you. These insights help you see deep within yourself by showing shifts in your emotional state, your activity logs, and your health. The device currently provides 6-axis motion sensors as well as EOG sensors, which collectively allow recognizing body movements, eye movements, and mental state. In this project, your goal is to interface with this device and extend it with other sensors to create novel machine learning applications, e.g., controlling the movements of our [http://iis-projects.ee.ethz.ch/index.php/Energy_Efficient_Autonomous_UAVs nano-size quadrotor].<br />
<br />
===Links===<br />
* [https://jins-meme.com/en/concept/ JINS Meme Smartglass]<br />
* [https://www.youtube.com/watch?v=Om_F0uyfjyc A game application of smartglass]<br />
<br />
=Hyperdimensional Affective Computing=<br />
[[File:Emotion-recognition.jpg|border|super|200px]]<br />
[[File:Emotions-on-arousal-valence-space.jpg|border|super|300px]]<br />
<br />
===Short Description===<br />
Affective computing (sometimes called artificial emotional intelligence) is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. We focus on emotion recognition and interpretation. Emotion is a subjective mental state caused by specific events, usually accompanied by characteristic behaviors and involuntary physiological changes. Multi-channel physiological signals (e.g., GSR, ECG, EEG, EOG), which can be collected easily and continuously by wearable sensors, are therefore good inputs for emotion analysis. However, because high-quality machine learning models need a huge amount of training data, energy-efficiency constraints and robustness issues become major performance bottlenecks, especially for wearable devices. Here, HD computing can come to the rescue by providing a low-energy, robust, and fast-learning computational paradigm.<br />
<br />
In this project, your goal would be to develop an efficient and robust learning method based on hyperdimensional spaces to improve accuracy and reduce energy consumption.<br />
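Multimodal fusion is naturally expressed in HD computing by binding each modality's encoded signal to a modality-ID hypervector and bundling. The sketch below is illustrative (the per-modality encodings are random placeholders); it also shows the graceful degradation that keeps the representation robust when a sensor drops out:

```python
import numpy as np

rng = np.random.default_rng(6)
D = 10_000
MODALITIES = ["GSR", "ECG", "EEG", "EOG"]
mod_id = {m: rng.integers(0, 2, D, dtype=np.uint8) for m in MODALITIES}

def fuse(per_modality):
    """Bind each modality's encoded window to its ID, then bundle into one record."""
    bound = [mod_id[m] ^ h for m, h in per_modality.items()]
    return (np.sum(bound, axis=0) * 2 > len(bound)).astype(np.uint8)

# Hypothetical encoded windows for one emotional state across the four modalities.
state = {m: rng.integers(0, 2, D, dtype=np.uint8) for m in MODALITIES}
full = fuse(state)

# Drop one modality entirely: the fused record stays recognizably close.
partial = fuse({m: h for m, h in state.items() if m != "EOG"})
print(np.count_nonzero(full != partial) / D)  # clearly below 0.5
```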
<br />
===Links===<br />
* [https://www.research-collection.ethz.ch/handle/20.500.11850/315807 Hyperdimensional Computing-based Multimodality Emotion Recognition with Physiological Signals]<br />
<br />
=More Projects=<br />
<DynamicPageList><br />
category = Available<br />
category = Digital<br />
category = Human Intranet<br />
category = HD<br />
<br />
</DynamicPageList><br />
<br />
=Completed Projects=<br />
These are projects that were recently completed: <br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = Human Intranet<br />
<br />
</DynamicPageList><br />
<br />
=Where to find us=<br />
* [https://iis-people.ee.ethz.ch/~herschmi/ Michael Hersche]<br />
** '''e-mail''': [mailto:herschmi@iis.ee.ethz.ch herschmi@iis.ee.ethz.ch]<br />
** ETZ J76.2<br />
* [https://iis-people.ee.ethz.ch/~xiaywang/ Xiaying Wang]<br />
** '''e-mail''': [mailto:xiaywang@iis.ee.ethz.ch xiaywang@iis.ee.ethz.ch]<br />
** ETZ J68.2<br />
* [https://iis-people.ee.ethz.ch/~arahimi/ Dr. Abbas Rahimi]<br />
** '''e-mail''': [mailto:abbas@iis.ee.ethz.ch abbas@iis.ee.ethz.ch]<br />
** ETZ J85<br />
* [http://www.iis.ee.ethz.ch/people/person-detail.html?persid=194234 Prof. Luca Benini]<br />
** '''e-mail''': [mailto:lbenini@iis.ee.ethz.ch lbenini@iis.ee.ethz.ch]<br />
** ETZ J84</div>Xiaywanghttp://iis-projects.ee.ethz.ch/index.php?title=Classification_of_Evoked_Local-Field_Potentials_in_Rat_Barrel_Cortex_using_Hyper-dimensional_Computing&diff=5198Classification of Evoked Local-Field Potentials in Rat Barrel Cortex using Hyper-dimensional Computing2020-06-22T15:00:42Z<p>Xiaywang: </p>
<hr />
<div>[[File:iis-project-description.jpg|400px|right|thumb]]<br />
<br />
==Description==<br />
One of the most ambitious goals of neuroscience and its neuroprosthetic applications is to interface intelligent electronic devices with the biological brain to cure neurological diseases. Neural coding is the branch of neuroscience that investigates the relationship between stimuli and neuronal responses. This emerging research field builds on our growing understanding of brain circuits and on recent technological advances in the miniaturization of implantable multielectrode arrays (MEAs) to record brain signals at high spatio-temporal resolution. Data processing is needed to decode useful information from the recorded neural activity, to better understand the function of the underlying neural circuits and, in perspective, to operate neuroprosthetic devices. In this context, artificial intelligence combined with low-power embedded devices is a very promising starting point towards real-time decoding of cerebral activity with low-power digital processors for brain-machine interfacing and neuroprosthetic applications [1].<br />
<br />
Brain-inspired hyperdimensional computing (HDC) explores the emulation of cognition by computing with hypervectors as an alternative to computing with numbers. HDC has proven to be promising for energy-efficient computing applied to biosignal classification [2].<br />
<br />
This project focuses on processing data of evoked Local Field Potentials (LFPs) recorded from the rat barrel cortex using a miniaturized 16-by-16 MEA while stimulating the principal whisker. The sensor has been implanted in vivo, and 2D images have been acquired from different cortical depths. The deflection of the whisker is performed by means of a piezo-electric bender using various stimulation amplitudes. The aim of the project is to assess the performance of HDC in classifying the different external stimuli applied to the animal.<br />
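As an illustration of the intended HDC pipeline, the sketch below encodes synthetic 16-by-16 frames (electrode-ID hypervectors bound to quantized amplitudes, then bundled) and classifies them against one prototype per stimulation amplitude. All data here are random placeholders; the four quantization levels, the noise model, and the one-shot training are assumptions for the sketch, not project results:

```python
import numpy as np

rng = np.random.default_rng(7)
D, N_EL, LEVELS = 10_000, 16 * 16, 4  # dimensionality, electrodes, levels (assumed)

el_hv = rng.integers(0, 2, (N_EL, D), dtype=np.uint8)     # one ID per MEA electrode
lvl_hv = rng.integers(0, 2, (LEVELS, D), dtype=np.uint8)  # quantized LFP levels

def encode(frame):
    """Encode one 16x16 frame: bind electrode IDs to quantized values, then bundle."""
    bound = el_hv ^ lvl_hv[frame.ravel()]
    return (bound.sum(axis=0) * 2 > N_EL).astype(np.uint8)

# Synthetic dataset: each stimulation amplitude evokes a characteristic spatial
# pattern, observed with per-electrode noise.
base = [rng.integers(0, LEVELS, (16, 16)) for _ in range(3)]

def noisy(f):
    g = f.copy()
    g.flat[rng.choice(N_EL, 40, replace=False)] = rng.integers(0, LEVELS, 40)
    return g

prototypes = [encode(noisy(b)) for b in base]  # one-shot: a single frame per class

def classify(frame):
    h = encode(frame)
    return int(np.argmin([np.count_nonzero(p != h) for p in prototypes]))

print([classify(noisy(b)) for b in base])  # → [0, 1, 2]
```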
<br />
<br />
The task includes the following main sub-points:<br />
<ul><li> Understand the LFP basics and interpret the dataset.</li><br />
<li> Develop a (high-level Python or Matlab) machine learning or deep learning algorithm to classify the stimulation amplitudes or to detect the signal onset. </li><br />
<li> Map the algorithm onto the hardware (C programming on PULP, parallel computing).</li><br />
<li> Conduct in-vivo experiments to validate the method with a realistic setting.</li></ul><br />
<br />
The task is flexible and will be adapted to the student's skills and interests.<br />
<br />
<br />
===Status: Available ===<br />
* Semester project<br />
: Supervisors: [[:User:xiaywang|Xiaying Wang]], [[:User:herschmi|Michael Hersche]]<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Prerequisites===<br />
: Knowledge in Machine Learning (preprocessing, feature extraction, classifier, supervised-learning)<br />
: Embedded system programming<br />
: Python, C/C++, Matlab<br />
<br />
===Literature===<br />
* [https://ieeexplore.ieee.org/abstract/document/8584830] X. Wang, et al., Embedded Classification of Local Field Potentials Recorded from Rat Barrel Cortex with Implanted Multi-Electrode Array, 2018<br />
* [https://ieeexplore.ieee.org/document/8450511] A. Rahimi, et al., Hyperdimensional biosignal processing: A case study for EMG-based hand gesture recognition, 2016<br />
<br />
<br />
===IIS Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/huang.en.html Qiuting Huang] ---><br />
<!-- : [http://lne.ee.ethz.ch/en/general-information/people/professor.html Vanessa Wood] ---><br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/mluisier.en.html Mathieu Luisier] ---><br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/schenk.en.html Andreas Schenk] ---><br />
<!-- : [http://www.dz.ee.ethz.ch/en/general-information/about/staff/uid/364.html Hubert Kaeslin] ---><br />
[[#top|↑ top]]<br />
<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Design Review]]'''<br />
* '''[[Coding Guidelines]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
<br />
[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Xiaywang]]<br />
[[Category:Herschmi]]<br />
[[Category:Available]]<br />
[[Category:HD]]<br />
<br />
</div>Xiaywang