[[Category:Digital]]
[[Category:Semester Thesis]]
[[Category:Master Thesis]]
[[Category:Available]]
[[Category:2021]]
[[Category:Hot]]
[[File:IBM_ZRLab.png|center]]

==Short Description==
Today, we are entering the era of cognitive computing, which holds great promise for deriving intelligence and knowledge from huge volumes of data. In today’s computers, which are based on the von Neumann architecture, huge amounts of data must be shuttled back and forth at high speed, a task at which this architecture is inefficient.

It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich, in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore such computing paradigms, ranging from in-memory computing to brain-inspired neuromorphic computing. Our research spans devices, architectures, algorithms, and applications. <!--Here is a list of available projects:

* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge]
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']
-->

===About IBM Research–Zurich===
The Zurich lab is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, to be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]
==Hybrid AI Systems (HAS)==

[[File:NatureElectronics20.jpg|thumb|right|200px]]
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but each has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing on the rich framework of symbolic computation to manipulate internal representations for reasoning and inference. But it is non-adaptive: it lacks the ability to learn from examples or by direct observation of the world. Neural networks, on the other hand, can learn from data and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input–output relationships. Without the richness of symbolic computation, neural networks lack simple but powerful operations such as variable binding, which allow for analogy making and reasoning and underlie the ability to generalize from few examples.

We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''This leads us to a novel information-processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain a hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar from symbolic AI, and to learn from the statistics of data in ways familiar from artificial neural networks and deep learning. Furthermore, the principles of such a HAS enable few-shot learning and extremely robust operation in the presence of failures, defects, variations, and noise, all of which are complementary to ultra-low-energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research in this direction (listed in the table below) spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.
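
As a rough illustration of the kind of basic high-dimensional operations mentioned above (this is not code from any of the projects), the following sketch uses plain NumPy to show binding, bundling, and similarity search on random bipolar hypervectors; the dimensionality and all names are arbitrary choices made for this example.

<syntaxhighlight lang="python">
import numpy as np

D = 10000                                  # dimensionality of the hypervectors
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector with entries in {-1, +1}."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding (role-filler association / variable binding): element-wise multiplication."""
    return a * b

def bundle(*hvs):
    """Bundling (superposition of several items): element-wise sign of the sum."""
    return np.sign(np.sum(hvs, axis=0)).astype(int)

def similarity(a, b):
    """Cosine similarity: close to 0 for unrelated hypervectors, large for related ones."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Encode a tiny record {colour: red, shape: circle} as a single hypervector.
colour, shape = random_hv(), random_hv()   # role hypervectors
red, circle = random_hv(), random_hv()     # filler hypervectors
record = bundle(bind(colour, red), bind(shape, circle))

# Unbinding with a role hypervector recovers a noisy version of its filler,
# which can then be identified by similarity search over the known fillers.
guess = bind(record, colour)
print(similarity(guess, red))              # clearly above chance
print(similarity(guess, circle))           # close to zero
</syntaxhighlight>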
===Useful Reading===
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020.
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.

===Prerequisites===
*Python
*Background in machine learning (''recommended'')
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')
*VLSI I (''recommended'')
<!-- ==In-Memory Computing (IMC)==

[[File:NNcover_imc.jpg|thumb|right|200px]]
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are being sought in which computation and storage are collocated. A fascinating new approach is computational memory, in which the physics of nanoscale memory devices is used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies such as '''PCM''' and '''ReRAM''', conventional '''SRAM''' has also been proposed as the underlying storage technology for Computational Memories.

===Useful Reading===
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]

===Prerequisites===
*General interest in deep learning and memory/system design
*VLSI I and VLSI II (''recommended'')
Specific requirements for the different projects vary and are generally negotiable.
-->
==Available Projects==
We invite applications from students who would like to conduct their thesis (Bachelor’s, semester, or Master’s) or an internship project on this exciting new topic at the IBM Research lab in Zurich.
<!--
The work performed could range from low-level hardware experiments on phase-change memory chips comprising more than one million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interaction with several researchers across IBM Research focusing on various aspects of the project. The ideal candidate has a multi-disciplinary background, strong mathematical aptitude, and solid programming skills. Prior knowledge of emerging memory technologies such as phase-change memory is a bonus but not necessary.
-->
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type <br />
|-<br />
<br />
| MA/SA/BA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Zero-shot learning || TBD || algorithmic design<br />
|-<br />
<br />
| MA/SA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || Hardware design and experiments<br />
|-<br />
<br />
| MA/SA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
| MA|| Crytography meets in-memory computing || [https://iis-projects.ee.ethz.ch/images/6/62/Y2021_10_Advertisement_VME.pdf Link to description] || algorithmic design and hardware experiments<br />
|-<br />
<br />
<br />
|}<br />
==Contact==
: The thesis will be carried out at IBM Research–Zurich in Rüschlikon.
; Hybrid AI Systems (HAS) projects
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]
: Contact (at IBM): [mailto:kar@zurich.ibm.com Geethan Karunaratne]
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]
: Professor: [https://iis.ee.ethz.ch/people/person-detail.html?persid=194234 Luca Benini]
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
<!-- ==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
---><br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their thesis (bachelor, semester, and master) or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type <br />
|-<br />
<br />
| MA/SA/BA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Zero-shot learning || TBD || algorithmic design<br />
|-<br />
<br />
| MA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || Hardware design and experiments<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
<br />
<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:kar@zurich.ibm.com Geethan Karunaratne]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [https://iis.ee.ethz.ch/people/person-detail.html?persid=194234 Luca Benini]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6945IBM Research2021-09-16T07:54:11Z<p>Herschmi: /* Available Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
<!-- ==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
---><br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their thesis (bachelor, semester, and master) or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type <br />
|-<br />
<br />
| MA/SA/BA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
| MA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || Hardware design and experiments<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
<br />
<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:kar@zurich.ibm.com Geethan Karunaratne]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [https://iis.ee.ethz.ch/people/person-detail.html?persid=194234 Luca Benini]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6944IBM Research2021-09-16T07:49:48Z<p>Herschmi: /* Available Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
<!-- ==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
---><br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their thesis or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type <br />
|-<br />
<br />
| MA/SA/BA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
| MA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || Hardware design and experiments<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
<br />
<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:kar@zurich.ibm.com Geethan Karunaratne]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [https://iis.ee.ethz.ch/people/person-detail.html?persid=194234 Luca Benini]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6943IBM Research2021-09-16T07:48:57Z<p>Herschmi: /* In-Memory Computing (IMC) */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
<!-- ==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
---><br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their thesis or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type <br />
|-<br />
<br />
| MA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || Hardware design and experiments<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
<br />
<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:kar@zurich.ibm.com Geethan Karunaratne]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [https://iis.ee.ethz.ch/people/person-detail.html?persid=194234 Luca Benini]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6942IBM Research2021-09-16T07:46:18Z<p>Herschmi: /* Prerequisites */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We invite applications from students who would like to conduct their thesis or an internship project on these exciting topics at the IBM Research lab in Zurich.<br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type <br />
|-<br />
<br />
| MA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || Hardware design and experiments<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || Algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || Algorithmic design<br />
|-<br />
<br />
| MA || Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || Algorithmic/hardware design<br />
|-<br />
<br />
<br />
<br />
|}<br />
<br />
==Contact==<br />
: The thesis will be carried out at IBM Research–Zurich in Rüschlikon.<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:kar@zurich.ibm.com Geethan Karunaratne]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [https://iis.ee.ethz.ch/people/person-detail.html?persid=194234 Luca Benini]</div>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their thesis or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type <br />
|-<br />
<br />
| MA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || Hardware design and experiments<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
<br />
<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:kar@zurich.ibm.com Geethan Karunaratne]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [https://iis.ee.ethz.ch/people/person-detail.html?persid=194234 Luca Benini]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6940IBM Research2021-09-16T07:44:22Z<p>Herschmi: /* Contact */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their thesis or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type <br />
|-<br />
<br />
| MA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || Hardware design and experiments<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
<br />
<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:kar@zurich.ibm.com Geethan Karunaratne]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6939IBM Research2021-09-16T07:40:28Z<p>Herschmi: /* Available Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their thesis or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type <br />
|-<br />
<br />
| MA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || Hardware design and experiments<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
<br />
<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6938IBM Research2021-09-16T07:40:02Z<p>Herschmi: /* Available Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type <br />
|-<br />
<br />
| MA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || Hardware design and experiments<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
<br />
<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6937IBM Research2021-09-16T07:39:28Z<p>Herschmi: /* Available Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
<br />
| MA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || Hardware design and experiments<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
<br />
<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6936IBM Research2021-09-16T07:39:07Z<p>Herschmi: /* Available Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and memory units, which are physically separated. As computation becomes increasingly data-centric and the scalability limits in terms of performance and power are reached, alternative computing paradigms are being sought in which computation and storage are collocated. A fascinating new approach is that of computational memory, where the physics of nanoscale memory devices is exploited to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies such as '''PCM''' and '''ReRAM''', conventional '''SRAM''' has also been proposed as the underlying storage technology for Computational Memory.<br />
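<br />
The following sketch (Python/NumPy) illustrates the basic idea behind one such in-memory computational task, a matrix-vector multiplication. It is a toy model under stated assumptions, not a model of any particular IBM device or chip: a weight matrix is mapped to differential pairs of conductances, the input vector is applied as read voltages, and each output is the summed column current, perturbed here by a simple Gaussian programming-noise assumption. The conductance range, read voltage, and 5% noise figure are illustrative values.<br />
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(42)

G_MAX = 25e-6   # assumed maximum device conductance in siemens (illustrative)
V_READ = 0.2    # assumed read voltage in volts (illustrative)


def program_conductances(w, noise_sigma=0.05):
    """Map weights in [-1, 1] to a differential conductance pair (G+, G-).

    The 5% relative Gaussian noise stands in for the programming inaccuracy of a
    PCM/ReRAM cell; it is an assumption, not a measured device model.
    """
    w = np.clip(w, -1.0, 1.0)
    g_pos = np.where(w > 0, w, 0.0) * G_MAX
    g_neg = np.where(w < 0, -w, 0.0) * G_MAX
    jitter = lambda g: g * (1.0 + noise_sigma * rng.standard_normal(g.shape))
    return jitter(g_pos), jitter(g_neg)


def crossbar_mvm(g_pos, g_neg, x):
    """One 'analog' matrix-vector multiplication on the crossbar.

    Inputs are encoded as read voltages; the per-column currents are summed in
    place, so the multiply-accumulate happens where the matrix is stored instead
    of shuttling every weight to a processor.
    """
    v = x * V_READ
    i_out = v @ (g_pos - g_neg)          # column currents
    return i_out / (V_READ * G_MAX)      # rescale back to weight units


# Compare the noisy in-memory result with the exact digital MVM.
w = rng.uniform(-1, 1, size=(64, 32))
x = rng.uniform(-1, 1, size=64)
y_analog = crossbar_mvm(*program_conductances(w), x)
y_exact = x @ w
print("relative error:", np.linalg.norm(y_analog - y_exact) / np.linalg.norm(y_exact))
</syntaxhighlight>
In an actual computational-memory macro the column currents would additionally pass through ADCs and calibration, and the error budget depends on the real device characteristics; the point of the sketch is only the in-place accumulation that avoids moving the matrix to the processor.<br />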
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We invite applications from students who would like to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on these topics.<br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%"|Type !! style="width: 20%"|Project !! style="width: 55%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type<br />
|-<br />
| MA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || HAS || Hardware design and experiments<br />
|-<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || HAS || Hardware/algorithmic design<br />
|-<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || HAS || Algorithmic design<br />
|-<br />
| MA/SA/BA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || HAS || Algorithmic design<br />
|-<br />
| MA || Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || HAS || Algorithmic/hardware design<br />
|-<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || Algorithmic design<br />
|}<br />
<br />
==Contact==<br />
: The thesis work will be carried out at the IBM Research lab in Rüschlikon (Zurich)<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Herschmi
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
<br />
| MA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || HAS || Hardware design and experiments<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || HAS || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || HAS || algorithmic/hardware design<br />
|-<br />
<br />
<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6934IBM Research2021-09-16T07:37:36Z<p>Herschmi: /* Available Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
<br />
| MA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || HAS || Hardware design and experiments<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || HAS || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || HAS || algorithmic/hardware design<br />
|-<br />
<br />
<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6933IBM Research2021-09-16T07:37:04Z<p>Herschmi: </p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
<br />
| MA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || HAS || Hardware design and experiments<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || HAS || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || HAS || algorithmic/hardware design<br />
|-<br />
<br />
<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
}<br />
<br />
<br />
<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6932IBM Research2021-09-16T07:36:28Z<p>Herschmi: /* Contact */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
<br />
| MA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || HAS || Hardware design and experiments<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || HAS || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || HAS || algorithmic/hardware design<br />
|-<br />
<br />
<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
}<br />
<br />
<br />
<!--<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-Analog converters ('''DAC'''s) and Analog-to-Digital converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and analog domain, in which computationally expensive tasks, like Matrix-Vector Multiplications ('''MVM'''), are carried out with O(1) complexity. Each conversion costs a certain amount of energy and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system level requirements on ADC and DAC for optimal performance of Deep Neural Network inference using CM. Furthermore, the effects of noise, non-linearity and manufacturing tolerances shall be examined and counter measurements, like for example periodic digital ADC recalibration and digital post processing, shall be evaluated with regards to effectivity and energy costs.|| IMC || analog circuit design<br />
|<br />
---><br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6931IBM Research2021-09-16T07:35:42Z<p>Herschmi: /* Available Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
<br />
| MA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || HAS || Hardware design and experiments<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || HAS || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || HAS || algorithmic/hardware design<br />
|-<br />
<br />
<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
}<br />
<br />
<br />
<!--<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-Analog converters ('''DAC'''s) and Analog-to-Digital converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and analog domain, in which computationally expensive tasks, like Matrix-Vector Multiplications ('''MVM'''), are carried out with O(1) complexity. Each conversion costs a certain amount of energy and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system level requirements on ADC and DAC for optimal performance of Deep Neural Network inference using CM. Furthermore, the effects of noise, non-linearity and manufacturing tolerances shall be examined and counter measurements, like for example periodic digital ADC recalibration and digital post processing, shall be evaluated with regards to effectivity and energy costs.|| IMC || analog circuit design<br />
|<br />
---><br />
<br />
==Contact==<br />
: The thesis will be carried out at IBM Research Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=User:Herschmi&diff=6930User:Herschmi2021-09-16T07:33:21Z<p>Herschmi: /* Available Projects */</p>
<hr />
<div>[[File:Hersche.jpg|thumb|200px]]<br />
== Michael Hersche ==<br />
Michael Hersche received his M.Sc. degree from the Swiss Federal Institute of Technology Zurich (ETHZ), Switzerland, where he is currently pursuing a Ph.D. degree. From 2019 to 2021, he was a research assistant at ETHZ in the group of Prof. Luca Benini at the Integrated Systems Laboratory. Currently, he is working as a Ph.D. student at IBM Research Zurich. His research targets digital signal processing, artificial intelligence, and communication, with a focus on hyperdimensional computing.<br />
<br />
==Interests==<br />
* Hyperdimensional Computing<br />
* Machine Learning<br />
* Brain-Computer Interfaces<br />
<br />
==Contact Information==<br />
* '''Office''': ETZ J 76.2<br />
* '''e-mail''': [mailto:herschmi@iis.ee.ethz.ch herschmi@iis.ee.ethz.ch]<br />
* '''www''': [http://www.iis.ee.ethz.ch/people/person-detail.MjIxOTI2.TGlzdC8yMDA4LDk5MDE4ODk4MA==.html IIS Homepage]<br />
* '''phone''': (+41 44 63) 259 12 <br />
[[Category:Supervisors]]<br />
[[Category:Digital]]<br />
[[Category:Human_Intranet]]<br />
<br />
==Available Projects==<br />
See available projects under [[IBM Research]]. <br />
<!--<br />
<DynamicPageList><br />
supresserrors = true<br />
category = Available<br />
category = Herschmi<br />
</DynamicPageList><br />
---><br />
<br />
== Projects in Progress==<br />
<DynamicPageList><br />
supresserrors = true<br />
category = In progress<br />
category = Herschmi<br />
</DynamicPageList><br />
<br />
== Completed Projects==<br />
<DynamicPageList><br />
supresserrors = true<br />
category = Completed<br />
category = Herschmi<br />
</DynamicPageList></div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=User:Herschmi&diff=6929User:Herschmi2021-09-16T07:32:56Z<p>Herschmi: /* Available Projects */</p>
<hr />
<div>[[File:Hersche.jpg|thumb|200px]]<br />
== Michael Hersche ==<br />
Michael Hersche received his M.Sc. degree from the Swiss Federal Institute of Technology Zurich (ETHZ), Switzerland, where he is currently pursuing a Ph.D. degree. From 2019 to 2021, he was a research assistant at ETHZ in the group of Prof. Luca Benini at the Integrated Systems Laboratory. Currently, he is working as a Ph.D. student at IBM Research Zurich. His research targets digital signal processing, artificial intelligence, and communication, with a focus on hyperdimensional computing.<br />
<br />
==Interests==<br />
* Hyperdimensional Computing<br />
* Machine Learning<br />
* Brain-Computer Interfaces<br />
<br />
==Contact Information==<br />
* '''Office''': ETZ J 76.2<br />
* '''e-mail''': [mailto:herschmi@iis.ee.ethz.ch herschmi@iis.ee.ethz.ch]<br />
* '''www''': [http://www.iis.ee.ethz.ch/people/person-detail.MjIxOTI2.TGlzdC8yMDA4LDk5MDE4ODk4MA==.html IIS Homepage]<br />
* '''phone''': (+41 44 63) 259 12 <br />
[[Category:Supervisors]]<br />
[[Category:Digital]]<br />
[[Category:Human_Intranet]]<br />
<br />
==Available Projects==<br />
See available projects under [[IBM Research]]. <br />
<!--<br />
<DynamicPageList><br />
supresserrors = true<br />
category = Available<br />
category = Herschmi<br />
</DynamicPageList><br />
---><br />
<br />
== Projects in Progress==<br />
<DynamicPageList><br />
supresserrors = true<br />
category = In progress<br />
category = Herschmi<br />
</DynamicPageList><br />
<br />
== Completed Projects==<br />
<DynamicPageList><br />
supresserrors = true<br />
category = Completed<br />
category = Herschmi<br />
</DynamicPageList></div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=User:Herschmi&diff=6928User:Herschmi2021-09-16T07:32:27Z<p>Herschmi: /* Available Projects */</p>
<hr />
<div>[[File:Hersche.jpg|thumb|200px]]<br />
== Michael Hersche ==<br />
Michael Hersche received his M.Sc. degree from the Swiss Federal Institute of Technology Zurich (ETHZ), Switzerland, where he is currently pursuing a Ph.D. degree. From 2019 to 2021, he was a research assistant at ETHZ in the group of Prof. Luca Benini at the Integrated Systems Laboratory. Currently, he is working as a Ph.D. student at IBM Research Zurich. His research targets digital signal processing, artificial intelligence, and communication, with a focus on hyperdimensional computing.<br />
<br />
==Interests==<br />
* Hyperdimensional Computing<br />
* Machine Learning<br />
* Brain-Computer Interfaces<br />
<br />
==Contact Information==<br />
* '''Office''': ETZ J 76.2<br />
* '''e-mail''': [mailto:herschmi@iis.ee.ethz.ch herschmi@iis.ee.ethz.ch]<br />
* '''www''': [http://www.iis.ee.ethz.ch/people/person-detail.MjIxOTI2.TGlzdC8yMDA4LDk5MDE4ODk4MA==.html IIS Homepage]<br />
* '''phone''': (+41 44 63) 259 12 <br />
[[Category:Supervisors]]<br />
[[Category:Digital]]<br />
[[Category:Human_Intranet]]<br />
<br />
==Available Projects==<br />
See available projects under [[IBM Research]]. <br />
<!--<br />
<DynamicPageList><br />
supresserrors = true<br />
category = Available<br />
category = Herschmi<br />
</DynamicPageList><br />
---><br />
<br />
== Projects in Progress==<br />
<DynamicPageList><br />
supresserrors = true<br />
category = In progress<br />
category = Herschmi<br />
</DynamicPageList><br />
<br />
== Completed Projects==<br />
<DynamicPageList><br />
supresserrors = true<br />
category = Completed<br />
category = Herschmi<br />
</DynamicPageList></div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=User:Herschmi&diff=6927User:Herschmi2021-09-16T07:31:33Z<p>Herschmi: /* Michael Hersche */</p>
<hr />
<div>[[File:Hersche.jpg|thumb|200px]]<br />
== Michael Hersche ==<br />
Michael Hersche received his M.Sc. degree from the Swiss Federal Institute of Technology Zurich (ETHZ), Switzerland, where he is currently pursuing a Ph.D. degree. From 2019 to 2021, he was a research assistant at ETHZ in the group of Prof. Luca Benini at the Integrated Systems Laboratory. Currently, he is working as a Ph.D. student at IBM Research Zurich. His research targets digital signal processing, artificial intelligence, and communication, with a focus on hyperdimensional computing.<br />
<br />
==Interests==<br />
* Hyperdimensional Computing<br />
* Machine Learning<br />
* Brain-Computer Interfaces<br />
<br />
==Contact Information==<br />
* '''Office''': ETZ J 76.2<br />
* '''e-mail''': [mailto:herschmi@iis.ee.ethz.ch herschmi@iis.ee.ethz.ch]<br />
* '''www''': [http://www.iis.ee.ethz.ch/people/person-detail.MjIxOTI2.TGlzdC8yMDA4LDk5MDE4ODk4MA==.html IIS Homepage]<br />
* '''phone''': (+41 44 63) 259 12 <br />
[[Category:Supervisors]]<br />
[[Category:Digital]]<br />
[[Category:Human_Intranet]]<br />
<br />
==Available Projects==<br />
Here is a list of available projects. If you come up with your own project idea covering my interests, feel free to contact me.<br />
<DynamicPageList><br />
supresserrors = true<br />
category = Available<br />
category = Herschmi<br />
</DynamicPageList><br />
<br />
== Projects in Progress==<br />
<DynamicPageList><br />
supresserrors = true<br />
category = In progress<br />
category = Herschmi<br />
</DynamicPageList><br />
<br />
== Completed Projects==<br />
<DynamicPageList><br />
supresserrors = true<br />
category = Completed<br />
category = Herschmi<br />
</DynamicPageList></div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=User:Herschmi&diff=6926User:Herschmi2021-09-16T07:30:02Z<p>Herschmi: /* Interests */</p>
<hr />
<div>[[File:Hersche.jpg|thumb|200px]]<br />
== Michael Hersche ==<br />
Michael Hersche received his M.Sc. degree from the Swiss Federal Institute of Technology Zurich (ETHZ), Switzerland, where he is currently pursuing a Ph.D. degree. Since 2019, he has been a research assistant at ETHZ in the group of Prof. Luca Benini at the Integrated Systems Laboratory. His research targets digital signal processing, artificial intelligence, and communication, with a focus on hyperdimensional computing.<br />
<br />
==Interests==<br />
* Hyperdimensional Computing<br />
* Machine Learning<br />
* Brain-Computer Interfaces<br />
<br />
==Contact Information==<br />
* '''Office''': ETZ J 76.2<br />
* '''e-mail''': [mailto:herschmi@iis.ee.ethz.ch herschmi@iis.ee.ethz.ch]<br />
* '''www''': [http://www.iis.ee.ethz.ch/people/person-detail.MjIxOTI2.TGlzdC8yMDA4LDk5MDE4ODk4MA==.html IIS Homepage]<br />
* '''phone''': (+41 44 63) 259 12 <br />
[[Category:Supervisors]]<br />
[[Category:Digital]]<br />
[[Category:Human_Intranet]]<br />
<br />
==Available Projects==<br />
Here is a list of available projects. If you come up with your own project idea covering my interests, feel free to contact me.<br />
<DynamicPageList><br />
supresserrors = true<br />
category = Available<br />
category = Herschmi<br />
</DynamicPageList><br />
<br />
== Projects in Progress==<br />
<DynamicPageList><br />
supresserrors = true<br />
category = In progress<br />
category = Herschmi<br />
</DynamicPageList><br />
<br />
== Completed Projects==<br />
<DynamicPageList><br />
supresserrors = true<br />
category = Completed<br />
category = Herschmi<br />
</DynamicPageList></div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=Classification_of_Evoked_Local-Field_Potentials_in_Rat_Barrel_Cortex_using_Hyper-dimensional_Computing&diff=6925Classification of Evoked Local-Field Potentials in Rat Barrel Cortex using Hyper-dimensional Computing2021-09-16T07:29:17Z<p>Herschmi: /* Status: Available */</p>
<hr />
<div>[[File:iis-project-description.jpg|400px|right|thumb]]<br />
<br />
==Description==<br />
One of the most ambitious goals of neuroscience and its neuroprosthetic applications is to interface intelligent electronic devices with the biological brain to cure neurological diseases. Neural coding is the branch of neuroscience that investigates the relationship between stimuli and neuronal responses. This emerging research field builds on our growing understanding of brain circuits and on recent technological advances in the miniaturization of implantable multielectrode arrays (MEAs) to record brain signals at high spatio-temporal resolution. Data processing is needed to decode useful information from the recorded neural activity, both to better understand the function of the underlying neural circuits and, in the longer term, to operate neuroprosthetic devices. In this context, artificial intelligence combined with low-power embedded devices is a very promising starting point towards real-time decoding of cerebral activity on low-power digital processors for brain-machine interfacing and neuroprosthetic applications [1].<br />
<br />
Brain-inspired hyperdimensional computing (HDC) explores the emulation of cognition by computing with hypervectors as an alternative to computing with numbers. HDC has proven to be promising for energy-efficient computing applied to biosignal classification [2].<br />
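To illustrate the computing style, here is a minimal NumPy sketch of the basic HDC operations (binding, bundling, and similarity); the dimensionality, the bipolar representation, and the item names are illustrative assumptions, not the encoder that will be used in this project.<br />
<pre>
import numpy as np

D = 10000                                   # hypervector dimensionality
rng = np.random.default_rng(0)

def rand_hv():
    """Random bipolar {-1, +1} hypervector."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding (element-wise multiplication) associates two hypervectors."""
    return a * b

def bundle(hvs):
    """Bundling (element-wise majority) superimposes several hypervectors."""
    return np.sign(np.sum(hvs, axis=0))

def similarity(a, b):
    """Normalized dot product: close to 0 for unrelated hypervectors."""
    return float(a @ b) / D

# Encode two toy channel/value pairs and bundle them into a single record.
channel_1, channel_2 = rand_hv(), rand_hv()
value_a, value_b = rand_hv(), rand_hv()
record = bundle([bind(channel_1, value_a), bind(channel_2, value_b)])

# Unbinding with channel_1 recovers something similar to value_a but not to value_b.
print(similarity(bind(record, channel_1), value_a))   # clearly above 0
print(similarity(bind(record, channel_1), value_b))   # close to 0
</pre>
Classification in HDC then amounts to encoding each trial into such a hypervector and comparing it against per-class prototype hypervectors with the same similarity measure.<br />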
<br />
This project focuses on processing data of evoked Local Field Potentials (LFPs) recorded from the rat barrel cortex using a miniaturized 16-by-16 MEA while stimulating the principal whisker. The sensor has been implanted in vivo and 2D images have been acquired from different cortical depths. The deflection of the whisker is performed by means of a piezo-electric bender using various stimulation amplitudes. The aim of the project is to assess the performance of HDC in classifying the different external stimuli applied to the animal.<br />
<br />
<br />
The task includes the following main sub-points:<br />
<ul><li> Understand the LFP basics and interpret the dataset.</li><br />
<li> Develop a (high-level Python or Matlab) machine learning or deep learning algorithm to classify the stimulation amplitudes or to detect the signal onset. </li><br />
<li> Map the algorithm onto the hardware (C programming on PULP, parallel computing).</li><br />
<li> Conduct in-vivo experiments to validate the method in a realistic setting.</li></ul><br />
<br />
The task is flexible and will be adapted to the student's skills and preferences.<br />
<br />
<br />
===Status: Available ===<br />
* Semester project<br />
: Supervisors: [[:User:xiaywang|Xiaying Wang]]<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Prerequisites===<br />
: Knowledge in Machine Learning (preprocessing, feature extraction, classifier, supervised-learning)<br />
: Embedded system programming<br />
: Python, C/C++, Matlab<br />
<br />
===Literature===<br />
* [https://ieeexplore.ieee.org/abstract/document/8584830] X. Wang, et al., Embedded Classification of Local Field Potentials Recorded from Rat Barrel Cortex with Implanted Multi-Electrode Array, 2018<br />
* [https://ieeexplore.ieee.org/document/8450511] A. Rahimi, et al., Hyperdimensional biosignal processing: A case study for EMG-based hand gesture recognition, 2016<br />
<br />
<br />
===IIS Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/huang.en.html Qiuting Huang] ---><br />
<!-- : [http://lne.ee.ethz.ch/en/general-information/people/professor.html Vanessa Wood] ---><br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/mluisier.en.html Mathieu Luisier] ---><br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/schenk.en.html Andreas Schenk] ---><br />
<!-- : [http://www.dz.ee.ethz.ch/en/general-information/about/staff/uid/364.html Hubert Kaeslin] ---><br />
[[#top|↑ top]]<br />
<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Design Review]]'''<br />
* '''[[Coding Guidelines]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
<br />
[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Xiaywang]]<br />
[[Category:Available]]<br />
[[Category:Hyper-dimensional Computing]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
[[Category:Analog]]<br />
[[Category:Nano-TCAD]]<br />
[[Category:Nano Electronics]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=Classification_of_Evoked_Local-Field_Potentials_in_Rat_Barrel_Cortex_using_Hyper-dimensional_Computing&diff=6924Classification of Evoked Local-Field Potentials in Rat Barrel Cortex using Hyper-dimensional Computing2021-09-16T07:29:09Z<p>Herschmi: </p>
<hr />
<div>[[File:iis-project-description.jpg|400px|right|thumb]]<br />
<br />
==Description==<br />
One of the most ambitious goals of neuroscience and its neuroprosthetic applications is to interface intelligent electronic devices with the biological brain to cure neurological diseases. Neural coding is the branch of neuroscience that investigates the relationship between stimuli and neuronal responses. This emerging research field builds on our growing understanding of brain circuits and on recent technological advances in the miniaturization of implantable multielectrode arrays (MEAs) to record brain signals at high spatio-temporal resolution. Data processing is needed to decode useful information from the recorded neural activity, both to better understand the function of the underlying neural circuits and, in the longer term, to operate neuroprosthetic devices. In this context, artificial intelligence combined with low-power embedded devices is a very promising starting point towards real-time decoding of cerebral activity on low-power digital processors for brain-machine interfacing and neuroprosthetic applications [1].<br />
<br />
Brain-inspired hyperdimensional computing (HDC) explores the emulation of cognition by computing with hypervectors as an alternative to computing with numbers. HDC has proven to be promising for energy-efficient computing applied to biosignal classification [2].<br />
<br />
This project focuses on processing data of evoked Local Field Potentials (LFPs) recorded from the rat barrel cortex using a miniaturized 16-by-16 MEA while stimulating the principal whisker. The sensor has been implanted in vivo and 2D images have been acquired from different cortical depths. The deflection of the whisker is performed by means of a piezo-electric bender using various stimulation amplitudes. The aim of the project is to assess the performance of HDC in classifying the different external stimuli applied to the animal.<br />
<br />
<br />
The task includes the following main sub-points:<br />
<ul><li> Understand the LFP basics and interpret the dataset.</li><br />
<li> Develop a (high-level Python or Matlab) machine learning or deep learning algorithm to classify the stimulation amplitudes or to detect the signal onset. </li><br />
<li> Map the algorithm onto the hardware (C programming on PULP, parallel computing).</li><br />
<li> Conduct in-vivo experiments to validate the method in a realistic setting.</li></ul><br />
<br />
The task is flexible and will be adapted to the student's skills and preferences.<br />
<br />
<br />
===Status: Available ===<br />
* Semester project<br />
: Supervisors: [[:User:xiaywang|Xiaying Wang]], [[:User:herschmi|Michael Hersche]]<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Prerequisites===<br />
: Knowledge in Machine Learning (preprocessing, feature extraction, classifier, supervised-learning)<br />
: Embedded system programming<br />
: Python, C/C++, Matlab<br />
<br />
===Literature===<br />
* [https://ieeexplore.ieee.org/abstract/document/8584830] X. Wang, et al., Embedded Classification of Local Field Potentials Recorded from Rat Barrel Cortex with Implanted Multi-Electrode Array, 2018<br />
* [https://ieeexplore.ieee.org/document/8450511] A. Rahimi, et al., Hyperdimensional biosignal processing: A case study for EMG-based hand gesture recognition, 2016<br />
<br />
<br />
===IIS Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/huang.en.html Qiuting Huang] ---><br />
<!-- : [http://lne.ee.ethz.ch/en/general-information/people/professor.html Vanessa Wood] ---><br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/mluisier.en.html Mathieu Luisier] ---><br />
<!-- : [http://www.iis.ee.ethz.ch/portrait/staff/schenk.en.html Andreas Schenk] ---><br />
<!-- : [http://www.dz.ee.ethz.ch/en/general-information/about/staff/uid/364.html Hubert Kaeslin] ---><br />
[[#top|↑ top]]<br />
<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Design Review]]'''<br />
* '''[[Coding Guidelines]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
<br />
[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Xiaywang]]<br />
[[Category:Available]]<br />
[[Category:Hyper-dimensional Computing]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
[[Category:Analog]]<br />
[[Category:Nano-TCAD]]<br />
[[Category:Nano Electronics]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=Compression_of_iEEG_Data&diff=6923Compression of iEEG Data2021-09-16T07:28:23Z<p>Herschmi: </p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]][[Category:Available]] [[Category:2020]][[Category:Hot]][[Category:Human Intranet]][[Category:xiaywang]][[Category:Epilepsy]]<br />
<br />
==Description==<br />
Seizure detection systems hold promise for improving the quality of life for patients with epilepsy, which afflicts nearly 1% of the world’s population. High-resolution intracranial Electroencephalography (iEEG) enables the detection and localization of such seizures. When aiming for a low-power implanted system, the large amount of data has to be reduced efficiently. iEEG signals are sparse and have been successfully compressed using well-established encoders such as the Discrete Wavelet Transform (DWT) or Non-Negative Matrix Factorization (NNMF) [1]. Due to their recent success, however, convolutional neural networks (CNNs) are receiving more attention and have been shown to be a viable option for compressing EEG signals [2]. This project compares deep convolutional autoencoders with state-of-the-art DWT and NNMF for compressing iEEG data from long-term recordings of epileptic patients. We use the publicly available long-term dataset [3] consisting of a total of 2656 hours of iEEG recordings.<br />
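As a minimal sketch of the deep-learning side of this comparison, the PyTorch snippet below defines a toy 1D convolutional autoencoder for single-channel iEEG windows; the window length, assumed sampling rate, channel counts, and the resulting 2x compression ratio are illustrative placeholders, not the architecture to be developed in the thesis.<br />
<pre>
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Toy 1D convolutional autoencoder for single-channel iEEG windows."""
    def __init__(self):
        super().__init__()
        # Encoder: two strided convolutions shrink 512 samples to a 2 x 128 latent code.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(8, 2, kernel_size=7, stride=2, padding=3),
        )
        # Decoder mirrors the encoder with transposed convolutions.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(2, 8, kernel_size=7, stride=2, padding=3, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(8, 1, kernel_size=7, stride=2, padding=3, output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)                  # compressed latent representation
        return self.decoder(z), z

# Reconstruct a batch of 1-second windows sampled at an assumed 512 Hz.
model = ConvAutoencoder()
x = torch.randn(16, 1, 512)
x_hat, z = model(x)
loss = nn.functional.mse_loss(x_hat, x)      # reconstruction objective
print(x.shape, z.shape, x_hat.shape, float(loss))
</pre>
The achievable compression ratio and reconstruction quality of such a model can then be compared directly against DWT and NNMF on the same windows.<br />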
<br />
<br />
===Status: Available ===<br />
Looking for a student for a Master's thesis or Semester project. <br />
: Supervision: [[:User:xiaywang|Xiaying Wang]]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python & C Programming<br />
<br />
<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
<br />
===Literature===<br />
* [https://pubmed.ncbi.nlm.nih.gov/29040672/] M. Baud, et al., Unsupervised Learning of Spatiotemporal Interictal Discharges in Focal Epilepsy, 2018<br />
* [https://ieeexplore.ieee.org/document/8450511] A. Al-Marridi, et al., Convolutional Autoencoder Approach for EEG Compression and Reconstruction in m-Health Systems, 2018<br />
* [http://ieeg-swez.ethz.ch/] iEEG-SWEZ data base<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=Hardware_Constrained_Neural_Architechture_Search&diff=6922Hardware Constrained Neural Architechture Search2021-09-16T07:27:13Z<p>Herschmi: </p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]] [[Category:Available]] [[Category:2020]][[Category:Hot]][[Category:Human Intranet]][[Category:xiaywang]][[Category:BCI]]<br />
[[File:FBNet.png|400px|thumb|right|Differentiable Neural Architecture Search [1]]]<br />
==Description==<br />
Designing good and efficient neural networks is challenging: most tasks require models to be highly accurate and robust, as well as compact. These models often also have constraints on energy usage, memory consumption, and latency, which makes the search space for manual design combinatorially large. A method of tackling the problem of manual design is to use a neural architecture search (NAS) instead. Many different flavors of NAS exist, such as differentiable NAS (DNAS) [1], NAS methods that utilize evolutionary algorithms, and NAS methods that use reinforcement learning [2]. An interesting and exciting feature of NAS is the ability to include hardware constraints, e.g., on power consumption and/or memory usage, to guide the search process for state-of-the-art neural networks [3].<br />
<br />
<br />
Compact and energy-efficient machine learning models can be embedded into small microprocessors for inference. A task where having small models is especially interesting is brain-computer interface (BCI) classification. A brain-computer interface is a device that enables communication between the human brain and an external device. It aims to recognize the human’s intentions from spatiotemporal neural activity typically recorded by a large set of electroencephalogram (EEG) electrodes. What makes it particularly challenging, however, is its susceptibility to errors in the recognition of human intentions, especially during motor imagery (MI). The underlying reason is the high inter-subject variance, which makes it difficult to build one universal model for all subjects.<br />
<br />
<br />
In this project, the student explores and tests a NAS method in which hardware constraints guide the search for state-of-the-art neural networks on BCI-related tasks.<br />
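To illustrate how a hardware constraint can enter a differentiable search, the sketch below relaxes the choice between a few candidate operations with a Gumbel-softmax and adds an expected-latency term to the loss, in the spirit of FBNet [1]; the candidate operations, the per-op latency numbers, and the penalty weight are made-up placeholders rather than measured values.<br />
<pre>
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Differentiable choice between candidate ops, weighted by architecture parameters."""
    def __init__(self, channels, latencies):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),   # candidate 0
            nn.Conv2d(channels, channels, 5, padding=2),   # candidate 1
            nn.Identity(),                                 # candidate 2 (skip)
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))       # architecture params
        self.register_buffer("latencies", torch.tensor(latencies))  # per-op cost lookup

    def forward(self, x):
        w = F.gumbel_softmax(self.alpha, tau=1.0, hard=False)  # relaxed op choice
        out = sum(wi * op(x) for wi, op in zip(w, self.ops))
        expected_latency = (w * self.latencies).sum()          # differentiable HW cost
        return out, expected_latency

# One search step: the task loss plus a latency penalty updates weights and alphas jointly.
layer = MixedOp(channels=8, latencies=[3.0, 7.0, 0.1])  # made-up per-op latencies (ms)
x, target = torch.randn(4, 8, 16, 16), torch.randn(4, 8, 16, 16)
out, lat = layer(x)
loss = F.mse_loss(out, target) + 0.1 * lat
loss.backward()
print(float(lat), layer.alpha.grad)
</pre>
In a full search, the latency table would be measured on the target hardware, and the architecture parameters would be trained jointly with the network weights before the final architecture is discretized.<br />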
<br />
===Status: Available ===<br />
Looking for a student for a Semester project. <br />
<br />
: Supervision:[[:User:Thoriri | Thorir Mar Ingolfsson]], [[:User:xiaywang|Xiaying Wang]]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Implementation<br />
<br />
===Literature===<br />
* [https://arxiv.org/pdf/1812.03443.pdf] Wu et al., FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search<br />
* [https://arxiv.org/pdf/1807.11626.pdf] Tan et al., MnasNet: Platform-Aware Neural Architecture Search for Mobile<br />
* [https://arxiv.org/pdf/1906.07214v1.pdf] Vineeth et al., Hardware Aware Neural Network Architectures (using FBNet)<br />
<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
[[Category:Thoriri]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---><br />
<br />
[[Category:Thoriri]]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=Hardware_Constrained_Neural_Architechture_Search&diff=6921Hardware Constrained Neural Architechture Search2021-09-16T07:26:50Z<p>Herschmi: </p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]] [[Category:Available]] [[Category:2020]][[Category:Hot]][[Category:Human Intranet]][[Category:xiaywang]][[Category:BCI]]<br />
[[File:FBNet.png|400px|thumb|right|Differentiable Neural Architecture Search [1]]]<br />
==Description==<br />
Designing good and efficient neural networks is challenging: most tasks require models to be highly accurate and robust, as well as compact. These models often also have constraints on energy usage, memory consumption, and latency, which makes the search space for manual design combinatorially large. A method of tackling the problem of manual design is to use a neural architecture search (NAS) instead. Many different flavors of NAS exist, such as differentiable NAS (DNAS) [1], NAS methods that utilize evolutionary algorithms, and NAS methods that use reinforcement learning [2]. An interesting and exciting feature of NAS is the ability to include hardware constraints, e.g., on power consumption and/or memory usage, to guide the search process for state-of-the-art neural networks [3].<br />
<br />
<br />
Compact and energy-efficient machine learning models can be embedded into small microprocessors for inference. A task where having small models is especially interesting is brain-computer interface (BCI) classification. A brain-computer interface is a device that enables communication between the human brain and an external device. It aims to recognize the human’s intentions from spatiotemporal neural activity typically recorded by a large set of electroencephalogram (EEG) electrodes. What makes it particularly challenging, however, is its susceptibility to errors in the recognition of human intentions, especially during motor imagery (MI). The underlying reason is the high inter-subject variance, which makes it difficult to build one universal model for all subjects.<br />
<br />
<br />
In this project, the student explores and tests a NAS method in which hardware constraints guide the search for state-of-the-art neural networks on BCI-related tasks.<br />
<br />
===Status: Available ===<br />
Looking for a student for a Semester project. <br />
<br />
: Supervision:[[:User:Thoriri | Thorir Mar Ingolfsson]], [[:User:xiaywang|Xiaying Wang]]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Implementation<br />
<br />
===Literature===<br />
* [https://arxiv.org/pdf/1812.03443.pdf] Wu et al., FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search<br />
* [https://arxiv.org/pdf/1807.11626.pdf] Tan et al., MnasNet: Platform-Aware Neural Architecture Search for Mobile<br />
* [https://arxiv.org/pdf/1906.07214v1.pdf] Vineeth et al., Hardware Aware Neural Network Architectures (using FBNet)<br />
<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
[[Category:Thoriri]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---><br />
<br />
[[Category:Herschmi]]<br />
[[Category:Thoriri]]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=Compression_of_iEEG_Data&diff=6920Compression of iEEG Data2021-09-16T07:26:03Z<p>Herschmi: </p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]][[Category:Available]] [[Category:2020]][[Category:Hot]][[Category:Human Intranet]][[Category:xiaywang]][[Category:Herschmi]][[Category:Epilepsy]]<br />
<br />
==Description==<br />
Seizure detection systems hold promise for improving the quality of life for patients with epilepsy, which afflicts nearly 1% of the world’s population. High-resolution intracranial Electroencephalography (iEEG) enables the detection and localization of such seizures. When aiming for a low-power implanted system, the large amount of data has to be reduced efficiently. iEEG signals are sparse and have been successfully compressed using well-established encoders such as the Discrete Wavelet Transform (DWT) or Non-Negative Matrix Factorization (NNMF) [1]. Due to their recent success, however, convolutional neural networks (CNNs) are receiving more attention and have been shown to be a viable option for compressing EEG signals [2]. This project compares deep convolutional autoencoders with state-of-the-art DWT and NNMF for compressing iEEG data from long-term recordings of epileptic patients. We use the publicly available long-term dataset [3] consisting of a total of 2656 hours of iEEG recordings.<br />
<br />
<br />
===Status: Available ===<br />
Looking for a student for a Master's thesis or Semester project. <br />
: Supervision: [[:User:xiaywang|Xiaying Wang]]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python & C Programming<br />
<br />
<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
<br />
===Literature===<br />
* [https://pubmed.ncbi.nlm.nih.gov/29040672/] M. Baud, et al., Unsupervised Learning of Spatiotemporal Interictal Discharges in Focal Epilepsy, 2018<br />
* [https://ieeexplore.ieee.org/document/8450511] A. Al-Marridi, et al., Convolutional Autoencoder Approach for EEG Compression and Reconstruction in m-Health Systems, 2018<br />
* [http://ieeg-swez.ethz.ch/] iEEG-SWEZ data base<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=Exploring_feature_selection_and_classification_algorithms_for_ultra-low-power_closed-loop_systems_for_epilepsy_control&diff=6919Exploring feature selection and classification algorithms for ultra-low-power closed-loop systems for epilepsy control2021-09-16T07:25:14Z<p>Herschmi: </p>
<hr />
<div>[[File:Ieeg seizure.png|thumb|300px]][[File:EpilepsyStim.jpg|thumb|300px]]<br />
=== Introduction ===<br />
The use of Implantable Medical Devices (IMDs) as therapeutic systems is gaining acceptance in the medical community and among patients. Following this trend, and taking advantage of improvements in microelectronic fabrication technologies, new systems are being proposed. The complexity of microelectronic implantable systems has significantly increased over the years; they now constitute true systems-on-chip that gather functionalities including electrical recording and stimulation, data pre-processing, wireless power and data transfer, and overall system control. Data is generally processed inside the implant in the digital domain. The major goal is to reliably detect markers of an abnormal signal, which allows tailoring a therapy that is delivered in a closed-loop manner. <br />
The most widely used data processing chain involves recording electrical signals, converting them into the digital domain, detecting features and their relative intensities, and subsequently controlling electrical stimulation. The effectiveness of the delivered therapy is verified in real time. This technique is used, for example, in the closed-loop control of epilepsy. Still, improving the efficiency of the detection is a key factor in the success of the therapy.<br />
<br />
=== Project Description ===<br />
The goal of this project is to optimize the feature selection algorithm and develop a more accurate and light-weight seizure detection algorithm. To this end, students will:<br />
* Study literature about seizure detection and feature selection<br />
* Optimize feature selection algorithms<br />
* Implement machine learning algorithms for seizure detection based on the selected features (see the sketch after this list). Both traditional machine learning and deep learning algorithms can be explored.<br />
* Test on iEEG dataset<br />
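A minimal sketch of the feature-extraction, feature-selection, and classification steps is shown below (scikit-learn on synthetic stand-in data); the hand-crafted features, window length, assumed sampling rate, and classifier choice are illustrative assumptions rather than the algorithms to be optimized in this project.<br />
<pre>
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def window_features(win, fs=512):
    """Hand-crafted features for one single-channel iEEG window."""
    line_length = np.sum(np.abs(np.diff(win)))
    variance = np.var(win)
    spectrum = np.abs(np.fft.rfft(win)) ** 2
    freqs = np.fft.rfftfreq(win.size, d=1.0 / fs)
    band_power = [spectrum[(freqs >= lo) & (freqs < hi)].sum()
                  for lo, hi in [(1, 4), (4, 8), (8, 13), (13, 30), (30, 100)]]
    return np.array([line_length, variance, *band_power])

# Toy data: 200 one-second windows at 512 Hz with random labels (stand-in for real iEEG).
rng = np.random.default_rng(0)
windows = rng.standard_normal((200, 512))
labels = rng.integers(0, 2, size=200)          # 0 = interictal, 1 = ictal
X = np.stack([window_features(w) for w in windows])

# Keep the 4 most informative features, then train a light-weight classifier.
clf = make_pipeline(SelectKBest(mutual_info_classif, k=4),
                    LogisticRegression(max_iter=1000))
clf.fit(X, labels)
print(clf.score(X, labels))
</pre>
On real iEEG data, the selected feature subset and the classifier would of course be evaluated on held-out seizures rather than on the training windows.<br />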
<br />
<br />
===Status: Available===<br />
:Looking for Semester and Master Project Students<br />
:Supervision: [[:User:Liaoj | Jiawei Liao]], Reza Ranjandish <br />
===Prerequisites===<br />
* Machine Learning<br />
* Python Programming<br />
===Character===<br />
* 20% literature review<br />
* 40% Feature selection algorithm optimization <br />
* 40% Seizure detection algorithm development<br />
<br />
===Professor===<br />
Prof. Taekwang Jang <[mailto:tjang@ethz.ch tjang@ethz.ch]><br />
<br />
=== Reference===<br />
<br />
<br />
[[#top|↑ top]]<br />
[[Category:EECIS]]<br />
[[Category:Available]]<br />
[[Category:2021]]<br />
[[Category:Liaoj]]<br />
[[Category:Digital]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Analog]]<br />
[[Category:Nano-TCAD]]<br />
[[Category:Nano Electronics]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
[[Category:PULP]]<br />
[[Category:ArmaSuisse]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=Data_Augmentation_Techniques_in_Biosignal_Classification&diff=6918Data Augmentation Techniques in Biosignal Classification2021-09-16T07:24:21Z<p>Herschmi: </p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]][[Category:Master Thesis]] [[Category:Available]] [[Category:2020]][[Category:Hot]][[Category:Human Intranet]][[Category:xiaywang]][[Category:Epilepsy]][[Category:BCI]]<br />
<br />
==Description==<br />
In many biosignal classification tasks, simple and robust classifiers are preferred over more sophisticated and potentially more accurate ones. This is mostly due to the lack of labeled training data: especially for biomedical applications, data acquisition and labeling are very expensive and time-consuming. In this thesis, the student explores data augmentation methods [1][2] to improve learning in biosignal classification tasks. Applications range from ECG to EEG [3] classification. <br />
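As a minimal sketch of the kind of label-preserving transforms surveyed in [1], the NumPy snippet below jitters, rescales, and time-shifts an EEG trial; the channel count, trial length, and parameter values are illustrative assumptions that would need to be tuned per signal type.<br />
<pre>
import numpy as np

rng = np.random.default_rng(0)

def jitter(x, sigma=0.05):
    """Add Gaussian noise to every sample."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1):
    """Multiply each channel by a random factor close to 1."""
    factors = rng.normal(1.0, sigma, size=(x.shape[0], 1))
    return x * factors

def time_shift(x, max_shift=25):
    """Circularly shift the window by a random number of samples."""
    return np.roll(x, rng.integers(-max_shift, max_shift + 1), axis=-1)

def augment(x):
    """Compose the transforms to create one new training example."""
    return time_shift(scale(jitter(x)))

# Toy EEG trial: 22 channels x 1000 samples (e.g., 4 s at 250 Hz).
trial = rng.standard_normal((22, 1000))
augmented = augment(trial)
print(trial.shape, augmented.shape)
</pre>
Each transform keeps the class label unchanged, so augmented trials can simply be appended to the training set of the downstream classifier.<br />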
<br />
<br />
<br />
===Status: Available ===<br />
Looking for student for Master's thesis or Semester project. <br />
: Supervision: [[:User:xiaywang|Xiaying Wang]]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python & C Programming<br />
<br />
<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
<br />
===Literature===<br />
* [https://arxiv.org/abs/2002.12478] Q. Wen, et al., Time Series Data Augmentation for Deep Learning: A Survey, 2020<br />
* [https://arxiv.org/abs/1801.02730] M. M. Krell, et al., Data Augmentation for Brain-Computer Interfaces: Analysis on Event-Related Potentials Data, 2018<br />
* [https://arxiv.org/abs/2006.00622] T. M. Ingolfsson, et al., EEG-TCNet: An Accurate Temporal Convolutional Network for Embedded Motor-Imagery Brain-Machine Interfaces, 2020<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=Data_Augmentation_Techniques_in_Biosignal_Classification&diff=6917Data Augmentation Techniques in Biosignal Classification2021-09-16T07:24:07Z<p>Herschmi: </p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]][[Category:Master Thesis]] [[Category:Available]] [[Category:2020]][[Category:Hot]][[Category:Human Intranet]][[Category:xiaywang]][[Category:Herschmi]][[Category:Epilepsy]][[Category:BCI]]<br />
<br />
==Description==<br />
In many biosignal classification tasks, simple and robust classifiers are preferred over more sophisticated and potentially more accurate ones. This is mostly due to the lack of labeled training data: especially for biomedical applications, data acquisition and labeling are very expensive and time-consuming. In this thesis, the student explores data augmentation methods [1][2] to improve learning in biosignal classification tasks. Applications range from ECG to EEG [3] classification. <br />
<br />
<br />
<br />
===Status: Available ===<br />
Looking for student for Master's thesis or Semester project. <br />
: Supervision: [[:User:xiaywang|Xiaying Wang]]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python & C Programming<br />
<br />
<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
<br />
===Literature===<br />
* [https://arxiv.org/abs/2002.12478] Q. Wen, et al., Time Series Data Augmentation for Deep Learning: A Survey, 2020<br />
* [https://arxiv.org/abs/1801.02730] M. M. Krell, et al., Data Augmentation for Brain-Computer Interfaces: Analysis on Event-Related Potentials Data, 2018<br />
* [https://arxiv.org/abs/2006.00622] T. M. Ingolfsson, et al., EEG-TCNet: An Accurate Temporal Convolutional Network for Embedded Motor-Imagery Brain-Machine Interfaces, 2020<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=Combining_Spiking_Neural_Networks_with_Hyperdimensional_Computing_for_Autonomous_Navigation&diff=6916Combining Spiking Neural Networks with Hyperdimensional Computing for Autonomous Navigation2021-09-16T07:23:48Z<p>Herschmi: /* Status: Available */</p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]][[Category:Master Thesis]] [[Category:2020]][[Category:Hot]][[Category:Adimauro]]<br />
<br />
==Description==<br />
Over the last 10 years, Artificial Neural Networks (ANNs) have revolutionized many scientific fields by solving very difficult practical problems. Convolutional Neural Networks (CNNs) are a particularly popular approach that reaches state-of-the-art accuracy in many different machine learning tasks with a reasonably low memory footprint. In recent years, a new category of efficient sensors has attracted growing interest; ultra-low-power (ULP) event-based cameras and audio sensors belong to this category. To efficiently exploit the nature of the data produced by such sensors, a paradigm shift in the way data are acquired and processed may be unavoidable. For this reason, research communities have started focusing on less conventional computing paradigms, such as brain-inspired, event-driven computing. Here, both Spiking Neural Networks (SNNs) and Hyperdimensional Computing (HDC) have shown promising results in processing event-based data. Specifically, SNNs have been shown to extract robust features from event-driven cameras [1], whereas HDC classifiers have proven capable of updating their simple models online to cope with environmental changes during operation [2]. <br />
<br />
In this project, you combine the best of both worlds (SNNs for feature extraction and HDC for regression) for steering-angle prediction based on event-driven cameras. You start by applying an existing SNN to the publicly available MVSEC dataset [3], and later extend it with an HDC regressor. <br />
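As a concrete, deliberately simplified illustration of the HDC regression part, the following NumPy sketch encodes an SNN feature vector into a bipolar hypervector and predicts a steering angle by nearest-prototype search. Dimensionalities, names, and the level-quantization scheme are our own assumptions for illustration, not the method of [2]: <br />
<syntaxhighlight lang="python">
import numpy as np

D = 10000                                            # hypervector dimensionality
rng = np.random.default_rng(0)

n_feat, n_levels = 64, 21                            # SNN feature size and quantization levels (placeholders)
item_hvs = rng.choice([-1, 1], size=(n_feat, D))     # one random hypervector per feature channel
level_hvs = rng.choice([-1, 1], size=(n_levels, D))  # one random hypervector per quantized level

def encode(features):
    """Bind each feature channel to its quantized level, bundle into one bipolar hypervector."""
    lv = np.clip(((features + 1) / 2 * (n_levels - 1)).astype(int), 0, n_levels - 1)
    return np.sign((item_hvs * level_hvs[lv]).sum(axis=0))

def train_prototypes(features_per_bin):
    """Bundle the encodings of all training samples that fall into the same steering-angle bin."""
    return np.array([np.sign(sum(encode(f) for f in bin_samples)) for bin_samples in features_per_bin])

def predict(features, prototypes, bin_centers):
    """Return the center of the angle bin whose prototype is most similar to the query."""
    sims = prototypes @ encode(features) / D
    return bin_centers[np.argmax(sims)]
</syntaxhighlight>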
<br />
<br />
===Status: Available ===<br />
: Looking for 1-2 students for a semester project. <br />
: Supervision: [[:User:Adimauro | Alfio Di Mauro]]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Implementation<br />
<br />
===Literature===<br />
* [https://arxiv.org/abs/2003.02790] Gehrig et al., Event-Based Angular Velocity Regression with Spiking Networks<br />
* [https://iis-people.ee.ethz.ch/~herschmi/ISLPED20.pdf] Hersche et al., Integrating Event-based Dynamic Vision Sensors with Sparse Hyperdimensional Computing: A Low-power Accelerator with Online Learning Capability<br />
* [https://daniilidis-group.github.io/mvsec/] MVSEC dataset<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---><br />
<br />
[[Category:Herschmi]]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=Combining_Spiking_Neural_Networks_with_Hyperdimensional_Computing_for_Autonomous_Navigation&diff=6915Combining Spiking Neural Networks with Hyperdimensional Computing for Autonomous Navigation2021-09-16T07:23:34Z<p>Herschmi: </p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]][[Category:Master Thesis]] [[Category:2020]][[Category:Hot]][[Category:Adimauro]]<br />
<br />
==Description==<br />
Over the last 10 years, Artificial Neural Networks (ANNs) have revolutionized many scientific fields by solving very difficult practical problems. Convolutional Neural Networks (CNNs) are a particularly popular approach that reaches state-of-the-art accuracy in many different machine learning tasks with a reasonably low memory footprint. In recent years, a new category of efficient sensors has attracted growing interest; ultra-low-power (ULP) event-based cameras and audio sensors belong to this category. To efficiently exploit the nature of the data produced by such sensors, a paradigm shift in the way data are acquired and processed may be unavoidable. For this reason, research communities have started focusing on less conventional computing paradigms, such as brain-inspired, event-driven computing. Here, both Spiking Neural Networks (SNNs) and Hyperdimensional Computing (HDC) have shown promising results in processing event-based data. Specifically, SNNs have been shown to extract robust features from event-driven cameras [1], whereas HDC classifiers have proven capable of updating their simple models online to cope with environmental changes during operation [2]. <br />
<br />
In this project, you combine the best of both worlds (SNNs for feature extraction and HDC for regression) for steering-angle prediction based on event-driven cameras. You start by applying an existing SNN to the publicly available MVSEC dataset [3], and later extend it with an HDC regressor. <br />
<br />
<br />
===Status: Available ===<br />
: Looking for 1-2 students for a semester project. <br />
: Supervision: [[:User:Herschmi | Michael Hersche]], [[:User:Adimauro | Alfio Di Mauro]]<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Implementation<br />
<br />
===Literature===<br />
* [https://arxiv.org/abs/2003.02790] Gehrig et al., Event-Based Angular Velocity Regression with Spiking Networks<br />
* [https://iis-people.ee.ethz.ch/~herschmi/ISLPED20.pdf] Hersche et al., Integrating Event-based Dynamic Vision Sensors with Sparse Hyperdimensional Computing: A Low-power Accelerator with Online Learning Capability<br />
* [https://daniilidis-group.github.io/mvsec/] MVSEC dataset<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---><br />
<br />
[[Category:Herschmi]]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=BCI-controlled_Drone&diff=6914BCI-controlled Drone2021-09-16T07:22:55Z<p>Herschmi: </p>
<hr />
<div>[[Category:Digital]][[Category:Available]][[Category:Semester Thesis]] [[Category:Master Thesis]] [[Category:2019]][[Category:Hot]][[Category:Human Intranet]][[Category:BCI]][[Category:Drone]][[Category:Xiaywang]]<br />
[[File:biowolf_drone.jpg|thumb|300px]]<br />
==Description==<br />
A brain–computer interface (BCI) is a device that enables communication between the human brain and an external device. It aims to recognize the human’s intentions from spatiotemporal neural activity typically recorded by a large set of electroencephalogram (EEG) electrodes. What makes it particularly challenging, however, is its susceptibility to errors in the recognition of human intentions, especially during motor imagery (MI). The underlying reason is the high inter-subject variance, which makes it difficult to build one universal model for all subjects. <br />
<br />
In this project, the student works on the acquisition and processing of electroencephalography (EEG) data to control a drone using a Brain-Machine Interface (BMI) with motor imagery (MI). <br />
This includes:<br />
* Set up an experimental protocol to acquire EEG data under the motor imagery paradigm<br />
* Record data from multiple subjects (potentially from our lab)<br />
* Analyse the data and apply a standard classifier [1,2] (a minimal sketch is given after this list)<br />
* Move forward to online asynchronous BCI experiments<br />
* Potentially control a drone<br />
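For the classification step above, a minimal sketch of a simple baseline pipeline (log-variance band-power features plus LDA, using scikit-learn) could look as follows. This is a simplification for illustration, not the exact methods of [1,2], and all data shapes are placeholders: <br />
<syntaxhighlight lang="python">
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def band_power(epochs):
    """Log-variance per channel as a crude band-power feature (epochs: [trials, channels, time])."""
    return np.log(np.var(epochs, axis=2))

# epochs: band-pass filtered MI trials; labels: imagined-movement class per trial (placeholders)
epochs = np.random.randn(100, 16, 500)
labels = np.random.randint(0, 2, size=100)

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, band_power(epochs), labels, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
</syntaxhighlight>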
<br />
<br />
===Status: Available ===<br />
: Looking for 1-2 students for a semester project or group project. <br />
: Supervision: [[:User:Xiaywang | Xiaying Wang]] <br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python and C Programming<br />
* Embedded Systems<br />
<br />
<br />
===Character===<br />
: 15% Theory<br />
: 45% Implementation<br />
: 40% Testing<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
<br />
===Literature===<br />
* [https://ieeexplore.ieee.org/abstract/document/8553378] M. Hersche, et al., Fast and Accurate Multiclass Inference for MI-BCIs Using Large Multiscale Temporal and Spectral Features, 2018 <br />
* [https://arxiv.org/abs/2004.00077] X. Wang, et. al., An Accurate EEGNet-based Motor-Imagery Brain-Computer Interface for Low-Power Edge Computing, 2020<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=BCI-controlled_Drone&diff=6913BCI-controlled Drone2021-09-16T07:22:24Z<p>Herschmi: </p>
<hr />
<div>[[Category:Digital]][[Category:Available]][[Category:Semester Thesis]] [[Category:Master Thesis]] [[Category:2019]][[Category:Hot]][[Category:Human Intranet]][[Category:BCI]][[Category:Drone]][[Category:Xiaywang]]<br />
[[File:biowolf_drone.jpg|thumb|300px]]<br />
==Description==<br />
A brain–computer interface (BCI) is a device that enables communication between the human brain and an external device. It aims to recognize the human’s intentions from spatiotemporal neural activity typically recorded by a large set of electroencephalogram (EEG) electrodes. What makes it particularly challenging, however, is its susceptibility to errors in the recognition of human intentions, especially during motor imagery (MI). The underlying reason is the high inter-subject variance, which makes it difficult to build one universal model for all subjects. <br />
<br />
In this project, the student works on the acquisition and processing of electroencephalography (EEG) data to control a drone using a Brain-Machine Interface (BMI) with motor imagery (MI). <br />
This includes:<br />
* Set up an experimental protocol to acquire EEG data under the motor imagery paradigm<br />
* Record data from multiple subjects (potentially from our lab)<br />
* Analyse the data and apply a standard classifier [1,2]<br />
* Move forward to online asynchronous BCI experiments<br />
* Potentially control a drone<br />
<br />
<br />
===Status: Available ===<br />
: Looking for 1-2 students for a semester project or group project. <br />
: Supervision: [[:User:Xiaywang | Xiaying Wang]] <br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python and C Programming<br />
* Embedded Systems<br />
<br />
<br />
===Character===<br />
: 15% Theory<br />
: 45% Implementation<br />
: 40% Testing<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
<br />
===Literature===<br />
* [https://ieeexplore.ieee.org/abstract/document/8553378] M. Hersche, et al., Fast and Accurate Multiclass Inference for MI-BCIs Using Large Multiscale Temporal and Spectral Features, 2018 <br />
* [https://arxiv.org/abs/2004.00077] X. Wang, et. al., An Accurate EEGNet-based Motor-Imagery Brain-Computer Interface for Low-Power Edge Computing, 2020<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---><br />
<br />
[[Category:Herschmi]]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=Contrastive_Learning_for_Self-supervised_Clustering_of_iEEG_Data_for_Epileptic_Patients&diff=6912Contrastive Learning for Self-supervised Clustering of iEEG Data for Epileptic Patients2021-09-16T07:22:06Z<p>Herschmi: </p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]][[Category:Master Thesis]] [[Category:Available]] [[Category:2020]][[Category:Hot]][[Category:Human Intranet]][[Category:xiaywang]][[Category:Epilepsy]]<br />
<br />
==Description==<br />
Epilepsy is a severe and prevalent chronic neurological disorder affecting 1–2% of the world's population. One-third of epilepsy patients continue to suffer from seizures despite the best possible pharmacological treatment. For these patients with so-called drug-resistant epilepsy, various algorithms based on intracranial electroencephalography (iEEG) recordings have been proposed to detect the onset of seizures. Training accurate models (e.g., convolutional neural networks) for the detection of seizure onsets requires a large amount of labeled data. However, labeling can be particularly challenging for data that are highly complex or noisy, resulting in poor-quality human annotations at best.<br />
<br />
A promising alternative is to train the models in a self-supervised way using contrastive learning [1], which has, for example, been able to learn different sleep states. This will not only improve seizure onset detection accuracy but also give important insights into the features learned by the model. You will start with a given model for seizure detection and train it with contrastive learning. We use the publicly available long-term dataset [2] consisting of a total of 2656 hours of iEEG recordings.<br />
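To make the training objective concrete, here is a minimal PyTorch sketch of a SimCLR-style contrastive (NT-Xent) loss over two augmented views of the same iEEG windows. This is a generic formulation for illustration, not necessarily the exact loss used in [1]: <br />
<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive (NT-Xent) loss over two augmented views of the same iEEG windows."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)       # [2N, d] unit-norm embeddings
    sim = z @ z.t() / temperature                            # pairwise cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))                    # ignore self-similarity
    # the positive for sample i is its other augmented view (i+n, respectively i-n)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# usage sketch: z1, z2 = encoder(augment(x)), encoder(augment(x)); loss = nt_xent_loss(z1, z2)
</syntaxhighlight>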
<br />
<br />
===Status: Available ===<br />
Looking for a student for a Master's thesis or semester project. <br />
: Supervision: [[:User:xiaywang|Xiaying Wang]]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python & C Programming<br />
<br />
<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
<br />
===Literature===<br />
* [https://arxiv.org/abs/1911.05419] H. Banville et al., Self-supervised representation learning from electroencephalography signals, 2019 <br />
* [http://ieeg-swez.ethz.ch/] iEEG-SWEZ data base<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=Deep_neural_networks_for_seizure_detection&diff=6911Deep neural networks for seizure detection2021-09-16T07:21:37Z<p>Herschmi: </p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]] [[Category:Available]] [[Category:2020]][[Category:Hot]][[Category:Human Intranet]][[Category:xiaywang]][[Category:Epilepsy]]<br />
[[File:Non-EEG Seizure.jpg|thumb|300px]]<br />
==Description==<br />
Epilepsy is one of the most prevalent chronic neurological disorders. One-third of patients with epilepsy continue to suffer from seizures despite pharmacological therapy. For these patients with drug-resistant epilepsy, efficient algorithms for seizure detection are needed, in particular during pre-surgical monitoring. Many efforts have been pursued in this direction, with the fabrication of several ASICs and the development of advanced machine- and deep-learning methods to optimize both the energy efficiency of devices that must operate for years and the accuracy of epilepsy detection.<br />
For time-series analysis, a wide variety of deep-learning approaches has emerged for efficient processing, such as InceptionTime [1], MultiScale-CNN [2], and Temporal Convolutional Networks (TCNs) [3].<br />
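To give an idea of the building blocks involved, the following is a minimal PyTorch sketch of a single dilated causal convolution block, the basic component of a TCN [3]. Channel counts, kernel size, and dilations are placeholders, not the architecture to be developed in the thesis: <br />
<syntaxhighlight lang="python">
import torch
import torch.nn as nn
import torch.nn.functional as F

class TCNBlock(nn.Module):
    """One dilated causal 1-D convolution with a residual connection."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation            # left-padding keeps the convolution causal
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):                                  # x: [batch, channels, time]
        out = self.conv(F.pad(x, (self.pad, 0)))           # pad only the past, never the future
        return self.relu(out + x)                          # residual connection

# Stacking blocks with dilations 1, 2, 4, ... grows the receptive field exponentially.
tcn = nn.Sequential(*[TCNBlock(32, dilation=2 ** i) for i in range(4)])
out = tcn(torch.randn(8, 32, 1024))                        # e.g. 8 windows, 32 channels, 1024 samples
</syntaxhighlight>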
<br />
The thesis is a 6-month full-time project with the following steps:<br />
<br />
1 - Implementation, in a high-level programming language (Python), of different deep learning algorithms for time-series classification. The initial targets are InceptionTime, MultiScale-CNN, Temporal Convolutional Networks, and a bidirectional LSTM.<br />
<br />
2 - Benchmarking of these algorithms on a large-scale dataset of epileptic patients collected at the Bern Inselspital [4].<br />
<br />
3 - Comparison with state-of-the-art methods (Local Binary Patterns + Hyperdimensional Computing [5], Short-Time Fourier Transform + Convolutional Neural Networks, and classical machine learning methods).<br />
<br />
4 - Characterization of the algorithms on different computing platforms, from the high-level operation count down to the number of cycles and the energy consumption on embedded devices (e.g., GAP8, a multi-core chip from GreenWaves Technology).<br />
<br />
<br />
===Status: Available ===<br />
Looking for one student for a Master's thesis. <br />
: Supervision: [[:User:xiaywang|Xiaying Wang]], [mailto:alessio.burrello@unibo.it Alessio Burrello]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python & C Programming<br />
<br />
<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
<br />
===Literature===<br />
* H. I. Fawaz et al., InceptionTime: Finding AlexNet for Time Series Classification [https://arxiv.org/abs/1909.04939]<br />
* Z. Cui et al., Multi-Scale Convolutional Neural Networks for Time Series Classification [https://arxiv.org/abs/1603.06995]<br />
* C. Lea et al., Temporal Convolutional Networks: A Unified Approach to Action Segmentation, [https://link.springer.com/chapter/10.1007/978-3-319-49409-8_7]<br />
* iEEG-SWEZ data base [http://ieeg-swez.ethz.ch/]<br />
* A. Burello et al., Laelaps: An Energy-Efficient Seizure Detection Algorithm from Long-term Human iEEG Recordings without False Alarms [https://ieeexplore.ieee.org/abstract/document/8715186]<br />
<br />
<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=Deep_neural_networks_for_seizure_detection&diff=6910Deep neural networks for seizure detection2021-09-16T07:20:55Z<p>Herschmi: </p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]] [[Category:Available]] [[Category:2020]][[Category:Hot]][[Category:Human Intranet]][[Category:xiaywang]][[Category:Epilepsy]]<br />
[[File:Non-EEG Seizure.jpg|thumb|300px]]<br />
==Description==<br />
Epilepsy is one of the most prevalent chronic neurological disorders. One-third of patients with epilepsy continue to suffer from seizures despite pharmacological therapy. For these patients with drug-resistant epilepsy, efficient algorithms for seizure detection are needed, in particular during pre-surgical monitoring. Many efforts have been pursued in this direction, with the fabrication of several ASICs and the development of advanced machine- and deep-learning methods to optimize both the energy efficiency of devices that must operate for years and the accuracy of epilepsy detection.<br />
For time-series analysis, a wide variety of deep-learning approaches has emerged for efficient processing, such as InceptionTime [1], MultiScale-CNN [2], and Temporal Convolutional Networks (TCNs) [3].<br />
<br />
The thesis is a 6-month full-time project with the following steps:<br />
<br />
1 - Implementation, in a high-level programming language (Python), of different deep learning algorithms for time-series classification. The initial targets are InceptionTime, MultiScale-CNN, Temporal Convolutional Networks, and a bidirectional LSTM.<br />
<br />
2 - Benchmarking of these algorithms on a large-scale dataset of epileptic patients collected at the Bern Inselspital [4].<br />
<br />
3 - Comparison with state-of-the-art methods (Local Binary Patterns + Hyperdimensional Computing [5], Short-Time Fourier Transform + Convolutional Neural Networks, and classical machine learning methods).<br />
<br />
4 - Characterization of the algorithms on different computing platforms, from the high-level operation count down to the number of cycles and the energy consumption on embedded devices (e.g., GAP8, a multi-core chip from GreenWaves Technology).<br />
<br />
<br />
===Status: Available ===<br />
Looking for one student for a Master's thesis. <br />
: Supervision: [[:User:Herschmi | Michael Hersche]], [[:User:xiaywang|Xiaying Wang]], [mailto:alessio.burrello@unibo.it Alessio Burrello]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python & C Programming<br />
<br />
<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
<br />
===Literature===<br />
* H. I. Fawaz et al., InceptionTime: Finding AlexNet for Time Series Classification [https://arxiv.org/abs/1909.04939]<br />
* Z. Cui et al., Multi-Scale Convolutional Neural Networks for Time Series Classification [https://arxiv.org/abs/1603.06995]<br />
* C. Lea et al., Temporal Convolutional Networks: A Unified Approach to Action Segmentation, [https://link.springer.com/chapter/10.1007/978-3-319-49409-8_7]<br />
* iEEG-SWEZ data base [http://ieeg-swez.ethz.ch/]<br />
* A. Burello et al., Laelaps: An Energy-Efficient Seizure Detection Algorithm from Long-term Human iEEG Recordings without False Alarms [https://ieeexplore.ieee.org/abstract/document/8715186]<br />
<br />
<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---><br />
<br />
[[Category:Herschmi]]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=Towards_global_Brain-Computer_Interfaces&diff=6909Towards global Brain-Computer Interfaces2021-09-16T07:20:09Z<p>Herschmi: </p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]] [[Category:Master Thesis]] [[Category:2019]][[Category:Hot]][[Category:Human Intranet]][[Category:BCI]]<br />
[[File:Emotiv-epoc-14-channel-mobile-eeg.jpg|thumb|300px]]<br />
==Description==<br />
A brain–computer interface (BCI) is a device that enables communication between the human brain and an external device. It aims to recognize the human’s intentions from spatiotemporal neural activity typically recorded by a large set of electroencephalogram (EEG) electrodes. What makes it particularly challenging, however, is its susceptibility to errors in the recognition of human intentions, especially during motor imagery (MI). The underlying reason is the high inter-subject variance, which makes it difficult to build one universal model for all subjects. <br />
<br />
This project aims to find a global model using hyperdimensional superposition of model weights [1]. A preliminary study [2] showed that we can find a global model for 9 subjects on a 4-class MI dataset. In this project, you apply hyperdimensional superposition to a much larger dataset with 109 subjects. Furthermore, you will apply the same methodology to different BCI paradigms at the same time, e.g., visual stimuli and motor imagery, aiming for a model able to process different brain signals simultaneously. <br />
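For intuition, here is a minimal NumPy sketch of the superposition-and-retrieval principle described in [1,2], using random placeholder weights; the actual work would superpose the parameters of trained subject-specific BCI models: <br />
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
d, n_subjects = 100000, 9

# one flattened parameter vector per subject-specific model (random placeholders here)
weights = rng.standard_normal((n_subjects, d))

# one random {-1, +1} context (binding) vector per subject
contexts = rng.choice([-1.0, 1.0], size=(n_subjects, d))

# superposition: all subject models are stored in a single parameter vector
superposed = (contexts * weights).sum(axis=0)

# retrieval for subject 3: unbinding with the same context gives that subject's weights
# plus zero-mean crosstalk noise from all other subjects
retrieved = contexts[3] * superposed
print(np.corrcoef(retrieved, weights[3])[0, 1])   # clearly positive, unlike for any other subject
</syntaxhighlight>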
<br />
<br />
<br />
===Status: Available ===<br />
: Looking for 1-2 students for a semester project or group project. <br />
: Supervision: [[:User:Herschmi | Michael Hersche]] [mailto:abbas@iis.ee.ethz.ch Abbas Rahimi]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python Programming<br />
<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Programming<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
<br />
===Literature===<br />
* [https://arxiv.org/abs/1902.05522#] Cheung et al., Superposition of many models into one<br />
* [https://www.research-collection.ethz.ch/bitstream/handle/20.500.11850/387117/bci_superposition.pdf?sequence=1&isAllowed=y] Hersche et. al., Compressing Subject-specific Brain–Computer Interface Models into One Model by Superposition in Hyperdimensional Space<br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---><br />
<br />
[[Category:Herschmi]]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=Memory_Augmented_Neural_Networks_in_Brain-Computer_Interfaces&diff=6908Memory Augmented Neural Networks in Brain-Computer Interfaces2021-09-16T07:19:11Z<p>Herschmi: </p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]][[Category:Master Thesis]] [[Category:Completed]] [[Category:2020]][[Category:Hot]][[Category:Human Intranet]][[Category:xiaywang]][[Category:Herschmi]][[Category:BCI]]<br />
<br />
==Description==<br />
Motor-imagery Brain-Computer Interfaces (MI-BCIs) aim to establish a communication channel between the human brain and an external device, based solely on the user's thoughts. Due to high inter-session and inter-user variance, however, users may need several training sessions while wearing EEG electrodes to reach an acceptable accuracy. This severely limits the use of pure convolutional neural network (CNN) architectures in this area and makes them less practical for multi-user deployment. A viable option is memory-augmented neural networks (MANNs) [1], which augment CNNs with an external binary memory. This opens up a large variety of options that can significantly improve the applicability of MI-BCIs, e.g.:<br />
<br />
* Add a new MI class on the fly without retraining the whole model.<br />
* Calibrate the model at the beginning of a new session to mitigate high inter-session variance in EEG. <br />
* Calibrate the model on a new, unseen user due to high inter-user variance.<br />
<br />
In this project, you start with a given CNN controller [2], augment it with an external memory, and explore all aforementioned options. <br />
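For intuition, here is a minimal PyTorch sketch of a key-value memory of the kind described in [1], with a nearest-neighbour read over binarized keys. The class and variable names are our own and not taken from the reference: <br />
<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

class BinaryKeyValueMemory:
    """Minimal external memory: store binarized embeddings from the CNN controller as keys,
    classify a query by the label of its most similar stored key."""
    def __init__(self):
        self.keys, self.values = [], []

    def write(self, embedding, label):
        self.keys.append(torch.sign(embedding.detach()))    # binarized support-set embedding
        self.values.append(label)

    def read(self, embedding):
        keys = torch.stack(self.keys)                        # [n_entries, d]
        sims = F.cosine_similarity(keys, embedding.unsqueeze(0), dim=1)
        return self.values[int(torch.argmax(sims))]

# usage sketch: memory.write(cnn(x_support), y_support); y_pred = memory.read(cnn(x_query))
# adding a new MI class or calibrating on a new session/user only requires new write() calls;
# the CNN controller itself is not retrained
</syntaxhighlight>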
<br />
<br />
===Status: In Progress ===<br />
Cyrill Künzi<br />
<br />
: Supervision:[[:User:Thoriri | Thorir Mar Ingolfsson]], [[:User:Herschmi | Michael Hersche]], [[:User:xiaywang|Xiaying Wang]]<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Implementation<br />
<br />
===Literature===<br />
* [https://ieeexplore.ieee.org/abstract/document/8715198] Laguna et al., Design of Hardware-Friendly Memory Enhanced Neural Networks<br />
* [https://arxiv.org/abs/1611.08024] Lawhern et. al., EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline==<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---><br />
<br />
[[Category:Herschmi]]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=Hyper-Dimensional_Computing_Based_Predictive_Maintenance&diff=6907Hyper-Dimensional Computing Based Predictive Maintenance2021-09-16T07:18:49Z<p>Herschmi: </p>
<hr />
<div>==Introduction== <br />
[[File:Brain-circuit.png|thumb]]<br />
<br />
Predictive maintenance is a key component of Industry 4.0: it is the process of determining a machine's current state of degradation and providing an estimate of its remaining useful lifetime. This allows replacing machine components in a fabrication pipeline only when there is an actual need for it, instead of the currently prevalent scheme of replacement at fixed maintenance intervals.<br />
Hyperdimensional computing (HDC) is an energy-efficient alternative to traditional machine learning techniques in which data is represented in the form of very wide vectors. With its holistic data representation, HDC is inherently error-resilient and requires only very little data to learn (one-/few-shot learning). It is thus a promising candidate for processing the raw input data from environmental sensors and solving the classification/regression problem of estimating the machine's current health state.<br />
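To make the ideas above concrete, here is a minimal, illustrative Python sketch of the three basic HDC operations (binding, bundling, similarity) on bipolar hypervectors; the dimensionality, the random seed and the variable names are placeholders and not part of the project specification.<br />
<syntaxhighlight lang="python">
import numpy as np

D = 10_000                      # hypervector dimensionality ("very wide vectors")
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector with i.i.d. +1/-1 components."""
    return rng.choice((-1, 1), size=D)

def bind(a, b):
    """Binding (element-wise multiplication): the result is dissimilar to both inputs."""
    return a * b

def bundle(hvs):
    """Bundling (element-wise majority): the result stays similar to every input."""
    return np.sign(np.sum(hvs, axis=0))

def similarity(a, b):
    """Normalized dot product in [-1, 1]; close to 0 for unrelated hypervectors."""
    return float(a @ b) / D

x, y, z = random_hv(), random_hv(), random_hv()
record = bundle([x, y, z])
print(similarity(record, x))            # clearly positive: x is "remembered"
print(similarity(record, random_hv()))  # approximately 0
pair = bind(x, y)
print(similarity(pair, x))              # approximately 0: binding hides its inputs
</syntaxhighlight>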
<br />
==Project Details==<br />
[[File:Ball-bearing-numbered.png|thumb]]<br />
<br />
In this project, you will develop a new classification algorithm based on hyperdimensional computing for the task of ball-bearing lifetime estimation. As ball bearings are omnipresent in industrial manufacturing pipelines, estimating their remaining useful life (RUL) is the key to condition-based maintenance of the machines, as opposed to replacement at fixed intervals independently of how worn out the parts actually are. Using vibration data from low-power accelerometers, you will first conduct experiments with GPU-accelerated code to come up with a set of discriminative features and design the classification scheme using the primitives of hyperdimensional computing. In a second step, the algorithm should be tuned so that it can be ported to an ultra-low-power HDC hardware accelerator that allows execution of HDC algorithms within a µW power envelope. <br />
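One hypothetical way such a classifier could be structured is sketched below, building on the HDC primitives shown earlier: feature channels and quantization levels get random item hypervectors, each sample is encoded by binding channels to levels and bundling, and degradation classes are represented by bundled prototypes. The feature names and the number of quantization levels are purely illustrative assumptions, not part of the project.<br />
<syntaxhighlight lang="python">
import numpy as np

D, LEVELS = 10_000, 16
rng = np.random.default_rng(1)
sign = lambda v: np.where(v >= 0, 1, -1)

# Item memory: a random hypervector per feature channel and per quantization level.
# The feature names are placeholders; any set of vibration features could be used.
FEATURES = ["rms", "kurtosis", "crest_factor", "band_energy"]
channel_hv = {f: rng.choice((-1, 1), size=D) for f in FEATURES}
level_hv = [rng.choice((-1, 1), size=D) for _ in range(LEVELS)]

def encode(sample):
    """Map a dict {feature: quantized level in 0..LEVELS-1} to one hypervector."""
    bound = [channel_hv[f] * level_hv[lvl] for f, lvl in sample.items()]  # bind
    return sign(np.sum(bound, axis=0))                                    # bundle

def train(samples, labels):
    """One prototype hypervector per degradation class (bundle of its samples)."""
    return {c: sign(sum(encode(s) for s, l in zip(samples, labels) if l == c))
            for c in set(labels)}

def classify(sample, prototypes):
    """Assign the class whose prototype is most similar to the encoded sample."""
    hv = encode(sample)
    return max(prototypes, key=lambda c: int(hv @ prototypes[c]))
</syntaxhighlight>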
<br />
===Useful Reading===<br />
*[http://redwood.berkeley.edu/vs265/kanerva09-hyperdimensional.pdf Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors]<br />
*[http://people.eecs.berkeley.edu/~abbas/papers/TCAS17.pdf High-dimensional Computing as a Nanoscalable Paradigm]<br />
*[https://mitpress.mit.edu/books/sparse-distributed-memory Pentti Kanerva. 1988. Sparse Distributed Memory. MIT Press, Cambridge, MA, USA]<br />
<br />
==Required Skills==<br />
To work on this project, you will need:<br />
* Some experience in software development using Python<br />
* Knowledge of the principles of machine learning and how to conduct performance evaluations of classification algorithms<br />
* An open mind and methodological problem-solving skills<br />
* Some insight into how digital ASICs work is highly beneficial (VLSI1, VLSI2)<br />
Other skills that you might find useful include:<br />
* Familiarity with working on Linux and in a command-line terminal<br />
If you want to work on this project, but you think that you do not match some of the required skills, we can give you some preliminary exercises to help you fill in the gaps.<br />
<br />
===Status: In Progress ===<br />
: Student: Lars Widmer<br />
: Supervision: [[:User:Meggiman | Manuel Eggimann]], [[:User:Herschmi | Michael Hersche]]<!--, [[:User:Fschuiki | Fabian Schuiki]]--><br />
: Date: Autumn 2020<br />
[[Category:Digital]] [[Category:Completed]] [[Category:Semester Thesis]] [[Category:Master Thesis]] [[Category:EmbeddedAI]][[Category:Herschmi]]<br />
<br />
===Character===<br />
: 35% Theory<br />
: 35% Implementation<br />
: 30% Evaluation<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
==Practical Details==<br />
<br />
===Meetings & Presentations===<br />
The students and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
<br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium.<br />
<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Design Review]]'''<br />
* '''[[Coding Guidelines]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
<!--<br />
==Literature==<br />
* [Andri2017] R. Andri et al., YodaNN: an Architecture for Ultra-Low Power Binary-Weight CNN Acceleration, [https://arxiv.org/pdf/1606.05487.pdf]<br />
* [Conti2017] F. Conti et al., An IoT Endpoint System-on-Chip for Secure and Energy-Efficient Near-Sensor Analytics, [https://arxiv.org/pdf/1612.05974.pdf]<br />
* [Intel2017] A. Zhou et al., Incremental Network Quantization: Towards Lossless CNNs with low-precision weights, [https://arxiv.org/pdf/1702.03044.pdf]<br />
* [Krizhevsky2012] A. Khrizevsky et al., ImageNet Classification with Deep Convolutional Neural Networks, [http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf]<br />
--><br />
<br />
==Links== <br />
* The EDA wiki with lots of information on the ETHZ ASIC design flow (internal only) [http://eda.ee.ethz.ch/]<br />
* The IIS/DZ coding guidelines [http://eda.ee.ethz.ch/index.php/Naming_Conventions]<br />
<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
[[Category:Analog]]<br />
[[Category:Nano-TCAD]]<br />
[[Category:Nano Electronics]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---></div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=Low_Latency_Brain-Machine_Interfaces&diff=6906Low Latency Brain-Machine Interfaces2021-09-16T07:18:07Z<p>Herschmi: </p>
<hr />
<div>[[Category:Digital]][[Category:Semester Thesis]][[Category:Master Thesis]] [[Category:Completed]] [[Category:2020]][[Category:Hot]][[Category:Human Intranet]][[Category:xiaywang]][[Category:Herschmi]][[Category:BCI]]<br />
<br />
==Description==<br />
Motor-imagery brain-computer interfaces (MI-BCIs) aim to establish a communication channel between the human brain and an external device based solely on the user's thoughts. The device aims to recognize the user's intentions from spatiotemporal neural activity, typically recorded by a large set of electroencephalogram (EEG) electrodes. What makes this particularly challenging is reliably detecting the user's intention to perform MI with a short delay. Two types of EEG features can be used to detect motor intention: sensorimotor rhythms (SMR) and movement-related cortical potentials (MRCP). SMR-based systems make use of event-related desynchronisation/synchronisation (ERD/ERS) behaviour and come with relatively long latency. Conversely, MRCP-based detection may be achieved with very short latency, possibly even with negative delay. This project analyses the delay of reliable MI onset detection using either SMR or MRCP features. <br />
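To illustrate the difference between the two feature families, here is a minimal, hypothetical Python sketch of how an SMR (band-power/ERD) feature and an MRCP (slow cortical potential) feature could be computed from an EEG epoch; the sampling rate, filter bands and window lengths are illustrative assumptions, not project specifications.<br />
<syntaxhighlight lang="python">
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250  # sampling rate in Hz (placeholder; depends on the actual EEG setup)

def bandpass(x, lo, hi, fs=FS):
    """Zero-phase band-pass filter along the last (time) axis."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x, axis=-1)

def smr_feature(epoch, baseline):
    """SMR/ERD feature: relative drop of mu-band (8-13 Hz) power w.r.t. a rest
    baseline. Estimating band power needs a longer window -> longer latency."""
    p_task = np.mean(bandpass(epoch, 8, 13) ** 2, axis=-1)
    p_rest = np.mean(bandpass(baseline, 8, 13) ** 2, axis=-1)
    return (p_task - p_rest) / p_rest        # negative values indicate ERD

def mrcp_feature(epoch):
    """MRCP feature: slow (0.1-3 Hz) negativity building up before movement
    onset, so it can in principle be detected with very short or negative delay."""
    slow = bandpass(epoch, 0.1, 3)
    return np.mean(slow[..., -int(0.5 * FS):], axis=-1)   # mean of the last 500 ms
</syntaxhighlight>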
<br />
<br />
===Status: In Progress ===<br />
Student: She Xinyang <br />
<br />
Supervision: [[:User:Herschmi | Michael Hersche]], [[:User:xiaywang|Xiaying Wang]], Simone Benatti, Victor Javier Kartsch<br />
<br />
===Prerequisites===<br />
* Machine Learning<br />
* Python<br />
<br />
===Character===<br />
: 20% Theory<br />
: 80% Implementation<br />
<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
[[#top|↑ top]]<br />
<br />
<!--<br />
==Detailed Task Description==<br />
<br />
===Meetings & Presentations===<br />
The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. <br />
Around the middle of the project there is a design review, where senior members of the lab review your work (bring all the relevant information, such as prelim. specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and decide whether further support is necessary. They also make the definite decision on whether the chip is actually manufactured (no reason to worry, if the project is on track) and whether more chip area, a different package, ... is provided. For more details confer to [http://eda.ee.ethz.ch/index.php/Design_review]. <br />
At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).<br />
<br />
===Deliverables===<br />
* description of the most promising architectures, and argumentation on the decision taken (as part of the report)<br />
* synthesizable, verified VHDL code<br />
* generated test vector files<br />
* synthesis scripts & relevant software models developed for verification<br />
* synthesis results and final chip layout (GDS II data), bonding diagram<br />
* datasheet (part of report)<br />
* presentation slides<br />
* project report (in digital form; a hard copy also welcome, but not necessary)<br />
---><br />
<!--<br />
===Timeline===<br />
To give some idea on how the time can be split up, we provide some possible partitioning: <br />
* Literature survey, building a basic understanding of the problem at hand, catch up on related work <br />
* Development of a working software-based implementation running on the Zynq's ARM core<br />
* Piece-by-piece off-loading of relevant tasks to the programmable logic<br />
* Implementation of data interfaces (software or hardware)<br />
* Report and presentation <br />
--><br />
<!-- 13.5 weeks total here --><br />
<br />
===Practical Details===<br />
* '''[[Project Plan]]'''<br />
* '''[[Project Meetings]]'''<br />
* '''[[Final Report]]'''<br />
* '''[[Final Presentation]]'''<br />
<br />
[[#top|↑ top]]<br />
<br />
<!-- <br />
<br />
COPY PASTE FROM THE LIST BELOW TO ADD TO CATEGORIES<br />
<br />
GROUP<br />
[[Category:Digital]]<br />
<br />
STATUS<br />
[[Category:Available]]<br />
[[Category:In progress]]<br />
[[Category:Completed]]<br />
[[Category:Hot]]<br />
<br />
TYPE OF WORK<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]]<br />
[[Category:PhD Thesis]]<br />
[[Category:Research]]<br />
<br />
NAMES OF EU/CTI/NT PROJECTS<br />
[[Category:UltrasoundToGo]]<br />
[[Category:IcySoC]]<br />
[[Category:PSocrates]]<br />
[[Category:UlpSoC]]<br />
[[Category:Qcrypt]]<br />
<br />
YEAR (IF FINISHED)<br />
[[Category:2010]]<br />
[[Category:2011]]<br />
[[Category:2012]]<br />
[[Category:2013]]<br />
[[Category:2014]]<br />
<br />
---><br />
<br />
[[Category:Herschmi]]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6905IBM Research2021-09-16T07:16:39Z<p>Herschmi: /* Available Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain a hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, the principles of such a HAS enable few-shot learning capabilities and extremely robust operation against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research in this direction (listed in the table below) spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
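As a hedged illustration of how a small set of vector operations can support symbolic-style queries such as variable binding and analogy making, the sketch below implements Kanerva's classic "dollar of Mexico" example with bipolar hypervectors; all names, the dimensionality and the seed are illustrative and not taken from the projects listed below.<br />
<syntaxhighlight lang="python">
import numpy as np

D = 10_000
rng = np.random.default_rng(2)
hv = lambda: rng.choice((-1, 1), size=D)
sign = lambda v: np.where(v >= 0, 1, -1)

# Roles and fillers as random hypervectors.
NAME, CAPITAL, CURRENCY = hv(), hv(), hv()
USA, WDC, DOLLAR = hv(), hv(), hv()
MEX, MXC, PESO = hv(), hv(), hv()

# Bind each role to its filler (variable binding) and bundle into one record.
usa = sign(NAME * USA + CAPITAL * WDC + CURRENCY * DOLLAR)
mexico = sign(NAME * MEX + CAPITAL * MXC + CURRENCY * PESO)

# "What is the dollar of Mexico?": unbind DOLLAR's role from the USA record,
# then apply that role to the Mexico record (binding is its own inverse here).
query = (usa * DOLLAR) * mexico
candidates = {"USA": USA, "Washington": WDC, "dollar": DOLLAR,
              "Mexico": MEX, "Mexico City": MXC, "peso": PESO}
print(max(candidates, key=lambda k: int(query @ candidates[k])))  # -> "peso"
</syntaxhighlight>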
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are being explored in which computation and storage are collocated. A fascinating new approach is that of computational memory, in which the physics of nanoscale memory devices is used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', conventional '''SRAM''' has also been proposed as an underlying storage technology for Computational Memories.<br />
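As a rough, purely illustrative sketch of the core operation such a computational memory accelerates, the Python snippet below models an analog matrix-vector multiplication on a crossbar with differential conductance pairs and additive device noise; the conductance range and noise level are assumptions, not measured device parameters.<br />
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)

def analog_mvm(W, x, g_max=25e-6, noise_pct=0.05):
    """Toy model of W @ x computed in place on a memristive crossbar.
    Each weight is stored as a differential pair of conductances (G+ and G-);
    applying the input as voltages and summing the column currents (Kirchhoff's
    law) yields the full matrix-vector product in one step, up to device noise."""
    w_max = float(np.max(np.abs(W))) or 1.0
    G_pos = np.clip(W, 0, None) / w_max * g_max   # positive part of each weight
    G_neg = np.clip(-W, 0, None) / w_max * g_max  # negative part of each weight
    G_pos = G_pos + rng.normal(0.0, noise_pct * g_max, G_pos.shape)  # programming/read noise
    G_neg = G_neg + rng.normal(0.0, noise_pct * g_max, G_neg.shape)
    i_out = G_pos @ x - G_neg @ x
    return i_out * w_max / g_max                  # rescale currents to weight units

W = rng.normal(size=(8, 16))
x = rng.normal(size=16)
print(np.allclose(analog_mvm(W, x, noise_pct=0.0), W @ x))  # exact without noise
print(analog_mvm(W, x))                                      # approximate with noise
</syntaxhighlight>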
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
<br />
| MA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || HAS || Hardware design and experiments<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || HAS || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || HAS || algorithmic/hardware design<br />
|-<br />
<br />
<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
|}<br />
<br />
<br />
<!--<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-Analog converters ('''DAC'''s) and Analog-to-Digital converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and analog domain, in which computationally expensive tasks, like Matrix-Vector Multiplications ('''MVM'''), are carried out with O(1) complexity. Each conversion costs a certain amount of energy and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system level requirements on ADC and DAC for optimal performance of Deep Neural Network inference using CM. Furthermore, the effects of noise, non-linearity and manufacturing tolerances shall be examined and counter measurements, like for example periodic digital ADC recalibration and digital post processing, shall be evaluated with regards to effectivity and energy costs.|| IMC || analog circuit design<br />
--><br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6081IBM Research2020-11-16T14:29:47Z<p>Herschmi: /* Prerequisites */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hyperdimensional Computing (HDC)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Hyperdimensional computing (HDC) is a brain-inspired computing paradigm based on representing information with hypervectors whose dimensionality is in the thousands. Hypervectors are holographic and (pseudo)random with independent and identically distributed (i.i.d.) components. The principles of HDC allow implementing efficient machine learning models that have shown few-shot learning capabilities, requiring less training data than conventional machine learning models to make accurate predictions. By its very nature, HDC is extremely robust against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as Phase-Change Memory (PCM) devices. By eliminating the von Neumann bottleneck, PCM-based in-memory computing enables building systems that can run HDC models at very high energy efficiency. This opens the door to a vast spectrum of further research in applications, algorithms, architecture and system development.<br />
<br />
<br />
HDC has also been successfully applied to few-shot meta-learning problems, where a controller is trained to distinguish previously unseen classes from only a few examples. The state-of-the-art accuracy in few-shot learning problems has been achieved by augmenting the controller with an explicit memory and guiding the controller to learn HDC representations. Exciting further research awaiting in this direction includes exploring compact representations of the explicit memory as well as novel cost functions and training algorithms to further improve the classification accuracy.<br />
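As a hedged, minimal sketch of what such an explicit memory could look like (purely illustrative, not the implementation used in the referenced papers), the snippet below stores one normalized key per support example and answers queries with a softmax-weighted cosine-similarity vote.<br />
<syntaxhighlight lang="python">
import numpy as np

class ExplicitMemory:
    """Key-value memory for few-shot classification. A learned controller would
    normally map raw inputs to the key vectors; here keys are handled directly."""
    def __init__(self, dim):
        self.keys = np.empty((0, dim))
        self.labels = []

    def write(self, key, label):
        """Support phase: store one normalized key per labelled example."""
        self.keys = np.vstack([self.keys, key / np.linalg.norm(key)])
        self.labels.append(label)

    def read(self, query):
        """Query phase: cosine similarities, softmax weighting, per-class vote."""
        sims = self.keys @ (query / np.linalg.norm(query))
        weights = np.exp(sims - sims.max())
        weights /= weights.sum()
        scores = {c: weights[[l == c for l in self.labels]].sum()
                  for c in set(self.labels)}
        return max(scores, key=scores.get)
</syntaxhighlight>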
<br />
===Useful Reading===<br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors]<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing]<br />
*[https://arxiv.org/pdf/2010.01939.pdf Robust High-dimensional Memory-augmented Neural Networks]<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I and VLSI II (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] || HDC || algorithmic design<br />
|-<br />
| MA|| Machine learning based on optimal transport using in-memory computing || [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing'''] || HDC || algorithmic design<br />
|-<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test'''] || HDC || algorithmic design<br />
|-<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-Analog converters ('''DAC'''s) and Analog-to-Digital converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and analog domain, in which computationally expensive tasks, like Matrix-Vector Multiplications ('''MVM'''), are carried out with O(1) complexity. Each conversion costs a certain amount of energy and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system level requirements on ADC and DAC for optimal performance of Deep Neural Network inference using CM. Furthermore, the effects of noise, non-linearity and manufacturing tolerances shall be examined and counter measurements, like for example periodic digital ADC recalibration and digital post processing, shall be evaluated with regards to effectivity and energy costs.|| IMC || analog circuit design<br />
|-<br />
| SA || Testing of a computational memory chip || style="text-align: justify;"|TBA || IMC || PCB Design <br /> and/or<br /> µ-code implementation <br />
|-<br />
| MA/SA || Neural Network Training on a <br /> computational memory chip || style="text-align: justify;"|TBA || IMC || algorithm/system design<br />and/or<br /> analog circuit design<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hyperdimensional Computing (HDC) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6080IBM Research2020-11-16T14:29:15Z<p>Herschmi: /* Available Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hyperdimensional Computing (HDC)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Hyperdimensional computing (HDC) is a brain-inspired computing paradigm based on representing information with hypervectors with dimensions in the thousands. Hypervectors are holographic and (pseudo)random with independent and identically distributed (i.i.d.) components. Principles of HDC allow implementing efficient machine learning models, where it has shown few-shot learning capabilities by requiring less training data than conventional machine learning models to do accurate predictions. By its very nature, HDC is extremely robust against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as Phase-Change Memory (PCM) devices. By eliminating non von-Neumann bottleneck, PCM-based in-memory computing systems enable building systems that can run HDC models at very high energy efficiencies. This opens doors to a vast spectrum of further research in applications, algorithms, architecture and system development.<br />
<br />
<br />
HDC has also been successfully applied in few-shot meta-learning problems - where a controller is trained to distinguish previously unseen classes by feeding only a few examples. The state-of-the-art accuracy in few-shot learning problems has been achieved by augmenting the controller with an explicit memory and guiding the controller to learning HDC representations. Exciting further research awaiting in this direction includes exploring compact representation of explicit memory and exploring novel cost functions and training algorithms to further improving the classification accuracy.<br />
<br />
===Useful Reading===<br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors]<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing]<br />
*[https://arxiv.org/pdf/2010.01939.pdf Robust High-dimensional Memory-augmented Neural Networks]<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*VLSI I and VLSI II (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] || HDC || algorithmic design<br />
|-<br />
| MA|| Machine learning based on optimal transport using in-memory computing' || [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing'''] || HDC || algorithmic design<br />
|-<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test'''] || HDC || algorithmic design<br />
|-<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-Analog converters ('''DAC'''s) and Analog-to-Digital converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and analog domain, in which computationally expensive tasks, like Matrix-Vector Multiplications ('''MVM'''), are carried out with O(1) complexity. Each conversion costs a certain amount of energy and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system level requirements on ADC and DAC for optimal performance of Deep Neural Network inference using CM. Furthermore, the effects of noise, non-linearity and manufacturing tolerances shall be examined and counter measurements, like for example periodic digital ADC recalibration and digital post processing, shall be evaluated with regards to effectivity and energy costs.|| IMC || analog circuit design<br />
|-<br />
| SA || Testing of a computational memory chip || style="text-align: justify;"|TBA || IMC || PCB Design <br /> and/or<br /> µ-code implementation <br />
|-<br />
| MA/SA || Neural Network Training on a <br /> computational memory chip || style="text-align: justify;"|TBA || IMC || algorithm/system design<br />and/or<br /> analog circuit design<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hyperdimensional Computing (HDC) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6079IBM Research2020-11-16T14:27:35Z<p>Herschmi: /* Hyperdimensional Computing (HDC) */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hyperdimensional Computing (HDC)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Hyperdimensional computing (HDC) is a brain-inspired computing paradigm based on representing information with hypervectors with dimensions in the thousands. Hypervectors are holographic and (pseudo)random with independent and identically distributed (i.i.d.) components. Principles of HDC allow implementing efficient machine learning models, where it has shown few-shot learning capabilities by requiring less training data than conventional machine learning models to do accurate predictions. By its very nature, HDC is extremely robust against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as Phase-Change Memory (PCM) devices. By eliminating non von-Neumann bottleneck, PCM-based in-memory computing systems enable building systems that can run HDC models at very high energy efficiencies. This opens doors to a vast spectrum of further research in applications, algorithms, architecture and system development.<br />
<br />
<br />
HDC has also been successfully applied in few-shot meta-learning problems - where a controller is trained to distinguish previously unseen classes by feeding only a few examples. The state-of-the-art accuracy in few-shot learning problems has been achieved by augmenting the controller with an explicit memory and guiding the controller to learning HDC representations. Exciting further research awaiting in this direction includes exploring compact representation of explicit memory and exploring novel cost functions and training algorithms to further improving the classification accuracy.<br />
<br />
===Useful Reading===<br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors]<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing]<br />
*[https://arxiv.org/pdf/2010.01939.pdf Robust High-dimensional Memory-augmented Neural Networks]<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*VLSI I and VLSI II (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
<br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] || HDC || algorithmic design<br />
|-<br />
| MA|| Machine learning based on optimal transport using in-memory computing' || [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing'''] || HDC || algorithmic design<br />
|-<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test'''] || HDC || algorithmic design<br />
|-<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-Analog converters ('''DAC'''s) and Analog-to-Digital converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and analog domain, in which computationally expensive tasks, like Matrix-Vector Multiplications ('''MVM'''), are carried out with O(1) complexity. Each conversion costs a certain amount of energy and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system level requirements on ADC and DAC for optimal performance of Deep Neural Network inference using CM. Furthermore, the effects of noise, non-linearity and manufacturing tolerances shall be examined and counter measurements, like for example periodic digital ADC recalibration and digital post processing, shall be evaluated with regards to effectivity and energy costs.|| IMC || analog circuit design<br />
|-<br />
| SA || Testing of a computational memory chip || style="text-align: justify;"|TBA || IMC || PCB Design <br /> and/or<br /> µ-code implementation <br />
|-<br />
| MA/SA || Neural Network Training on a <br /> computational memory chip || style="text-align: justify;"|TBA || IMC || algorithm/system design<br />and/or<br /> analog circuit design<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hyperdimensional Computing (HDC) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6078IBM Research2020-11-16T14:25:23Z<p>Herschmi: /* Useful Reading */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hyperdimensional Computing (HDC)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Hyperdimensional computing (HDC) is a brain-inspired computing paradigm based on representing information with hypervectors with dimensions in the thousands. Hypervectors are holographic and (pseudo)random with independent and identically distributed (i.i.d.) components. Principles of HDC allow implementing efficient machine learning models, where it has shown few-shot learning capabilities by requiring less training data than conventional machine learning models to do accurate predictions. By its very nature, HDC is extremely robust against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as Phase-Change Memory (PCM) devices. By eliminating non von-Neumann bottleneck, PCM-based in-memory computing systems enable building systems that can run HDC models at very high energy efficiencies. This opens doors to a vast spectrum of further research in applications, algorithms, architecture and system development.<br />
<br />
<br />
HDC has also been successfully applied to few-shot meta-learning problems, where a controller is trained to distinguish previously unseen classes after being fed only a few examples. State-of-the-art accuracy on few-shot learning problems has been achieved by augmenting the controller with an explicit memory and guiding the controller to learn HDC representations. Exciting further research in this direction includes exploring compact representations of the explicit memory as well as novel cost functions and training algorithms to further improve the classification accuracy.<br />
<br />
===Useful Reading===<br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors]<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing]<br />
*[https://arxiv.org/pdf/2010.01939.pdf Robust High-dimensional Memory-augmented Neural Networks]<br />
<br />
==In-Memory Computing (IMC)==<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and the scalability limits in terms of performance and power are being reached, alternative computing paradigms are being sought in which computation and storage are collocated. A fascinating new approach is that of computational memory, where the physics of nanoscale memory devices is exploited to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', conventional '''SRAM''' has also been proposed as the underlying storage technology for Computational Memories.<br />
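<br />
As a purely behavioural illustration of this idea (a minimal sketch with assumed, illustrative device values; it does not model any particular chip), the Python snippet below stores a weight matrix as conductances on a crossbar, applies the input as read voltages, and obtains the complete matrix-vector product from the column currents in a single read step, including simple programming and read noise.<br />
<pre>
# Behavioural sketch of an analog in-memory matrix-vector multiplication (MVM).
import numpy as np

rng = np.random.default_rng(0)
G_MAX = 25e-6    # assumed maximum device conductance (illustrative value only)
V_READ = 0.2     # assumed read voltage (illustrative value only)

def program_conductances(W, noise_std=0.02):
    """Map a weight matrix (scaled to [-1, 1]) onto conductances, with
    multiplicative programming noise emulating device variability."""
    G = W * G_MAX
    return G * (1.0 + noise_std * rng.standard_normal(G.shape))

def crossbar_mvm(G, x, read_noise_std=0.01):
    """Apply inputs as read voltages; the column currents yield all dot
    products of one matrix-vector multiplication at once."""
    i_out = G.T @ (x * V_READ)                    # Kirchhoff current summation
    i_out += read_noise_std * np.abs(i_out).mean() * rng.standard_normal(i_out.shape)
    return i_out / (V_READ * G_MAX)               # rescale back to weight units

W = rng.uniform(-1, 1, size=(64, 16))             # 64 inputs, 16 outputs
x = rng.uniform(-1, 1, size=64)
print(np.abs(W.T @ x - crossbar_mvm(program_conductances(W), x)).max())
</pre>
The point of the sketch is that the result is obtained with a small but non-zero error, which is exactly the accuracy/efficiency trade-off studied in the projects listed below.<br />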
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We invite applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. The work could range from low-level hardware experiments on phase-change memory chips comprising more than one million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM Research who focus on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude, and programming skills. Prior knowledge of emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
<br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] || HDC || algorithmic design<br />
|-<br />
| MA || Machine learning based on optimal transport using in-memory computing || [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing'''] || HDC || algorithmic design<br />
|-<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test'''] || HDC || algorithmic design<br />
|-<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-Analog Converters ('''DAC'''s) and Analog-to-Digital Converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and the analog domain, in which computationally expensive tasks such as Matrix-Vector Multiplications ('''MVM''') are carried out with O(1) complexity. Each conversion costs a certain amount of energy, and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system-level requirements on ADCs and DACs for optimal performance of Deep Neural Network inference using CM. Furthermore, the effects of noise, non-linearity, and manufacturing tolerances shall be examined, and countermeasures such as periodic digital ADC recalibration and digital post-processing shall be evaluated with regard to effectiveness and energy cost. A small illustrative sketch of the ENOB effect on MVM accuracy is given below this table.|| IMC || analog circuit design<br />
|-<br />
| SA || Testing of a computational memory chip || style="text-align: justify;"|TBA || IMC || PCB Design <br /> and/or<br /> µ-code implementation <br />
|-<br />
| MA/SA || Neural Network Training on a <br /> computational memory chip || style="text-align: justify;"|TBA || IMC || algorithm/system design<br />and/or<br /> analog circuit design<br />
|}<br />
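<br />
The sketch below illustrates, at a purely system level, the ENOB trade-off mentioned in the ADC project above: the analog column outputs of an in-memory matrix-vector multiplication are passed through an ideal uniform quantizer of a given resolution and the resulting error is measured. All sizes and resolutions are illustrative assumptions, not values from an actual design.<br />
<pre>
# System-level sketch of ADC resolution (ENOB) versus MVM accuracy.
import numpy as np

rng = np.random.default_rng(0)

def adc_quantize(y, bits, full_scale):
    """Ideal uniform quantizer with the given resolution and full-scale range."""
    step = 2.0 * full_scale / (2 ** bits)
    return np.clip(np.round(y / step) * step, -full_scale, full_scale)

W = rng.uniform(-1, 1, size=(256, 64))
x = rng.uniform(-1, 1, size=256)
y = W.T @ x                                    # ideal analog MVM result
fs = np.abs(y).max()                           # assume the full scale fits the signal

for bits in (4, 6, 8, 10):
    err = np.sqrt(np.mean((adc_quantize(y, bits, fs) - y) ** 2))
    print(f"{bits}-bit ADC: RMS error = {err:.4f}")
</pre>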
<br />
==Contact==<br />
: The thesis will be carried out at IBM Research Zurich in Rüschlikon<br />
; Hyperdimensional Computing (HDC) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6077IBM Research2020-11-16T14:24:51Z<p>Herschmi: /* Prerequisites */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hyperdimensional Computing (HDC)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Hyperdimensional computing (HDC) is a brain-inspired computing paradigm that represents information with hypervectors of dimensionality in the thousands. Hypervectors are holographic and (pseudo)random with independent and identically distributed (i.i.d.) components. The principles of HDC allow building efficient machine learning models with few-shot learning capabilities: they require less training data than conventional machine learning models to make accurate predictions. By its very nature, HDC is extremely robust against failures, defects, variations, and noise, properties that are complementary to ultra-low-energy computation on nanoscale fabrics such as Phase-Change Memory (PCM) devices. By eliminating the von Neumann bottleneck, PCM-based in-memory computing enables systems that run HDC models at very high energy efficiency. This opens the door to a vast spectrum of further research in applications, algorithms, architectures, and system development.<br />
<br />
<br />
HDC has also been successfully applied to few-shot meta-learning problems, where a controller is trained to distinguish previously unseen classes after being fed only a few examples. State-of-the-art accuracy on few-shot learning problems has been achieved by augmenting the controller with an explicit memory and guiding the controller to learn HDC representations. Exciting further research in this direction includes exploring compact representations of the explicit memory as well as novel cost functions and training algorithms to further improve the classification accuracy.<br />
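<br />
The following toy sketch conveys the flavour of the explicit-memory approach (it is not the model from the paper referenced below; the random projection merely stands in for a trained controller, and all sizes and names are illustrative): one support example per previously unseen class is written into a key memory as a high-dimensional bipolar vector, and a query is classified by the most similar stored key.<br />
<pre>
# Toy sketch of few-shot classification with an explicit key memory.
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_HV = 64, 2048                       # illustrative feature / key sizes
PROJ = rng.standard_normal((D_IN, D_HV))    # stands in for a trained controller

def encode(x):
    """Map a feature vector to a bipolar high-dimensional key."""
    return np.sign(x @ PROJ)

class ExplicitMemory:
    def __init__(self):
        self.keys, self.labels = [], []

    def write(self, x, label):
        """One-shot learning: store a single support example per class."""
        self.keys.append(encode(x))
        self.labels.append(label)

    def read(self, x):
        """Return the label of the most similar stored key (cosine similarity)."""
        q = encode(x)
        sims = [float(k @ q) / D_HV for k in self.keys]
        return self.labels[int(np.argmax(sims))]

memory = ExplicitMemory()
class_a, class_b = rng.standard_normal(D_IN), rng.standard_normal(D_IN)
memory.write(class_a, "A")                   # one example of each unseen class
memory.write(class_b, "B")
print(memory.read(class_a + 0.1 * rng.standard_normal(D_IN)))   # -> "A"
</pre>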
<br />
===Useful Reading===<br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors]<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing]<br />
*[https://arxiv.org/pdf/2010.01939.pdf Robust High-dimensional Memory-augmented Neural Networks]<br />
<br />
==In-Memory Computing (IMC)==<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and the scalability limits in terms of performance and power are being reached, alternative computing paradigms are being sought in which computation and storage are collocated. A fascinating new approach is that of computational memory, where the physics of nanoscale memory devices is exploited to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', conventional '''SRAM''' has also been proposed as the underlying storage technology for Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We invite applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. The work could range from low-level hardware experiments on phase-change memory chips comprising more than one million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM Research who focus on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude, and programming skills. Prior knowledge of emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
<br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] || HDC || algorithmic design<br />
|-<br />
| MA || Machine learning based on optimal transport using in-memory computing || [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing'''] || HDC || algorithmic design<br />
|-<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test'''] || HDC || algorithmic design<br />
|-<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-Analog Converters ('''DAC'''s) and Analog-to-Digital Converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and the analog domain, in which computationally expensive tasks such as Matrix-Vector Multiplications ('''MVM''') are carried out with O(1) complexity. Each conversion costs a certain amount of energy, and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system-level requirements on ADCs and DACs for optimal performance of Deep Neural Network inference using CM. Furthermore, the effects of noise, non-linearity, and manufacturing tolerances shall be examined, and countermeasures such as periodic digital ADC recalibration and digital post-processing shall be evaluated with regard to effectiveness and energy cost.|| IMC || analog circuit design<br />
|-<br />
| SA || Testing of a computational memory chip || style="text-align: justify;"|TBA || IMC || PCB Design <br /> and/or<br /> µ-code implementation <br />
|-<br />
| MA/SA || Neural Network Training on a <br /> computational memory chip || style="text-align: justify;"|TBA || IMC || algorithm/system design<br />and/or<br /> analog circuit design<br />
|}<br />
<br />
==Contact==<br />
: The thesis will be carried out at IBM Research Zurich in Rüschlikon<br />
; Hyperdimensional Computing (HDC) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6076IBM Research2020-11-16T14:20:51Z<p>Herschmi: /* Available Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hyperdimensional Computing (HDC)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Hyperdimensional computing (HDC) is a brain-inspired computing paradigm that represents information with hypervectors of dimensionality in the thousands. Hypervectors are holographic and (pseudo)random with independent and identically distributed (i.i.d.) components. The principles of HDC allow building efficient machine learning models with few-shot learning capabilities: they require less training data than conventional machine learning models to make accurate predictions. By its very nature, HDC is extremely robust against failures, defects, variations, and noise, properties that are complementary to ultra-low-energy computation on nanoscale fabrics such as Phase-Change Memory (PCM) devices. By eliminating the von Neumann bottleneck, PCM-based in-memory computing enables systems that run HDC models at very high energy efficiency. This opens the door to a vast spectrum of further research in applications, algorithms, architectures, and system development.<br />
<br />
<br />
HDC has also been successfully applied to few-shot meta-learning problems, where a controller is trained to distinguish previously unseen classes after being fed only a few examples. State-of-the-art accuracy on few-shot learning problems has been achieved by augmenting the controller with an explicit memory and guiding the controller to learn HDC representations. Exciting further research in this direction includes exploring compact representations of the explicit memory as well as novel cost functions and training algorithms to further improve the classification accuracy.<br />
<br />
===Useful Reading===<br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors]<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing]<br />
*[https://arxiv.org/pdf/2010.01939.pdf Robust High-dimensional Memory-augmented Neural Networks]<br />
<br />
==In-Memory Computing (IMC)==<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and the scalability limits in terms of performance and power are being reached, alternative computing paradigms are being sought in which computation and storage are collocated. A fascinating new approach is that of computational memory, where the physics of nanoscale memory devices is exploited to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', conventional '''SRAM''' has also been proposed as the underlying storage technology for Computational Memories.<br />
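<br />
One practical question in such Computational Memories is how to represent signed weights with devices whose conductance is inherently non-negative. A commonly used mapping, sketched below purely for illustration (it is an assumption for this example and not tied to any specific IBM design), uses a pair of devices per weight and takes the difference of the two column currents.<br />
<pre>
# Sketch of a differential conductance-pair mapping for signed weights.
import numpy as np

rng = np.random.default_rng(0)
G_MAX = 25e-6                                   # illustrative device conductance

def to_differential(W):
    """Split signed weights into two non-negative conductance matrices."""
    g_plus = np.clip(W, 0, None) * G_MAX
    g_minus = np.clip(-W, 0, None) * G_MAX
    return g_plus, g_minus

def differential_mvm(g_plus, g_minus, x):
    """Two column-current sums per output; their difference is the signed MVM."""
    return (g_plus.T @ x - g_minus.T @ x) / G_MAX

W = rng.uniform(-1, 1, size=(32, 8))
x = rng.uniform(-1, 1, size=32)
print(np.allclose(W.T @ x, differential_mvm(*to_differential(W), x)))   # True
</pre>
In a real array the two currents would also carry device noise, which is one of the effects the projects below investigate.<br />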
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We invite applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. The work could range from low-level hardware experiments on phase-change memory chips comprising more than one million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM Research who focus on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude, and programming skills. Prior knowledge of emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
<br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] || HDC || algorithmic design<br />
|-<br />
| MA || Machine learning based on optimal transport using in-memory computing || [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing'''] || HDC || algorithmic design<br />
|-<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test'''] || HDC || algorithmic design<br />
|-<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-Analog Converters ('''DAC'''s) and Analog-to-Digital Converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and the analog domain, in which computationally expensive tasks such as Matrix-Vector Multiplications ('''MVM''') are carried out with O(1) complexity. Each conversion costs a certain amount of energy, and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system-level requirements on ADCs and DACs for optimal performance of Deep Neural Network inference using CM. Furthermore, the effects of noise, non-linearity, and manufacturing tolerances shall be examined, and countermeasures such as periodic digital ADC recalibration and digital post-processing shall be evaluated with regard to effectiveness and energy cost.|| IMC || analog circuit design<br />
|-<br />
| SA || Testing of a computational memory chip || style="text-align: justify;"|TBA || IMC || PCB Design <br /> and/or<br /> µ-code implementation <br />
|-<br />
| MA/SA || Neural Network Training on a <br /> computational memory chip || style="text-align: justify;"|TBA || IMC || algorithm/system design<br />and/or<br /> analog circuit design<br />
|}<br />
<br />
==Contact==<br />
: The thesis will be carried out at IBM Research Zurich in Rüschlikon<br />
; Hyperdimensional Computing (HDC) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Herschmihttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6031IBM Research2020-11-14T06:32:58Z<p>Herschmi: /* Hyperdimensional Computing (HDC) */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hyperdimensional Computing (HDC)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Hyperdimensional computing (HDC) is a brain-inspired computing paradigm that represents information with hypervectors of dimensionality in the thousands. Hypervectors are holographic and (pseudo)random with independent and identically distributed (i.i.d.) components. The principles of HDC allow building efficient machine learning models with few-shot learning capabilities: they require less training data than conventional machine learning models to make accurate predictions. By its very nature, HDC is extremely robust against failures, defects, variations, and noise, properties that are complementary to ultra-low-energy computation on nanoscale fabrics such as Phase-Change Memory (PCM) devices. By eliminating the von Neumann bottleneck, PCM-based in-memory computing enables systems that run HDC models at very high energy efficiency. This opens the door to a vast spectrum of further research in applications, algorithms, architectures, and system development.<br />
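<br />
The robustness claim can be illustrated with a small numerical experiment (a toy sketch with illustrative sizes, not a device-level study): even after a large fraction of a hypervector's components have been flipped, it is still matched to the correct stored prototype.<br />
<pre>
# Sanity-check sketch: HDC classification under random component flips.
import numpy as np

rng = np.random.default_rng(0)
D = 10000                                          # hypervector dimensionality

prototypes = rng.choice([-1, 1], size=(10, D))     # 10 stored class prototypes

def flip(hv, fraction):
    """Flip the sign of a random fraction of components (failures/noise)."""
    out = hv.copy()
    idx = rng.choice(D, size=int(fraction * D), replace=False)
    out[idx] *= -1
    return out

for fraction in (0.1, 0.25, 0.4):
    noisy = flip(prototypes[3], fraction)
    sims = prototypes @ noisy / D                  # similarity to each prototype
    print(f"{int(fraction * 100)}% flipped -> best match: class "
          f"{int(np.argmax(sims))}, similarity {sims.max():.2f}")
</pre>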
<br />
<br />
HDC has also been successfully applied to few-shot meta-learning problems, where a controller is trained to distinguish previously unseen classes after being fed only a few examples. State-of-the-art accuracy on few-shot learning problems has been achieved by augmenting the controller with an explicit memory and guiding the controller to learn HDC representations. Exciting further research in this direction includes exploring compact representations of the explicit memory as well as novel cost functions and training algorithms to further improve the classification accuracy.<br />
<br />
===Useful Reading===<br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors]<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing]<br />
*[https://arxiv.org/pdf/2010.01939.pdf Robust High-dimensional Memory-augmented Neural Networks]<br />
<br />
==In-Memory Computing (IMC)==<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and the scalability limits in terms of performance and power are being reached, alternative computing paradigms are being sought in which computation and storage are collocated. A fascinating new approach is that of computational memory, where the physics of nanoscale memory devices is exploited to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', conventional '''SRAM''' has also been proposed as the underlying storage technology for Computational Memories.<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We invite applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. The work could range from low-level hardware experiments on phase-change memory chips comprising more than one million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM Research who focus on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude, and programming skills. Prior knowledge of emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
<br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] || HDC || algorithmic design<br />
|-<br />
| MA || Machine learning based on optimal transport using in-memory computing || [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing'''] || HDC || algorithmic design<br />
|-<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test'''] || HDC || algorithmic design<br />
|-<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| SA || Testing of a computational memory chip || style="text-align: justify;"|TBA || IMC || PCB Design <br /> and/or<br /> µ-code implementation <br />
|-<br />
| MA/SA || Neural Network Training on a <br /> computational memory chip || style="text-align: justify;"|TBA || IMC || algorithm/system design<br />and/or<br /> analog circuit design <br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-Analog Converters ('''DAC'''s) and Analog-to-Digital Converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and the analog domain, in which computationally expensive tasks such as Matrix-Vector Multiplications ('''MVM''') are carried out with O(1) complexity. Each conversion costs a certain amount of energy, and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system-level requirements on ADCs and DACs for optimal performance of Deep Neural Network inference using CM. Furthermore, the effects of noise, non-linearity, and manufacturing tolerances shall be examined, and countermeasures such as periodic digital ADC recalibration and digital post-processing shall be evaluated with regard to effectiveness and energy cost.|| IMC || analog circuit design<br />
|}<br />
<br />
==Contact==<br />
: The thesis will be carried out at IBM Research Zurich in Rüschlikon<br />
; Hyperdimensional Computing (HDC) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Herschmi