[[Category:Digital]]
[[Category:Semester Thesis]]
[[Category:Master Thesis]]
[[Category:Available]]
[[Category:2021]]
[[Category:Hot]]
[[File:IBM_ZRLab.png|center]]

==Short Description==
We are entering the era of cognitive computing, which holds great promise for deriving intelligence and knowledge from huge volumes of data. In today’s computers, based on the von Neumann architecture, large amounts of data must be shuttled back and forth between memory and processor at high speed, a task at which this architecture is inefficient.

It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich, in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore such computing paradigms, from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:

* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge]
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']
-->

===About IBM Research–Zurich===
The Zurich location is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, to be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]

==Hybrid AI Systems (HAS)==

[[File:NatureElectronics20.jpg|thumb|right|200px]]
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but each has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing on the rich framework of symbolic computation to manipulate internal representations and perform reasoning and inference. But it is non-adaptive: it cannot learn from examples or by direct observation of the world (the so-called symbol grounding problem). Neural networks, on the other hand, can learn from data and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack simple but powerful operations such as variable binding that enable analogy-making and reasoning, which in turn underlie the ability to generalize from few examples. To address this gap, '''neurosymbolic AI''' aims to combine the best of both worlds to approach human-level intelligence.

We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''This leads us to a novel information-processing architecture that combines the strengths of symbolic AI and neural networks, yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain a hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar from symbolic AI, and to learn from the statistics of data in ways familiar from artificial neural networks and deep learning. Furthermore, the principles of such a HAS enable few-shot learning and extremely robust operation in the presence of failures, defects, variations, and noise, all of which are complementary to ultra-low-energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research in this direction (listed in the table below) spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.
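
For illustration, below is a minimal sketch of what such basic operations on high-dimensional vectors can look like in Python/NumPy. It is not code from any of the listed projects; the dimensionality, the bipolar encoding, and all function names are assumptions made purely for this example.

<syntaxhighlight lang="python">
import numpy as np

D = 10_000                      # illustrative dimensionality of the vectors
rng = np.random.default_rng(0)

def random_hv():
    """Draw a random bipolar (+1/-1) high-dimensional vector."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding (element-wise multiplication): associates two vectors, e.g. a
    variable with a value; the result is nearly orthogonal to both inputs."""
    return a * b

def bundle(*vectors):
    """Bundling (element-wise addition followed by a sign): superimposes
    several vectors into one that stays similar to each of them."""
    return np.sign(np.sum(vectors, axis=0))

def similarity(a, b):
    """Normalized dot product used to compare or query vectors."""
    return float(np.dot(a, b)) / D

# Encode the record {color: red, shape: circle} as a single vector.
color, red = random_hv(), random_hv()
shape, circle = random_hv(), random_hv()
record = bundle(bind(color, red), bind(shape, circle))

# Unbinding with the 'color' key recovers a noisy version of 'red':
# for bipolar vectors, binding is its own inverse (color * color = 1).
probe = bind(record, color)
print(similarity(probe, red))     # clearly positive (about 0.5 in expectation here)
print(similarity(probe, circle))  # close to 0
</syntaxhighlight>

Because the bound key-value pairs are nearly orthogonal to one another, many of them can be superimposed in a single vector and still be retrieved reliably, which is one reason such representations are robust to noise and device non-idealities.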

===Useful Reading===
*[https://www.youtube.com/watch?v=HhymId8dr5Q Neurosymbolic AI Explained, IBM Research]
*[https://www.youtube.com/watch?v=CYbkoAP5dME Neurosymbolic AI, Invited Talk by David Cox, IAAI/AAAI 2020]
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.
*[https://www.nature.com/articles/s41565-023-01357-8 In-memory factorization of holographic perceptual representations], Nature Nanotechnology, 2023.

===Prerequisites===
*Python
*Background in machine learning (''recommended'')
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')
*VLSI I (''recommended'')

<!-- ==In-Memory Computing (IMC)==

[[File:NNcover_imc.jpg|thumb|right|200px]]
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.

===Useful Reading===
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]

===Prerequisites===
*General interest in Deep Learning and memory/system design
*VLSI I and VLSI II (''recommended'')
Specific requirements for the different projects vary and are generally negotiable.
-->

==Available Projects==
We invite applications from students who would like to conduct their bachelor, semester, or master thesis, or an internship project, at the IBM Research lab in Zurich on this exciting new topic.
<!--
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.
-->

{| class="wikitable" style="text-align: center;"
|-
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type
|-
| MA || Neurosymbolic Architectures to Approach Human-like AI || [https://iis-projects.ee.ethz.ch/images/c/cc/IBM_Neurosymbolic.pdf Link to description] || algorithmic design
|-
| MA || Developing Efficient Models for Solving Intelligence Quotient (IQ) Test || [https://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic/hardware design
|-
| MA || Cryptography meets in-memory computing || [https://iis-projects.ee.ethz.ch/images/6/62/Y2021_10_Advertisement_VME.pdf Link to description] || algorithmic design and hardware experiments
|-
| MA || Optimal routing for 2D Mesh-based Analog Compute-In-Memory Accelerator Architecture || [https://iis-projects.ee.ethz.ch/images/8/83/ADS_Routing.pdf Link to description] || hardware design
|}

==Contact==
: The thesis will be carried out at IBM Research–Zurich in Rüschlikon.
; Hybrid AI Systems (HAS) projects
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]
: Professor: [https://iis.ee.ethz.ch/people/person-detail.html?persid=194234 Luca Benini]
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world (aka symbol grounding problem). Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples. To address this gap, '''neurosymbolic AI''' aims to combine the best of both worlds to approach human-level intelligence.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.youtube.com/watch?v=HhymId8dr5Q Neurosymbolic AI Explained, IBM-Research]<br />
*[https://www.youtube.com/watch?v=CYbkoAP5dME Neurosymbolic AI, Invited Talk by David Cox, IAAI /AAAI 2020] <br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41565-023-01357-8 In-memory factorization of holographic perceptual representations], Nature Nanotechnology, 2023.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
<!-- ==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
---><br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their thesis (bachelor, semester, and master) or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type <br />
|-<br />
<br />
<br />
| MA/SA || Neurosymbolic Architectures to Approach Human-like AI || [https://iis-projects.ee.ethz.ch/images/c/cc/IBM_Neurosymbolic.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
<br />
<br />
| MA/SA || Developing Efficient Models for Solving Intelligence Quotient (IQ) Test || [https://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
<br />
| MA|| Crytography meets in-memory computing || [https://iis-projects.ee.ethz.ch/images/6/62/Y2021_10_Advertisement_VME.pdf Link to description] || algorithmic design and hardware experiments<br />
|-<br />
<br />
<br />
| MA|| Optimal routing for 2D Mesh-based Analog Compute-In-Memory Accelerator Architecture || [https://iis-projects.ee.ethz.ch/images/8/83/ADS_Routing.pdf Link to description] || hardware design <br />
|-<br />
<br />
<br />
<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [https://iis.ee.ethz.ch/people/person-detail.html?persid=194234 Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=9477IBM Research2023-08-04T15:44:22Z<p>Abbas: /* Available Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world (aka symbol grounding problem). Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples. To address this gap, '''neurosymbolic AI''' aims to combine the best of both worlds to approach human-level intelligence.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.youtube.com/watch?v=HhymId8dr5Q Neurosymbolic AI Explained, IBM-Research]<br />
*[https://www.youtube.com/watch?v=CYbkoAP5dME Neurosymbolic AI, Invited Talk by David Cox, IAAI /AAAI 2020] <br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41565-023-01357-8 In-memory factorization of holographic perceptual representations], Nature Nanotechnology, 2023.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
<!-- ==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
---><br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their thesis (bachelor, semester, and master) or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type <br />
|-<br />
<br />
| MA/SA || Continual and Few-shot Learning || [https://iis-projects.ee.ethz.ch/images/a/a2/IBM_CL.pdf Link to description] || algorithmic/hardware co-design<br />
|-<br />
<br />
<br />
| MA/SA || Neurosymbolic Architectures to Approach Human-like AI || [https://iis-projects.ee.ethz.ch/images/c/cc/IBM_Neurosymbolic.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
<br />
<br />
| MA/SA || Developing Efficient Models for Solving Intelligence Quotient (IQ) Test || [https://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
<br />
| MA|| Crytography meets in-memory computing || [https://iis-projects.ee.ethz.ch/images/6/62/Y2021_10_Advertisement_VME.pdf Link to description] || algorithmic design and hardware experiments<br />
|-<br />
<br />
<br />
| MA|| Optimal routing for 2D Mesh-based Analog Compute-In-Memory Accelerator Architecture || [https://iis-projects.ee.ethz.ch/images/8/83/ADS_Routing.pdf Link to description] || hardware design <br />
|-<br />
<br />
| MA|| Analysis of feature extraction for brain state prediction || [https://iis-projects.ee.ethz.ch/images/7/75/IEEG_FX.pdf Link to description] || algorithmic design <br />
|-<br />
<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [https://iis.ee.ethz.ch/people/person-detail.html?persid=194234 Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=File:IEEG_FX.pdf&diff=9476File:IEEG FX.pdf2023-08-04T15:43:41Z<p>Abbas: </p>
<hr />
<div></div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=9268IBM Research2023-07-17T13:15:29Z<p>Abbas: /* Available Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world (aka symbol grounding problem). Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples. To address this gap, '''neurosymbolic AI''' aims to combine the best of both worlds to approach human-level intelligence.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.youtube.com/watch?v=HhymId8dr5Q Neurosymbolic AI Explained, IBM-Research]<br />
*[https://www.youtube.com/watch?v=CYbkoAP5dME Neurosymbolic AI, Invited Talk by David Cox, IAAI /AAAI 2020] <br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41565-023-01357-8 In-memory factorization of holographic perceptual representations], Nature Nanotechnology, 2023.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
<!-- ==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
---><br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their thesis (bachelor, semester, and master) or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type <br />
|-<br />
<br />
| MA/SA || Continual and Few-shot Learning || [https://iis-projects.ee.ethz.ch/images/a/a2/IBM_CL.pdf Link to description] || algorithmic/hardware co-design<br />
|-<br />
<br />
<br />
| MA/SA || Neurosymbolic Architectures to Approach Human-like AI || [https://iis-projects.ee.ethz.ch/images/c/cc/IBM_Neurosymbolic.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
<br />
<br />
| MA/SA || Developing Efficient Models for Solving Intelligence Quotient (IQ) Test || [https://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
<br />
| MA|| Crytography meets in-memory computing || [https://iis-projects.ee.ethz.ch/images/6/62/Y2021_10_Advertisement_VME.pdf Link to description] || algorithmic design and hardware experiments<br />
|-<br />
<br />
<br />
| MA|| Optimal routing for 2D Mesh-based Analog Compute-In-Memory Accelerator Architecture || [https://iis-projects.ee.ethz.ch/images/8/83/ADS_Routing.pdf Link to description] || hardware design <br />
|-<br />
<br />
<br />
<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [https://iis.ee.ethz.ch/people/person-detail.html?persid=194234 Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=File:ADS_Routing.pdf&diff=9267File:ADS Routing.pdf2023-07-17T13:13:42Z<p>Abbas: </p>
<hr />
<div></div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=9266IBM Research2023-07-17T11:50:05Z<p>Abbas: /* Useful Reading */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world (aka symbol grounding problem). Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples. To address this gap, '''neurosymbolic AI''' aims to combine the best of both worlds to approach human-level intelligence.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.youtube.com/watch?v=HhymId8dr5Q Neurosymbolic AI Explained, IBM-Research]<br />
*[https://www.youtube.com/watch?v=CYbkoAP5dME Neurosymbolic AI, Invited Talk by David Cox, IAAI /AAAI 2020] <br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41565-023-01357-8 In-memory factorization of holographic perceptual representations], Nature Nanotechnology, 2023.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
<!-- ==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
---><br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their thesis (bachelor, semester, and master) or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type <br />
|-<br />
<br />
| MA/SA || Continual and Few-shot Learning || [https://iis-projects.ee.ethz.ch/images/a/a2/IBM_CL.pdf Link to description] || algorithmic/hardware co-design<br />
|-<br />
<br />
<br />
| MA/SA || Neurosymbolic Architectures to Approach Human-like AI || [https://iis-projects.ee.ethz.ch/images/c/cc/IBM_Neurosymbolic.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
<br />
<br />
| MA/SA || Developing Efficient Models for Solving Intelligence Quotient (IQ) Test || [https://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
<br />
<br />
<br />
| MA|| Crytography meets in-memory computing || [https://iis-projects.ee.ethz.ch/images/6/62/Y2021_10_Advertisement_VME.pdf Link to description] || algorithmic design and hardware experiments<br />
|-<br />
<br />
<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [https://iis.ee.ethz.ch/people/person-detail.html?persid=194234 Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=9265IBM Research2023-07-17T11:48:28Z<p>Abbas: /* Available Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world (aka symbol grounding problem). Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples. To address this gap, '''neurosymbolic AI''' aims to combine the best of both worlds to approach human-level intelligence.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.youtube.com/watch?v=HhymId8dr5Q Neurosymbolic AI Explained, IBM-Research]<br />
*[https://www.youtube.com/watch?v=CYbkoAP5dME Neurosymbolic AI, Invited Talk by David Cox, IAAI /AAAI 2020] <br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
<!-- ==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
---><br />
<br />
==Available Projects==<br />
We invite applications from students who would like to conduct their bachelor, semester, or master thesis, or an internship project, at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type <br />
|-<br />
<br />
| MA/SA || Continual and Few-shot Learning || [https://iis-projects.ee.ethz.ch/images/a/a2/IBM_CL.pdf Link to description] || algorithmic/hardware co-design<br />
|-<br />
<br />
<br />
| MA/SA || Neurosymbolic Architectures to Approach Human-like AI || [https://iis-projects.ee.ethz.ch/images/c/cc/IBM_Neurosymbolic.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
<br />
<br />
| MA/SA || Developing Efficient Models for Solving Intelligence Quotient (IQ) Test || [https://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
<br />
<br />
<br />
| MA|| Cryptography meets in-memory computing || [https://iis-projects.ee.ethz.ch/images/6/62/Y2021_10_Advertisement_VME.pdf Link to description] || algorithmic design and hardware experiments<br />
|-<br />
<br />
<br />
|}<br />
<br />
==Contact==<br />
: The thesis will be carried out at IBM Research–Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [https://iis.ee.ethz.ch/people/person-detail.html?persid=194234 Luca Benini]</div>Abbas
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world (aka symbol grounding problem). Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples. To address this gap, '''neurosymbolic AI''' aims to combine the best of both worlds to approach human-level intelligence.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.youtube.com/watch?v=HhymId8dr5Q Neurosymbolic AI Explained, IBM-Research]<br />
*[https://www.youtube.com/watch?v=CYbkoAP5dME Neurosymbolic AI, Invited Talk by David Cox, IAAI /AAAI 2020] <br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
<!-- ==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
---><br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their thesis (bachelor, semester, and master) or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type <br />
|-<br />
<br />
| MA/SA || Continual and Few-shot Learning || [https://iis-projects.ee.ethz.ch/images/a/a2/IBM_CL.pdf Link to description] || algorithmic/hardware co-design<br />
|-<br />
<br />
<br />
| MA/SA || Neurosymbolic Architectures to Approach Human-like AI || [https://iis-projects.ee.ethz.ch/images/c/cc/IBM_Neurosymbolic.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
| MA/SA || In-memory Factorizer || [https://iis-projects.ee.ethz.ch/images/0/05/IBM_factorizer.pdf Link to description] || Hardware design<br />
|-<br />
<br />
<br />
| MA/SA || Developing Efficient Models for Solving Intelligence Quotient (IQ) Test || [https://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
<br />
<br />
| MA/SA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || Hardware design and experiments<br />
|-<br />
<br />
| MA/SA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
| MA|| Crytography meets in-memory computing || [https://iis-projects.ee.ethz.ch/images/6/62/Y2021_10_Advertisement_VME.pdf Link to description] || algorithmic design and hardware experiments<br />
|-<br />
<br />
<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [https://iis.ee.ethz.ch/people/person-detail.html?persid=194234 Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=8442IBM Research2022-11-29T08:25:55Z<p>Abbas: /* Available Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world (aka symbol grounding problem). Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples. To address this gap, '''neurosymbolic AI''' aims to combine the best of both worlds to approach human-level intelligence.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.youtube.com/watch?v=HhymId8dr5Q Neurosymbolic AI Explained, IBM-Research]<br />
*[https://www.youtube.com/watch?v=CYbkoAP5dME Neurosymbolic AI, Invited Talk by David Cox, IAAI /AAAI 2020] <br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
<!-- ==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
---><br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their thesis (bachelor, semester, and master) or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type <br />
|-<br />
<br />
| MA/SA || Continual and Few-shot Learning || [https://iis-projects.ee.ethz.ch/images/a/a2/IBM_CL.pdf Link to description] || algorithmic/hardware co-design<br />
|-<br />
<br />
<br />
| MA/SA || Neurosymbolic Architectures to Approach Human-like AI || [https://iis-projects.ee.ethz.ch/images/c/cc/IBM_Neurosymbolic.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
| MA/SA || In-memory Factorizer || [https://iis-projects.ee.ethz.ch/images/0/05/IBM_factorizer.pdf Link to description] || Hardware design<br />
|-<br />
<br />
<br />
| MA/SA || Developing Efficient Models for Solving Intelligence Quotient (IQ) Test || [https://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
<br />
<br />
| MA/SA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || Hardware design and experiments<br />
|-<br />
<br />
| MA/SA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
| MA|| Crytography meets in-memory computing || [https://iis-projects.ee.ethz.ch/images/6/62/Y2021_10_Advertisement_VME.pdf Link to description] || algorithmic design and hardware experiments<br />
|-<br />
<br />
<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:kar@zurich.ibm.com Geethan Karunaratne]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [https://iis.ee.ethz.ch/people/person-detail.html?persid=194234 Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=File:IBM_factorizer.pdf&diff=8441File:IBM factorizer.pdf2022-11-29T08:25:31Z<p>Abbas: </p>
<hr />
<div></div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=8434IBM Research2022-11-27T11:02:53Z<p>Abbas: /* Available Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world (aka symbol grounding problem). Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples. To address this gap, '''neurosymbolic AI''' aims to combine the best of both worlds to approach human-level intelligence.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.youtube.com/watch?v=HhymId8dr5Q Neurosymbolic AI Explained, IBM-Research]<br />
*[https://www.youtube.com/watch?v=CYbkoAP5dME Neurosymbolic AI, Invited Talk by David Cox, IAAI /AAAI 2020] <br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
<!-- ==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
---><br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their thesis (bachelor, semester, and master) or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type <br />
|-<br />
<br />
| MA/SA || Continual and Few-shot Learning || [https://iis-projects.ee.ethz.ch/images/a/a2/IBM_CL.pdf Link to description] || algorithmic/hardware co-design<br />
|-<br />
<br />
<br />
| MA/SA || Neurosymbolic Architectures to Approach Human-like AI || [https://iis-projects.ee.ethz.ch/images/c/cc/IBM_Neurosymbolic.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
<br />
| MA/SA || Developing Efficient Models for Solving Intelligence Quotient (IQ) Test || [https://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
<br />
<br />
| MA/SA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || Hardware design and experiments<br />
|-<br />
<br />
| MA/SA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
| MA|| Crytography meets in-memory computing || [https://iis-projects.ee.ethz.ch/images/6/62/Y2021_10_Advertisement_VME.pdf Link to description] || algorithmic design and hardware experiments<br />
|-<br />
<br />
<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:kar@zurich.ibm.com Geethan Karunaratne]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [https://iis.ee.ethz.ch/people/person-detail.html?persid=194234 Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=File:IBM_CL.pdf&diff=8433File:IBM CL.pdf2022-11-27T11:02:21Z<p>Abbas: </p>
<hr />
<div></div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=7049IBM Research2021-10-15T10:36:32Z<p>Abbas: /* Available Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world (aka symbol grounding problem). Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples. To address this gap, '''neurosymbolic AI''' aims to combine the best of both worlds to approach human-level intelligence.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.youtube.com/watch?v=HhymId8dr5Q Neurosymbolic AI Explained, IBM-Research]<br />
*[https://www.youtube.com/watch?v=CYbkoAP5dME Neurosymbolic AI, Invited Talk by David Cox, IAAI /AAAI 2020] <br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
<!-- ==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
---><br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their thesis (bachelor, semester, and master) or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type <br />
|-<br />
<br />
| MA/SA/BA || Neurosymbolic Architectures to Approach Human-like AI || [https://iis-projects.ee.ethz.ch/images/c/cc/IBM_Neurosymbolic.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
<br />
| MA/SA/BA || Developing Efficient Models for Solving Intelligence Quotient (IQ) Test || [https://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
<br />
| MA/SA/BA || Zero-shot learning || TBD || algorithmic design<br />
|-<br />
<br />
| MA/SA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || Hardware design and experiments<br />
|-<br />
<br />
| MA/SA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
| MA|| Crytography meets in-memory computing || [https://iis-projects.ee.ethz.ch/images/6/62/Y2021_10_Advertisement_VME.pdf Link to description] || algorithmic design and hardware experiments<br />
|-<br />
<br />
<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:kar@zurich.ibm.com Geethan Karunaratne]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [https://iis.ee.ethz.ch/people/person-detail.html?persid=194234 Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=File:IBM_RPM.pdf&diff=7048File:IBM RPM.pdf2021-10-15T10:35:44Z<p>Abbas: Abbas uploaded a new version of File:IBM RPM.pdf</p>
<hr />
<div></div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=7047IBM Research2021-10-15T10:35:01Z<p>Abbas: /* Available Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world (aka symbol grounding problem). Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples. To address this gap, '''neurosymbolic AI''' aims to combine the best of both worlds to approach human-level intelligence.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.youtube.com/watch?v=HhymId8dr5Q Neurosymbolic AI Explained, IBM-Research]<br />
*[https://www.youtube.com/watch?v=CYbkoAP5dME Neurosymbolic AI, Invited Talk by David Cox, IAAI /AAAI 2020] <br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
<!-- ==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
---><br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their thesis (bachelor, semester, and master) or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type <br />
|-<br />
<br />
| MA/SA/BA || Neurosymbolic Architectures to Approach Human-like AI || [https://iis-projects.ee.ethz.ch/images/c/cc/IBM_Neurosymbolic.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
<br />
| MA/SA/BA || Developing Efficient Models for Solving Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
| MA/SA/BA || Zero-shot learning || TBD || algorithmic design<br />
|-<br />
<br />
| MA/SA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || Hardware design and experiments<br />
|-<br />
<br />
| MA/SA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
| MA || Cryptography meets in-memory computing || [https://iis-projects.ee.ethz.ch/images/6/62/Y2021_10_Advertisement_VME.pdf Link to description] || algorithmic design and hardware experiments<br />
|-<br />
<br />
<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:kar@zurich.ibm.com Geethan Karunaratne]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [https://iis.ee.ethz.ch/people/person-detail.html?persid=194234 Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=File:IBM_Neurosymbolic.pdf&diff=7046File:IBM Neurosymbolic.pdf2021-10-15T10:34:35Z<p>Abbas: </p>
<hr />
<div></div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=7045IBM Research2021-10-15T10:27:46Z<p>Abbas: /* Hybrid AI Systems (HAS) */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2021]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world (aka symbol grounding problem). Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples. To address this gap, '''neurosymbolic AI''' aims to combine the best of both worlds to approach human-level intelligence.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain a hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, the principles of such a HAS enable few-shot learning and extremely robust operation against failures, defects, variations, and noise, all of which are complementary to ultra-low-energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research in this direction (listed in the table below) spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
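The following minimal Python sketch is our own illustration (not code from the group) of the kind of basic operations on high-dimensional vectors referred to above: binding, bundling (superposition), and similarity search on random bipolar hypervectors. The dimensionality, the record example, and all names are assumptions chosen for clarity.<br />
<syntaxhighlight lang="python">
# Minimal sketch of vector-symbolic (hyperdimensional) operations.
import numpy as np

D = 10000                                   # hypervector dimensionality (assumed)
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding (e.g. variable-value association): element-wise multiplication."""
    return a * b

def bundle(*hvs):
    """Bundling (superposition): element-wise sign of the sum."""
    return np.sign(np.sum(hvs, axis=0))

def similarity(a, b):
    """Cosine similarity: near 0 for unrelated vectors, high for related ones."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Represent the record {colour: red, shape: circle} as a single hypervector.
colour, red, shape, circle = (random_hv() for _ in range(4))
record = bundle(bind(colour, red), bind(shape, circle))

# Unbinding with the 'colour' key recovers something close to 'red'.
print(similarity(bind(record, colour), red))     # roughly 0.7
print(similarity(bind(record, colour), circle))  # roughly 0
</syntaxhighlight>
Because random hypervectors in such high dimensions are nearly orthogonal, the unbinding in the last two lines reliably separates the stored value from unrelated ones; this is the property on which the few-shot learning and robustness claims above build.<br />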
<br />
===Useful Reading===<br />
*[https://www.youtube.com/watch?v=HhymId8dr5Q Neurosymbolic AI Explained, IBM-Research]<br />
*[https://www.youtube.com/watch?v=CYbkoAP5dME Neurosymbolic AI, Invited Talk by David Cox, IAAI /AAAI 2020] <br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
<!-- ==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
---><br />
<br />
==Available Projects==<br />
We invite students to apply to conduct their thesis (Bachelor's, semester, or Master's) or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Workload Type <br />
|-<br />
<br />
| MA/SA/BA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Zero-shot learning || TBD || algorithmic design<br />
|-<br />
<br />
| MA/SA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || Hardware design and experiments<br />
|-<br />
<br />
| MA/SA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || Hardware/algorithmic design<br />
|-<br />
<br />
| MA/SA/BA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || algorithmic design<br />
|-<br />
<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || algorithmic/hardware design<br />
|-<br />
<br />
| MA || Cryptography meets in-memory computing || [https://iis-projects.ee.ethz.ch/images/6/62/Y2021_10_Advertisement_VME.pdf Link to description] || algorithmic design and hardware experiments<br />
|-<br />
<br />
<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]], [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:kar@zurich.ibm.com Geethan Karunaratne]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [https://iis.ee.ethz.ch/people/person-detail.html?persid=194234 Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=File:IBM_Mixer.pdf&diff=6544File:IBM Mixer.pdf2021-05-16T16:32:05Z<p>Abbas: Abbas uploaded a new version of File:IBM Mixer.pdf</p>
<hr />
<div></div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6543IBM Research2021-05-16T16:29:26Z<p>Abbas: /* Available Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain a hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, the principles of such a HAS enable few-shot learning and extremely robust operation against failures, defects, variations, and noise, all of which are complementary to ultra-low-energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research in this direction (listed in the table below) spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are being sought in which computation and storage are collocated. A fascinating new approach is that of computational memory, where the physics of nanoscale memory devices is used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', conventional '''SRAM''' has also been proposed as the underlying storage technology in Computational Memories.<br />
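As a rough illustration of the collocation idea (a simplified Python sketch under our own assumptions, not a model of an actual device), the snippet below stores a weight matrix as differential conductances of a simulated crossbar and reads the matrix-vector product back as noisy column currents in a single step; the conductance range and noise level are arbitrary placeholders.<br />
<syntaxhighlight lang="python">
# Simplified in-memory matrix-vector multiplication on a simulated crossbar.
import numpy as np

rng = np.random.default_rng(1)

def program_conductances(W, g_max=25e-6):
    """Map a real-valued weight matrix onto a differential conductance pair."""
    w_abs_max = np.max(np.abs(W))
    g_pos = np.clip(W, 0, None) / w_abs_max * g_max    # positive weights
    g_neg = np.clip(-W, 0, None) / w_abs_max * g_max   # negative weights
    return g_pos, g_neg, w_abs_max / g_max             # scale to undo the mapping

def crossbar_mvm(g_pos, g_neg, scale, v, sigma=0.02):
    """Apply input voltages v and read the per-column currents (Ohm/Kirchhoff)."""
    i = v @ g_pos - v @ g_neg
    i += sigma * np.std(i) * rng.standard_normal(i.shape)   # device/readout noise
    return i * scale

W = rng.standard_normal((64, 32))
x = rng.standard_normal(64)

y_analog = crossbar_mvm(*program_conductances(W), x)
y_exact = x @ W
print(np.corrcoef(y_analog, y_exact)[0, 1])   # close to 1 despite the analog noise
</syntaxhighlight>
The multiply-accumulate happens in the analog domain in one shot; this is exactly where the DACs and ADCs mentioned in the project table below come in to move data across the digital/analog boundary.<br />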
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
<br />
| MA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || HAS || Hardware design and experiments<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || HAS || Hardware/algorithmic design<br />
|-<br />
<br />
| MA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || HAS || algorithmic/hardware design<br />
|-<br />
<br />
<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-Analog Converters ('''DAC'''s) and Analog-to-Digital Converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and the analog domain, in which computationally expensive tasks, such as Matrix-Vector Multiplications ('''MVM'''), are carried out with O(1) complexity. Each conversion costs a certain amount of energy, and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system-level requirements on the ADC and DAC for optimal performance of Deep Neural Network inference using CM. Furthermore, the effects of noise, non-linearity and manufacturing tolerances shall be examined, and countermeasures, such as periodic digital ADC recalibration and digital post-processing, shall be evaluated with regard to effectiveness and energy cost (an illustrative ENOB sketch follows this table).|| IMC || analog circuit design<br />
|}<br />
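The short sketch below illustrates the ENOB trade-off described in the ADC project above (illustrative only: the uniform quantiser, the matrix sizes and the bit widths are our assumptions, not project specifications). It quantises the ideal MVM outputs at several effective bit widths and reports the resulting relative error.<br />
<syntaxhighlight lang="python">
# Effect of ADC resolution (ENOB) on matrix-vector multiplication accuracy.
import numpy as np

rng = np.random.default_rng(2)

def quantize(x, enob):
    """Uniform quantiser over the observed range with 2**enob levels."""
    levels = 2 ** enob
    lo, hi = x.min(), x.max()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((x - lo) / step) * step

W = rng.standard_normal((256, 128))
x = rng.standard_normal(256)
y_ref = x @ W                                  # ideal MVM result

for enob in (4, 6, 8, 10):
    y_q = quantize(y_ref, enob)
    rel_err = np.linalg.norm(y_q - y_ref) / np.linalg.norm(y_ref)
    print(f"ENOB={enob:2d}  relative MVM error ~ {rel_err:.4f}")
</syntaxhighlight>
Sweeping ENOB in this way makes explicit how quickly the error shrinks with each additional effective bit, which is the kind of system-level requirement the project aims to quantify against the corresponding conversion energy.<br />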
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=File:IBM_Mixer.pdf&diff=6542File:IBM Mixer.pdf2021-05-16T16:28:55Z<p>Abbas: </p>
<hr />
<div></div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6541IBM Research2021-05-16T15:29:18Z<p>Abbas: /* Useful Reading */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain a hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, the principles of such a HAS enable few-shot learning and extremely robust operation against failures, defects, variations, and noise, all of which are complementary to ultra-low-energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research in this direction (listed in the table below) spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are being sought in which computation and storage are collocated. A fascinating new approach is that of computational memory, where the physics of nanoscale memory devices is used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', conventional '''SRAM''' has also been proposed as the underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
<br />
| MA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || HAS || Hardware design and experiments<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || HAS || Hardware/algorithmic design<br />
|-<br />
<br />
| MA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || HAS || algorithmic/hardware design<br />
|-<br />
<br />
<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-Analog Converters ('''DAC'''s) and Analog-to-Digital Converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and the analog domain, in which computationally expensive tasks, such as Matrix-Vector Multiplications ('''MVM'''), are carried out with O(1) complexity. Each conversion costs a certain amount of energy, and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system-level requirements on the ADC and DAC for optimal performance of Deep Neural Network inference using CM. Furthermore, the effects of noise, non-linearity and manufacturing tolerances shall be examined, and countermeasures, such as periodic digital ADC recalibration and digital post-processing, shall be evaluated with regard to effectiveness and energy cost.|| IMC || analog circuit design<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6540IBM Research2021-05-16T15:28:46Z<p>Abbas: /* Useful Reading */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain a hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, the principles of such a HAS enable few-shot learning and extremely robust operation against failures, defects, variations, and noise, all of which are complementary to ultra-low-energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research in this direction (listed in the table below) spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41467-021-22364-0 Robust high-dimensional memory-augmented neural networks], Nature Communications, 2021.<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are being sought in which computation and storage are collocated. A fascinating new approach is that of computational memory, where the physics of nanoscale memory devices is used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', conventional '''SRAM''' has also been proposed as the underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
<br />
| MA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || HAS || Hardware design and experiments<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || HAS || Hardware/algorithmic design<br />
|-<br />
<br />
| MA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || HAS || algorithmic/hardware design<br />
|-<br />
<br />
<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-Analog Converters ('''DAC'''s) and Analog-to-Digital Converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and the analog domain, in which computationally expensive tasks, such as Matrix-Vector Multiplications ('''MVM'''), are carried out with O(1) complexity. Each conversion costs a certain amount of energy, and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system-level requirements on the ADC and DAC for optimal performance of Deep Neural Network inference using CM. Furthermore, the effects of noise, non-linearity and manufacturing tolerances shall be examined, and countermeasures, such as periodic digital ADC recalibration and digital post-processing, shall be evaluated with regard to effectiveness and energy cost.|| IMC || analog circuit design<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6539IBM Research2021-05-16T15:26:50Z<p>Abbas: </p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain a hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, the principles of such a HAS enable few-shot learning and extremely robust operation against failures, defects, variations, and noise, all of which are complementary to ultra-low-energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research in this direction (listed in the table below) spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://arxiv.org/pdf/2010.01939.pdf Robust high-dimensional memory-augmented neural networks], arXiv, 2020<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are being sought in which computation and storage are collocated. A fascinating new approach is that of computational memory, where the physics of nanoscale memory devices is used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', conventional '''SRAM''' has also been proposed as the underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
<br />
| MA || Accelerating Mixers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/e/e8/IBM_Mixer.pdf Link to description] || HAS || Hardware design and experiments<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || HAS || Hardware/algorithmic design<br />
|-<br />
<br />
| MA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || HAS || algorithmic/hardware design<br />
|-<br />
<br />
<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-Analog Converters ('''DAC'''s) and Analog-to-Digital Converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and the analog domain, in which computationally expensive tasks, such as Matrix-Vector Multiplications ('''MVM'''), are carried out with O(1) complexity. Each conversion costs a certain amount of energy, and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system-level requirements on the ADC and DAC for optimal performance of Deep Neural Network inference using CM. Furthermore, the effects of noise, non-linearity and manufacturing tolerances shall be examined, and countermeasures, such as periodic digital ADC recalibration and digital post-processing, shall be evaluated with regard to effectiveness and energy cost.|| IMC || analog circuit design<br />
|}<br />
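<br />
The DAC/ADC precision trade-off described in the last project above can be illustrated in a few lines of NumPy. This is a minimal, hypothetical sketch (uniform quantizer, normalized signal ranges, no noise or non-linearity modeled), not the converter model used in the project:<br />
<pre><br />
# Minimal sketch (hypothetical parameters): how finite DAC/ADC resolution, i.e. a<br />
# limited effective number of bits (ENOB), bounds the accuracy of an analog MVM.<br />
import numpy as np<br />
<br />
rng = np.random.default_rng(0)<br />
<br />
def quantize(x, n_bits, full_scale=1.0):<br />
    """Uniform quantizer, clipped to [-full_scale, +full_scale]."""<br />
    step = 2 * full_scale / (2 ** n_bits)<br />
    return np.clip(np.round(x / step) * step, -full_scale, full_scale)<br />
<br />
W = rng.uniform(-1, 1, size=(256, 256))    # normalized weight matrix<br />
x = rng.uniform(-1, 1, size=256)           # normalized input activations<br />
y_ref = W @ x                              # ideal digital MVM<br />
<br />
for enob in (4, 6, 8):<br />
    x_dac = quantize(x, enob)                         # DAC limits the input precision<br />
    y_analog = W @ x_dac                              # accumulation inside the array<br />
    scale = np.max(np.abs(y_analog))                  # ADC full scale matched to the output<br />
    y_adc = quantize(y_analog / scale, enob) * scale  # ADC limits the read-out precision<br />
    err = np.linalg.norm(y_adc - y_ref) / np.linalg.norm(y_ref)<br />
    print(f"ENOB={enob}: relative MVM error = {err:.3f}")<br />
</pre><br />
How such conversion errors propagate to end-to-end network accuracy, together with noise and non-linearity, is the kind of question the project is set up to answer.<br />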
<br />
==Contact==<br />
: The thesis will be carried out at IBM Research–Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6233IBM Research2021-01-13T18:53:58Z<p>Abbas: /* Hybrid AI Systems (HAS) */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''This leads us to a novel information-processing architecture that combines the strengths of symbolic AI and neural networks, yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain a hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, the principles of such a HAS enable few-shot learning and extremely robust operation against failures, defects, variations, and noise, all of which are complementary to ultra-low-energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research in this direction (listed in the table below) spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
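<br />
To make these basic vector operations concrete, here is a minimal, self-contained Python sketch of binding, bundling, and similarity on random bipolar hypervectors; the dimensionality and the toy colour/shape record are purely illustrative and not tied to any specific project:<br />
<pre><br />
# Minimal sketch of the basic hyperdimensional/vector-symbolic operations:<br />
# binding, bundling, and similarity on random bipolar hypervectors.<br />
# The dimensionality and the toy colour/shape record are illustrative only.<br />
import numpy as np<br />
<br />
D = 10_000                                   # dimensionality of the hypervectors<br />
rng = np.random.default_rng(0)<br />
<br />
def rand_hv():                               # random bipolar hypervector<br />
    return rng.choice([-1, 1], size=D)<br />
<br />
def bind(a, b):                              # binding: elementwise multiplication<br />
    return a * b<br />
<br />
def bundle(*vs):                             # bundling: elementwise majority (sign of the sum)<br />
    return np.sign(np.sum(vs, axis=0))<br />
<br />
def sim(a, b):                               # normalized similarity in [-1, 1]<br />
    return float(a @ b) / D<br />
<br />
# Encode the record {colour: red, shape: circle} as a single hypervector.<br />
colour, shape, red, circle, blue = (rand_hv() for _ in range(5))<br />
record = bundle(bind(colour, red), bind(shape, circle))<br />
<br />
# Query "what is the colour?": binding with a bipolar vector is its own inverse.<br />
query = bind(record, colour)<br />
print(sim(query, red), sim(query, blue))     # clearly positive vs. close to zero<br />
</pre><br />
Because binding with a bipolar vector is its own inverse, the same operation that composes a data structure is also used to query it, which is what makes these representations easy to manipulate in a symbolic fashion.<br />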
<br />
===Useful Reading===<br />
*[https://arxiv.org/pdf/2010.01939.pdf Robust high-dimensional memory-augmented neural networks], arXiv, 2020<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and the scalability limits in terms of performance and power are being reached, alternative computing paradigms are being sought in which computation and storage are collocated. A fascinating new approach is that of computational memory, where the physics of nanoscale memory devices is used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies such as '''PCM''' and '''ReRAM''', conventional '''SRAM''' has also been proposed as an underlying storage technology for computational memory.<br />
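<br />
As a rough illustration of the idea, the following NumPy sketch mimics a matrix-vector multiplication carried out in place on a noisy crossbar of conductances; the array size and the 5% programming-noise figure are illustrative assumptions, not parameters of a real device:<br />
<pre><br />
# Minimal sketch of an in-memory matrix-vector multiplication on a crossbar:<br />
# weights are stored as device conductances G, inputs are applied as voltages,<br />
# and Kirchhoff's current law sums the per-column currents in a single step.<br />
# Array size and the 5% programming-noise figure are illustrative assumptions.<br />
import numpy as np<br />
<br />
rng = np.random.default_rng(0)<br />
rows, cols = 128, 64<br />
W = rng.normal(0, 1, size=(rows, cols))            # target weight matrix<br />
x = rng.normal(0, 1, size=rows)                    # input vector (applied as voltages)<br />
<br />
# Program the weights into conductances with ~5% write noise (device variability).<br />
G = W * (1 + 0.05 * rng.normal(size=W.shape))<br />
<br />
# "Analog" MVM: every multiply-accumulate happens in parallel inside the array,<br />
# so the operation takes a constant number of steps regardless of the matrix size.<br />
i_out = G.T @ x                                    # column currents read out by ADCs<br />
<br />
y_ref = W.T @ x                                    # exact digital reference<br />
err = np.linalg.norm(i_out - y_ref) / np.linalg.norm(y_ref)<br />
print(f"relative error due to device variability: {err:.3f}")<br />
</pre><br />
In a real array the read-out currents would additionally pass through the data converters discussed in the project table below, which adds quantization effects on top of the device variability modeled here.<br />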
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
<br />
<br />
| MA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || HAS || Hardware/algorithmic design<br />
|-<br />
<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || HAS || algorithmic/hardware design<br />
|-<br />
<br />
| MA || Machine learning based on optimal transport using in-memory computing || [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf Link to description] || HAS || hardware design<br />
|-<br />
<br />
<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-analog converters ('''DAC'''s) and analog-to-digital converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and the analog domain, in which computationally expensive tasks such as Matrix-Vector Multiplications ('''MVM'''s) are carried out with O(1) complexity. Each conversion costs a certain amount of energy, and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system-level requirements on the ADC and DAC for optimal performance of deep neural network inference using CM. Furthermore, the effects of noise, non-linearity, and manufacturing tolerances shall be examined, and countermeasures, such as periodic digital ADC recalibration and digital post-processing, shall be evaluated with regard to effectiveness and energy cost.|| IMC || analog circuit design<br />
|-<br />
| SA || Testing of a computational memory chip || style="text-align: justify;"| This project is about building a microprocessor/FPGA-based test platform around a novel IMC chip. After commissioning the chip, DL workload tests can be run to characterize its throughput and energy efficiency (a toy calculation of these two figures is sketched below the table).|| IMC || PCB design <br /> and/or<br /> µ-code implementation <br />
|-<br />
| MA/SA || Neural Network Training on a <br /> computational memory chip || style="text-align: justify;"|TBA || IMC || algorithm/system design<br />and/or<br /> analog circuit design<br />
|}<br />
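<br />
For the chip-testing project above, the throughput and energy-efficiency figures are simple derived quantities; the following toy calculation shows the arithmetic with made-up placeholder numbers (the workload, time, and power values are not measurements of any real chip):<br />
<pre><br />
# Toy calculation of throughput (TOPS) and energy efficiency (TOPS/W) from chip<br />
# measurements. All numbers are made-up placeholders, not real measurements.<br />
macs_per_inference = 512 * 512 * 10      # multiply-accumulates per inference (toy network)<br />
inferences = 1000                        # inferences run in the measured batch<br />
elapsed_s = 0.8                          # measured wall-clock time for the batch<br />
avg_power_w = 0.05                       # measured average power draw<br />
<br />
ops = 2 * macs_per_inference * inferences          # 1 MAC = 2 ops (multiply + add)<br />
throughput_tops = ops / elapsed_s / 1e12<br />
efficiency_tops_w = throughput_tops / avg_power_w<br />
print(f"throughput: {throughput_tops:.4f} TOPS, efficiency: {efficiency_tops_w:.2f} TOPS/W")<br />
</pre><br />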
<br />
==Contact==<br />
: The thesis will be carried out at IBM Research–Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6232IBM Research2021-01-13T18:52:34Z<p>Abbas: /* Useful Reading */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''This leads us to a novel information-processing architecture that combines the strengths of symbolic AI and neural networks, yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain a hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, the principles of such a HAS enable few-shot learning and extremely robust operation against failures, defects, variations, and noise, all of which are complementary to ultra-low-energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research in this direction (listed in the table below) spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://arxiv.org/pdf/2010.01939.pdf Robust high-dimensional memory-augmented neural networks], arXiv, 2020<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and the scalability limits in terms of performance and power are being reached, alternative computing paradigms are being sought in which computation and storage are collocated. A fascinating new approach is that of computational memory, where the physics of nanoscale memory devices is used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies such as '''PCM''' and '''ReRAM''', conventional '''SRAM''' has also been proposed as an underlying storage technology for computational memory.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
<br />
<br />
| MA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || HAS || Hardware/algorithmic design<br />
|-<br />
<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || HAS || algorithmic/hardware design<br />
|-<br />
<br />
| MA || Machine learning based on optimal transport using in-memory computing || [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf Link to description] || HAS || hardware design<br />
|-<br />
<br />
<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-analog converters ('''DAC'''s) and analog-to-digital converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and the analog domain, in which computationally expensive tasks such as Matrix-Vector Multiplications ('''MVM'''s) are carried out with O(1) complexity. Each conversion costs a certain amount of energy, and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system-level requirements on the ADC and DAC for optimal performance of deep neural network inference using CM. Furthermore, the effects of noise, non-linearity, and manufacturing tolerances shall be examined, and countermeasures, such as periodic digital ADC recalibration and digital post-processing, shall be evaluated with regard to effectiveness and energy cost.|| IMC || analog circuit design<br />
|-<br />
| SA || Testing of a computational memory chip || style="text-align: justify;"| This project is about building a Microprocessor/FPGA-based test platform around a novel IMC chip. After commissioning the chip, DL workload tests can be run to characterize its throughput and energy-efficiency.|| IMC || PCB Design <br /> and/or<br /> µ-code implementation <br />
|-<br />
| MA/SA || Neural Network Training on a <br /> computational memory chip || style="text-align: justify;"|TBA || IMC || algorithm/system design<br />and/or<br /> analog circuit design<br />
|}<br />
<br />
==Contact==<br />
: The thesis will be carried out at IBM Research–Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6231IBM Research2021-01-13T18:50:39Z<p>Abbas: /* Contact */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''This leads us to a novel information-processing architecture that combines the strengths of symbolic AI and neural networks, yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain a hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, the principles of such a HAS enable few-shot learning and extremely robust operation against failures, defects, variations, and noise, all of which are complementary to ultra-low-energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research in this direction (listed in the table below) spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://arxiv.org/pdf/2010.01939.pdf Robust High-dimensional Memory-augmented Neural Networks], arXiv, 2020<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and the scalability limits in terms of performance and power are being reached, alternative computing paradigms are being sought in which computation and storage are collocated. A fascinating new approach is that of computational memory, where the physics of nanoscale memory devices is used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies such as '''PCM''' and '''ReRAM''', conventional '''SRAM''' has also been proposed as an underlying storage technology for computational memory.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
<br />
<br />
| MA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || HAS || Hardware/algorithmic design<br />
|-<br />
<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || HAS || algorithmic/hardware design<br />
|-<br />
<br />
| MA || Machine learning based on optimal transport using in-memory computing || [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf Link to description] || HAS || hardware design<br />
|-<br />
<br />
<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-analog converters ('''DAC'''s) and analog-to-digital converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and the analog domain, in which computationally expensive tasks such as Matrix-Vector Multiplications ('''MVM'''s) are carried out with O(1) complexity. Each conversion costs a certain amount of energy, and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system-level requirements on the ADC and DAC for optimal performance of deep neural network inference using CM. Furthermore, the effects of noise, non-linearity, and manufacturing tolerances shall be examined, and countermeasures, such as periodic digital ADC recalibration and digital post-processing, shall be evaluated with regard to effectiveness and energy cost.|| IMC || analog circuit design<br />
|-<br />
| SA || Testing of a computational memory chip || style="text-align: justify;"| This project is about building a Microprocessor/FPGA-based test platform around a novel IMC chip. After commissioning the chip, DL workload tests can be run to characterize its throughput and energy-efficiency.|| IMC || PCB Design <br /> and/or<br /> µ-code implementation <br />
|-<br />
| MA/SA || Neural Network Training on a <br /> computational memory chip || style="text-align: justify;"|TBA || IMC || algorithm/system design<br />and/or<br /> analog circuit design<br />
|}<br />
<br />
==Contact==<br />
: The thesis will be carried out at IBM Research–Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6230IBM Research2021-01-13T18:50:07Z<p>Abbas: /* Available Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''This leads us to a novel information-processing architecture that combines the strengths of symbolic AI and neural networks, yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain a hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, the principles of such a HAS enable few-shot learning and extremely robust operation against failures, defects, variations, and noise, all of which are complementary to ultra-low-energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research in this direction (listed in the table below) spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://arxiv.org/pdf/2010.01939.pdf Robust High-dimensional Memory-augmented Neural Networks], arXiv, 2020<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and the scalability limits in terms of performance and power are being reached, alternative computing paradigms are being sought in which computation and storage are collocated. A fascinating new approach is that of computational memory, where the physics of nanoscale memory devices is used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies such as '''PCM''' and '''ReRAM''', conventional '''SRAM''' has also been proposed as an underlying storage technology for computational memory.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
<br />
<br />
| MA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA || Accelerating Transformers with Computational Memory || [http://iis-projects.ee.ethz.ch/images/b/be/IBM_TransfAcc.pdf Link to description] || HAS || Hardware/algorithmic design<br />
|-<br />
<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Link to description] || HAS || algorithmic/hardware design<br />
|-<br />
<br />
| MA || Machine learning based on optimal transport using in-memory computing || [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf Link to description] || HAS || hardware design<br />
|-<br />
<br />
<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-analog converters ('''DAC'''s) and analog-to-digital converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and the analog domain, in which computationally expensive tasks such as Matrix-Vector Multiplications ('''MVM'''s) are carried out with O(1) complexity. Each conversion costs a certain amount of energy, and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system-level requirements on the ADC and DAC for optimal performance of deep neural network inference using CM. Furthermore, the effects of noise, non-linearity, and manufacturing tolerances shall be examined, and countermeasures, such as periodic digital ADC recalibration and digital post-processing, shall be evaluated with regard to effectiveness and energy cost.|| IMC || analog circuit design<br />
|-<br />
| SA || Testing of a computational memory chip || style="text-align: justify;"| This project is about building a Microprocessor/FPGA-based test platform around a novel IMC chip. After commissioning the chip, DL workload tests can be run to characterize its throughput and energy-efficiency.|| IMC || PCB Design <br /> and/or<br /> µ-code implementation <br />
|-<br />
| MA/SA || Neural Network Training on a <br /> computational memory chip || style="text-align: justify;"|TBA || IMC || algorithm/system design<br />and/or<br /> analog circuit design<br />
|}<br />
<br />
==Contact==<br />
: The thesis will be carried out at IBM Research–Zurich in Rüschlikon<br />
; Hyperdimensional Computing (HDC) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=File:IBM_TransfAcc.pdf&diff=6229File:IBM TransfAcc.pdf2021-01-13T18:48:52Z<p>Abbas: </p>
<hr />
<div></div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6228IBM Research2021-01-13T18:47:21Z<p>Abbas: /* Available Projects */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''This leads us to a novel information-processing architecture that combines the strengths of symbolic AI and neural networks, yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain a hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, the principles of such a HAS enable few-shot learning and extremely robust operation against failures, defects, variations, and noise, all of which are complementary to ultra-low-energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research in this direction (listed in the table below) spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://arxiv.org/pdf/2010.01939.pdf Robust High-dimensional Memory-augmented Neural Networks], arXiv, 2020<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and the scalability limits in terms of performance and power are being reached, alternative computing paradigms are being sought in which computation and storage are collocated. A fascinating new approach is that of computational memory, where the physics of nanoscale memory devices is used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies such as '''PCM''' and '''ReRAM''', conventional '''SRAM''' has also been proposed as an underlying storage technology for computational memory.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
<br />
<br />
| MA || Face Recognition at Scale || [http://iis-projects.ee.ethz.ch/images/3/3d/IBM_FaceRec_at_Scale.pdf Link to description] || HAS || algorithmic design<br />
|-<br />
<br />
<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test'''] || HAS || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] || HAS || algorithmic/hardware design<br />
|-<br />
<br />
| MA|| Machine learning based on optimal transport using in-memory computing || [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing'''] || HAS || hardware design<br />
|-<br />
<br />
<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-Analog Converters ('''DAC'''s) and Analog-to-Digital Converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and the analog domain, in which computationally expensive tasks, like Matrix-Vector Multiplications ('''MVM'''), are carried out with O(1) complexity. Each conversion costs a certain amount of energy, and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system-level requirements on ADCs and DACs for optimal performance of Deep Neural Network inference using CM. Furthermore, the effects of noise, non-linearity, and manufacturing tolerances shall be examined, and countermeasures, such as periodic digital ADC recalibration and digital post-processing, shall be evaluated with regard to their effectiveness and energy cost (a toy model of ADC quantization in this setting is sketched below the table).|| IMC || analog circuit design<br />
|-<br />
| SA || Testing of a computational memory chip || style="text-align: justify;"| This project involves building a microprocessor/FPGA-based test platform around a novel IMC chip. After commissioning the chip, deep learning workloads can be run to characterize its throughput and energy efficiency.|| IMC || PCB design <br /> and/or<br /> µ-code implementation <br />
|-<br />
| MA/SA || Neural Network Training on a <br /> computational memory chip || style="text-align: justify;"|TBA || IMC || algorithm/system design<br />and/or<br /> analog circuit design<br />
|}<br />
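The following toy experiment accompanies the "ADC design for computational memory" row above: it quantizes the outputs of a small matrix-vector product with an ideal uniform B-bit converter and reports the resulting relative error. The bit-widths, full-scale margin, and quantizer model are illustrative assumptions only, not a characterization of any real converter.<br />
<pre>
# Toy model of the effect an output ADC has on an in-memory MVM: quantise the exact result with
# an ideal uniform B-bit converter and measure the relative error. Bit-widths, full-scale margin,
# and the quantiser model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def adc(values, bits, full_scale):
    """Ideal uniform quantiser: clip to the full-scale range and round to 2**bits levels."""
    step = 2.0 * full_scale / (2 ** bits)
    clipped = np.clip(values, -full_scale, full_scale - step)
    return np.round(clipped / step) * step

W = rng.standard_normal((16, 64))
x = rng.standard_normal(64)
y_exact = W @ x
full_scale = 1.1 * np.max(np.abs(y_exact))   # leave some headroom above the largest output

for bits in (4, 6, 8):
    y_quant = adc(y_exact, bits, full_scale)
    rel_err = np.linalg.norm(y_quant - y_exact) / np.linalg.norm(y_exact)
    print(f"{bits}-bit ADC: relative error {rel_err:.2%}")
</pre>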
<br />
==Contact==<br />
: The thesis will be carried out at IBM Research–Zurich in Rüschlikon<br />
; Hybrid AI Systems (HAS) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbas
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://arxiv.org/pdf/2010.01939.pdf Robust High-dimensional Memory-augmented Neural Networks], arXiv, 2020<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
<br />
<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test'''] || HAS || algorithmic design<br />
|-<br />
<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] || HAS || algorithmic/hardware design<br />
|-<br />
<br />
| MA|| Machine learning based on optimal transport using in-memory computing' || [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing'''] || HAS || hardware design<br />
|-<br />
<br />
<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-Analog converters ('''DAC'''s) and Analog-to-Digital converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and analog domain, in which computationally expensive tasks, like Matrix-Vector Multiplications ('''MVM'''), are carried out with O(1) complexity. Each conversion costs a certain amount of energy and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system level requirements on ADC and DAC for optimal performance of Deep Neural Network inference using CM. Furthermore, the effects of noise, non-linearity and manufacturing tolerances shall be examined and counter measurements, like for example periodic digital ADC recalibration and digital post processing, shall be evaluated with regards to effectivity and energy costs.|| IMC || analog circuit design<br />
|-<br />
| SA || Testing of a computational memory chip || style="text-align: justify;"| This project is about building a Microprocessor/FPGA-based test platform around a novel IMC chip. After commissioning the chip, DL workload tests can be run to characterize its throughput and energy-efficiency.|| IMC || PCB Design <br /> and/or<br /> µ-code implementation <br />
|-<br />
| MA/SA || Neural Network Training on a <br /> computational memory chip || style="text-align: justify;"|TBA || IMC || algorithm/system design<br />and/or<br /> analog circuit design<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hyperdimensional Computing (HDC) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6225IBM Research2021-01-13T18:37:37Z<p>Abbas: /* Hybrid AI Systems (HAS) */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research (listed in below table) awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://arxiv.org/pdf/2010.01939.pdf Robust High-dimensional Memory-augmented Neural Networks], arXiv, 2020<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] || HDC || algorithmic design<br />
|-<br />
| MA|| Machine learning based on optimal transport using in-memory computing' || [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing'''] || HDC || algorithmic design<br />
|-<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test'''] || HDC || algorithmic design<br />
|-<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-Analog converters ('''DAC'''s) and Analog-to-Digital converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and analog domain, in which computationally expensive tasks, like Matrix-Vector Multiplications ('''MVM'''), are carried out with O(1) complexity. Each conversion costs a certain amount of energy and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system level requirements on ADC and DAC for optimal performance of Deep Neural Network inference using CM. Furthermore, the effects of noise, non-linearity and manufacturing tolerances shall be examined and counter measurements, like for example periodic digital ADC recalibration and digital post processing, shall be evaluated with regards to effectivity and energy costs.|| IMC || analog circuit design<br />
|-<br />
| SA || Testing of a computational memory chip || style="text-align: justify;"| This project is about building a Microprocessor/FPGA-based test platform around a novel IMC chip. After commissioning the chip, DL workload tests can be run to characterize its throughput and energy-efficiency.|| IMC || PCB Design <br /> and/or<br /> µ-code implementation <br />
|-<br />
| MA/SA || Neural Network Training on a <br /> computational memory chip || style="text-align: justify;"|TBA || IMC || algorithm/system design<br />and/or<br /> analog circuit design<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hyperdimensional Computing (HDC) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6224IBM Research2021-01-13T18:36:54Z<p>Abbas: /* Useful Reading */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://arxiv.org/pdf/2010.01939.pdf Robust High-dimensional Memory-augmented Neural Networks], arXiv, 2020<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition], Nature Electronics, 2020. <br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors], Cognitive Computation, 2009.<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] || HDC || algorithmic design<br />
|-<br />
| MA|| Machine learning based on optimal transport using in-memory computing' || [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing'''] || HDC || algorithmic design<br />
|-<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test'''] || HDC || algorithmic design<br />
|-<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-Analog converters ('''DAC'''s) and Analog-to-Digital converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and analog domain, in which computationally expensive tasks, like Matrix-Vector Multiplications ('''MVM'''), are carried out with O(1) complexity. Each conversion costs a certain amount of energy and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system level requirements on ADC and DAC for optimal performance of Deep Neural Network inference using CM. Furthermore, the effects of noise, non-linearity and manufacturing tolerances shall be examined and counter measurements, like for example periodic digital ADC recalibration and digital post processing, shall be evaluated with regards to effectivity and energy costs.|| IMC || analog circuit design<br />
|-<br />
| SA || Testing of a computational memory chip || style="text-align: justify;"| This project is about building a Microprocessor/FPGA-based test platform around a novel IMC chip. After commissioning the chip, DL workload tests can be run to characterize its throughput and energy-efficiency.|| IMC || PCB Design <br /> and/or<br /> µ-code implementation <br />
|-<br />
| MA/SA || Neural Network Training on a <br /> computational memory chip || style="text-align: justify;"|TBA || IMC || algorithm/system design<br />and/or<br /> analog circuit design<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hyperdimensional Computing (HDC) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6223IBM Research2021-01-13T18:35:36Z<p>Abbas: /* Useful Reading */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing], Nature Electronics, 2020.<br />
*[https://arxiv.org/pdf/2010.01939.pdf Robust High-dimensional Memory-augmented Neural Networks]<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition]<br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors]<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on this exciting new topic. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] || HDC || algorithmic design<br />
|-<br />
| MA|| Machine learning based on optimal transport using in-memory computing' || [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing'''] || HDC || algorithmic design<br />
|-<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test'''] || HDC || algorithmic design<br />
|-<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-Analog converters ('''DAC'''s) and Analog-to-Digital converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to handle the crossing between the digital and analog domain, in which computationally expensive tasks, like Matrix-Vector Multiplications ('''MVM'''), are carried out with O(1) complexity. Each conversion costs a certain amount of energy and its precision can only be guaranteed up to the Effective Number of Bits ('''ENOB''') of the employed data converter. <br /> The research focus will be on understanding the system level requirements on ADC and DAC for optimal performance of Deep Neural Network inference using CM. Furthermore, the effects of noise, non-linearity and manufacturing tolerances shall be examined and counter measurements, like for example periodic digital ADC recalibration and digital post processing, shall be evaluated with regards to effectivity and energy costs.|| IMC || analog circuit design<br />
|-<br />
| SA || Testing of a computational memory chip || style="text-align: justify;"| This project is about building a Microprocessor/FPGA-based test platform around a novel IMC chip. After commissioning the chip, DL workload tests can be run to characterize its throughput and energy-efficiency.|| IMC || PCB Design <br /> and/or<br /> µ-code implementation <br />
|-<br />
| MA/SA || Neural Network Training on a <br /> computational memory chip || style="text-align: justify;"|TBA || IMC || algorithm/system design<br />and/or<br /> analog circuit design<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hyperdimensional Computing (HDC) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6222IBM Research2021-01-13T18:35:19Z<p>Abbas: /* Useful Reading */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''It leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, and yet has novel emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain hybrid AI system (HAS) that makes it possible to represent and manipulate data in ways familiar to us from symbolic AI, and to learn from the statistics of data in ways familiar to us from artificial neural networks and deep learning. Further, principles of such HAS allow few-shot learning capabilities, and extremely robust operations against failures, defects, variations, and noise, all of which are complementary to ultra-low energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research awaiting in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing, Nature Electronics, 2020.]<br />
*[https://arxiv.org/pdf/2010.01939.pdf Robust High-dimensional Memory-augmented Neural Networks]<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition]<br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors]<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and their memory units, which are physically separated. As computation becomes increasingly data-centric and as the scalability limits in terms of performance and power are being reached, alternative computing paradigms are searched for in which computation and storage are collocated. A fascinating new approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies like '''PCM''' and '''ReRAM''', also conventional '''SRAM''' has been proposed as underlying storage technology in Computational Memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students who would like to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on these topics. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] || HDC || algorithmic design<br />
|-<br />
| MA || Machine learning based on optimal transport using in-memory computing || [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing'''] || HDC || algorithmic design<br />
|-<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test'''] || HDC || algorithmic design<br />
|-<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-analog converters ('''DAC'''s) and analog-to-digital converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to cross between the digital domain and the analog domain, in which computationally expensive tasks such as matrix-vector multiplications ('''MVM''') are carried out with O(1) time complexity. Each conversion costs energy, and its precision is only guaranteed up to the Effective Number of Bits ('''ENOB''') of the data converter employed (see the sketch below this table). <br /> The research focus will be on understanding the system-level requirements on the ADC and DAC for optimal performance of deep neural network inference using CM. Furthermore, the effects of noise, non-linearity and manufacturing tolerances shall be examined, and countermeasures such as periodic digital ADC recalibration and digital post-processing shall be evaluated with regard to effectiveness and energy cost.|| IMC || analog circuit design<br />
|-<br />
| SA || Testing of a computational memory chip || style="text-align: justify;"| This project is about building a Microprocessor/FPGA-based test platform around a novel IMC chip. After commissioning the chip, DL workload tests can be run to characterize its throughput and energy-efficiency.|| IMC || PCB Design <br /> and/or<br /> µ-code implementation <br />
|-<br />
| MA/SA || Neural Network Training on a <br /> computational memory chip || style="text-align: justify;"|TBA || IMC || algorithm/system design<br />and/or<br /> analog circuit design<br />
|}<br />
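<br />
For the ADC project above, the limiting quantity is the effective number of bits. The short sketch below (an illustrative model with assumed parameter values, not project code) estimates the ENOB of an ideal quantizer preceded by additive noise, using the standard relation ENOB = (SINAD - 1.76 dB) / 6.02 dB:<br />
<pre>
# Estimate the effective number of bits (ENOB) of a noisy quantizer.
import numpy as np

def enob(adc_bits, noise_rms, n=2**14):
    """ENOB of an ideal adc_bits quantizer preceded by additive Gaussian noise.

    A full-scale sine in [-1, 1] serves as the test signal; all values are illustrative.
    """
    rng = np.random.default_rng(0)
    t = np.arange(n)
    x = np.sin(2 * np.pi * 127 / n * t)                 # coherently sampled test tone
    noisy = x + noise_rms * rng.standard_normal(n)
    levels = 2 ** (adc_bits - 1) - 1
    q = np.clip(np.round(noisy * levels), -levels, levels) / levels
    err = q - x
    sinad_db = 10 * np.log10(np.mean(x**2) / np.mean(err**2))
    return (sinad_db - 1.76) / 6.02

print(round(enob(8, 0.00), 2))   # close to 8 bits for a noiseless converter
print(round(enob(8, 0.01), 2))   # front-end noise costs effective bits
</pre>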
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hyperdimensional Computing (HDC) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=6221IBM Research2021-01-13T18:34:29Z<p>Abbas: /* Hyperdimensional Computing (HDC) */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. <!--Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
--><br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
==Hybrid AI Systems (HAS)==<br />
<br />
[[File:NatureElectronics20.jpg|thumb|right|200px]]<br />
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing upon the rich framework of symbolic computation to manipulate internal representations in order to perform reasoning and inference. But it suffers from being non-adaptive, lacking the ability to learn from example or by direct observation of the world. Neural networks on the other hand have the ability to learn from data, and derive much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input-output relationships. Without the richness of symbolic computation, neural nets lack the simple but powerful operations such as variable binding that allow for analogy making and reasoning, which underlie the ability to generalize from few examples.<br />
<br />
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. ''This leads us to a novel information-processing architecture that combines the strengths of symbolic AI and neural networks, yet has emergent properties of its own.'' By combining a small set of basic operations on high-dimensional vectors, we obtain a hybrid AI system (HAS) that can represent and manipulate data in ways familiar from symbolic AI, and learn from the statistics of data in ways familiar from artificial neural networks and deep learning. Furthermore, the principles of such a HAS enable few-shot learning and extremely robust operation in the presence of failures, defects, variations, and noise, both of which are complementary to ultra-low-energy computation on nanoscale fabrics such as phase-change memory devices. Exciting research in this direction spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.<br />
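<br />
As a complementary toy illustration of the few-shot aspect mentioned above (synthetic data and arbitrary parameters of our choosing, not IBM code), class prototypes can be formed by simply bundling a few encoded examples and then queried by nearest-neighbour search:<br />
<pre>
# Illustrative few-shot classification with high-dimensional class prototypes.
import numpy as np

D, n_features, n_classes, shots = 10_000, 16, 3, 5
rng = np.random.default_rng(1)
proj = rng.standard_normal((D, n_features))      # fixed random projection

def encode(x):
    """Map a feature vector to a bipolar hypervector."""
    return np.sign(proj @ x)

# One prototype per class: bundle (sum) the encodings of a few support examples.
class_means = 2.0 * rng.standard_normal((n_classes, n_features))
prototypes = np.zeros((n_classes, D))
for c in range(n_classes):
    for _ in range(shots):
        prototypes[c] += encode(class_means[c] + rng.standard_normal(n_features))

# Classify an unseen example by maximum similarity to the stored prototypes.
query = encode(class_means[1] + rng.standard_normal(n_features))
print(int(np.argmax(prototypes @ query)))        # expected: 1
</pre>
Prototype construction and lookup of this kind are dense vector operations that map naturally onto the in-memory hardware discussed in the reading list below.<br />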
<br />
===Useful Reading===<br />
*[https://www.nature.com/articles/s41928-020-0410-3 In-memory hyperdimensional computing]<br />
*[https://arxiv.org/pdf/2010.01939.pdf Robust High-dimensional Memory-augmented Neural Networks]<br />
*[https://www.nature.com/articles/s41928-020-00510-8 A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition]<br />
*[https://link.springer.com/article/10.1007/s12559-009-9009-8 Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors]<br />
<br />
===Prerequisites===<br />
*Python<br />
*Background in machine learning (''recommended'')<br />
*Experience with any deep learning framework such as TensorFlow or PyTorch (''recommended'')<br />
*VLSI I (''recommended'')<br />
<br />
==In-Memory Computing (IMC)==<br />
<br />
[[File:NNcover_imc.jpg|thumb|right|200px]]<br />
For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their physically separated processing and memory units. As computation becomes increasingly data-centric and the scalability limits in terms of performance and power are being reached, alternative computing paradigms are being sought in which computation and storage are collocated. A fascinating new approach is that of computational memory, where the physics of nanoscale memory devices is exploited to perform certain computational tasks within the memory unit in a non-von Neumann manner. '''Computational Memory''' (CM) is finding application in a variety of areas such as machine learning and signal processing. In addition to novel non-volatile memory technologies such as '''PCM''' and '''ReRAM''', conventional '''SRAM''' has also been proposed as the underlying storage technology for computational memories.<br />
<br />
===Useful Reading===<br />
*[https://ieeexplore.ieee.org/document/9288639 An SRAM-Based Multibit In-Memory Matrix-Vector Multiplier With a Precision That Scales Linearly in Area, Time, and Power]<br />
*[https://www.nature.com/articles/s41565-020-0655-z Memory devices and applications for in-memory computing]<br />
*[https://ieeexplore.ieee.org/abstract/document/8865099 Deep learning acceleration based on in-memory computing]<br />
<br />
===Prerequisites===<br />
*General interest in Deep Learning and memory/system design<br />
*VLSI I and VLSI II (''recommended'')<br />
*Analog Circuit Design (''recommended'')<br />
Specific requirements for the different projects vary and are generally negotiable.<br />
<br />
==Available Projects==<br />
We are inviting applications from students who would like to conduct their master’s thesis work or an internship project at the IBM Research lab in Zurich on these topics. <br />
<!--<br />
The work performed could span low-level hardware experiments on phase-change memory chips comprising more than 1 million devices to high-level algorithmic development in a deep learning framework such as TensorFlow or PyTorch. It also involves interactions with several researchers across IBM research focusing on various aspects of the project. The ideal candidate should have a multi-disciplinary background, strong mathematical aptitude and programming skills. Prior knowledge on emerging memory technologies such as phase-change memory is a bonus but not necessary.<br />
--><br />
<br />
{| class="wikitable" style="text-align: center;"<br />
|-<br />
! style="width: 5%;"|Type !! style="width: 20%"|Project !! style="width: 60%"|Description !! style="width: 5%"|Topic !! style="width: 15%"|Workload Type <br />
|-<br />
| MA|| Lifelong learning challenge || [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] || HDC || algorithmic design<br />
|-<br />
| MA || Machine learning based on optimal transport using in-memory computing || [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing'''] || HDC || algorithmic design<br />
|-<br />
| MA || Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test || [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test'''] || HDC || algorithmic design<br />
|-<br />
| MA || Accurate deep learning inference using computational memory || [https://www.zurich.ibm.com/careers/2020_027.html Project description and application] || IMC || algorithmic design<br />
|-<br />
| MA || ADC design for computational memory || style="text-align: justify;"|Digital-to-analog converters ('''DAC'''s) and analog-to-digital converters ('''ADC'''s) are extensively employed in Computational Memory ('''CM''') to cross between the digital domain and the analog domain, in which computationally expensive tasks such as matrix-vector multiplications ('''MVM''') are carried out with O(1) time complexity. Each conversion costs energy, and its precision is only guaranteed up to the Effective Number of Bits ('''ENOB''') of the data converter employed. <br /> The research focus will be on understanding the system-level requirements on the ADC and DAC for optimal performance of deep neural network inference using CM. Furthermore, the effects of noise, non-linearity and manufacturing tolerances shall be examined, and countermeasures such as periodic digital ADC recalibration and digital post-processing shall be evaluated with regard to effectiveness and energy cost.|| IMC || analog circuit design<br />
|-<br />
| SA || Testing of a computational memory chip || style="text-align: justify;"| This project is about building a Microprocessor/FPGA-based test platform around a novel IMC chip. After commissioning the chip, DL workload tests can be run to characterize its throughput and energy-efficiency.|| IMC || PCB Design <br /> and/or<br /> µ-code implementation <br />
|-<br />
| MA/SA || Neural Network Training on a <br /> computational memory chip || style="text-align: justify;"|TBA || IMC || algorithm/system design<br />and/or<br /> analog circuit design<br />
|}<br />
<br />
==Contact==<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
; Hyperdimensional Computing (HDC) projects<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]<br />
; In-Memory Computing (IMC) projects<br />
: Contact (at IBM/ETH Zurich): [[:User:rid | Riduan Khaddam-Aljameh]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian]<br />
: Professor: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=5434IBM Research2020-08-26T09:35:20Z<p>Abbas: /* Short Description */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf '''Artificial general intelligence (AGI):''' lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf '''Machine learning based on optimal transport using in-memory computing''']<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of '''Strong AI for Intelligence Quotient (IQ) Test''']<br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
<br />
===Status: Available===<br />
: Looking for 2 Master students<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=5433IBM Research2020-08-25T15:28:01Z<p>Abbas: /* Short Description */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Artificial general intelligence (AGI): lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf Machine learning based on optimal transport using in-memory computing]<br />
* [http://iis-projects.ee.ethz.ch/images/4/4b/IBM_RPM.pdf Developing Efficient Models of Strong AI for Intelligence Quotient (IQ) Test]<br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
<br />
===Status: Available===<br />
: Looking for 2 Master students<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=File:IBM_RPM.pdf&diff=5432File:IBM RPM.pdf2020-08-25T15:27:33Z<p>Abbas: </p>
<hr />
<div></div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=5417IBM Research2020-08-19T18:59:12Z<p>Abbas: /* Short Description */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
Today, we are entering the era of cognitive computing, which holds great promise in deriving intelligence and knowledge from huge volumes of data. In today’s computers based on von Neumann architecture, huge amounts of data need to be shuttled back and forth at high speeds, a task at which this architecture is inefficient.<br />
<br />
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich in the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group], we explore various such computing paradigms from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications. Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Artificial general intelligence (AGI): lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf Machine learning based on optimal transport using in-memory computing]<br />
<br />
===About the IBM Research–Zurich===<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
<br />
===Status: Available===<br />
: Looking for 2 Master students<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=5367IBM Research2020-08-18T10:10:03Z<p>Abbas: /* Short Description */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
At IBM Research–Zurich, the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group] is working on devising novel computing systems with a focus on AI and Cloud applications. Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Artificial general intelligence (AGI): lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf Machine learning based on optimal transport using in-memory computing]<br />
<br />
<br />
<br />
===Status: Available===<br />
: Looking for 2 Master students<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=5366IBM Research2020-08-18T10:09:37Z<p>Abbas: /* Status: Available */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
At IBM Research–Zurich, the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group] is working on devising novel computing systems with a focus on AI and Cloud applications. Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Artificial general intelligence (AGI): lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf Machine learning based on optimal transport using in-memory computing]<br />
<br />
<br />
<br />
===Status: Available===<br />
: Looking for 2 Master students<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]] and [[:User:Herschmi | Michael Hersche]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi]<br />
<br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=Main_Page&diff=5360Main Page2020-08-11T14:52:13Z<p>Abbas: /* Digital Circuits and Systems Group (Prof. Benini) */</p>
<hr />
<div>__NOTOC__<br />
<CENTER><H1> Welcome to IIS-Projects</H1></CENTER><br />
On this page you will find student and research projects at the [http://www.iis.ee.ethz.ch Integrated Systems Laboratory] of [http://www.ethz.ch ETH Zurich].<br />
<br />
==Institute Organization==<br />
The IIS Consists of 5 main research groups<br />
* [[Analog| Analog and Mixed Signal Design]]<br />
* [[Digital| Digital Circuits and Systems]]<br />
* [[Energy Efficient Circuits and IoT Systems Group| Energy Efficient Circuits and IoT Systems]]<br />
* [[:Category:Nano-TCAD|Nano-TCAD]]<br />
* [[:Category:Physical Characterization|Physical Characterization]]<br />
<br />
===[[Analog|Analog and Mixed Signal Design Group (Prof. Huang)]]===<br />
* [[Analog IC Design]]<br />
* [[Biomedical System on Chips]]<br />
* [[RF SoCs for the Internet of Things]]<br />
* [[High-Performance & V2X Cellular Communications]]<br />
<br />
===[[Digital|Digital Circuits and Systems Group (Prof. Benini)]]===<br />
* [[High Performance SoCs]]<br />
* [[Computer Architecture]]<br />
* [[Acceleration and Transprecision]]<br />
* [[Heterogeneous Acceleration Systems]]<br />
* [[Event-Driven Computing]]<br />
* [[Predictable Execution]]<br />
* [[Low Power Embedded Systems and Wireless Sensors Networks]]<br />
* [[Embedded Artificial Intelligence:Systems And Applications]]<br />
* [[Students' International Competitions: F1(AMZ), Swissloop, Educational Rockets]]<br />
* [[Transient Computing]]<br />
* [[RF SoCs for the Internet of Things]]<br />
* [[Energy Efficient Autonomous UAVs]]<br />
* [[Biomedical System on Chips]]<br />
* [[Digital Medical Ultrasound Imaging]]<br />
* [[Cryptography|Cryptographic Hardware]]<br />
* [[Deep Learning Projects|Deep Learning Acceleration]]<br />
* [[Human Intranet]]<br />
* [[IBM Research]]<br />
<br />
===[[Energy Efficient Circuits and IoT Systems Group| Energy Efficient Circuits and IoT Systems Group (Prof. Jang)]]===<br />
<DynamicPageList><br />
category = EECIS<br />
category = Available<br />
category = Hot<br />
</DynamicPageList><br />
<br />
===[[:Category:Nano-TCAD|Nano-TCAD Group (Prof. Luisier)]]===<br />
<DynamicPageList><br />
category = Nano-TCAD<br />
category = Available<br />
category = Hot<br />
</DynamicPageList><br />
<br />
===[[:Category:Physical Characterization|Physical Characterization Group (Dr.Ciappa)]]===<br />
<DynamicPageList><br />
category = Physical Characterization<br />
category = Available<br />
category = Hot<br />
</DynamicPageList><br />
<br />
===[[:Category:Collaboration|Collaborations with other groups/departments]]===<br />
<DynamicPageList><br />
category = Collaboration<br />
category = Available<br />
</DynamicPageList><br />
<br />
==Selected Projects in Progress==<br />
''For a complete list, see [[:Category:In progress|Projects in Progress]].''<br />
<DynamicPageList><br />
count = 5<br />
category = In progress<br />
</DynamicPageList><br />
<br />
==Selected Completed Projects==<br />
''For a complete list, see [[:Category:Completed|Completed Projects]].''<br />
<DynamicPageList><br />
count = 5<br />
category = Completed<br />
</DynamicPageList><br />
<br />
==Selected Research Projects==<br />
''For a complete list, see [[:Category:Research|Research Projects]].''<br />
<DynamicPageList><br />
count = 5<br />
category = Completed<br />
</DynamicPageList><br />
<br />
==Links to Other IIS Webpages==<br />
; [http://www.iis.ee.ethz.ch http://www.iis.ee.ethz.ch] <br />
: Integrated Systems Laboratory Main homepage<br />
; [http://www.nano-tcad.ethz.ch http://www.nano-tcad.ethz.ch] <br />
:Nano-TCAD group homepage<br />
; [http://www.dz.ee.ethz.ch http://www.dz.ee.ethz.ch]<br />
: Microelectronics Design Center<br />
; [http://asic.ethz.ch/cg http://asic.ethz.ch/cg]<br />
: The IIS-ASIC Chip Gallery<br />
; [http://eda.ee.ethz.ch http://eda.ee.ethz.ch]<br />
: EDA Wiki (''ETH Zurich internal access only!'')</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=5359IBM Research2020-08-11T14:50:06Z<p>Abbas: /* Short Description */</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
At IBM Research–Zurich, the [https://www.zurich.ibm.com/sto/memory/ Neuromorphic and In-memory Computing Group] is working on devising novel computing systems with a focus on AI and Cloud applications. Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Artificial general intelligence (AGI): lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf Machine learning based on optimal transport using in-memory computing]<br />
<br />
<br />
<br />
===Status: Available===<br />
: Looking for 2 Master students<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
: Contact (at ETH Zurich): [[:User:kgf | Dr. Frank K. Gurkaynak]]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi] <br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research%E2%80%93Zurich&diff=5358IBM Research–Zurich2020-08-10T15:40:00Z<p>Abbas: Blanked the page</p>
<hr />
<div></div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research&diff=5357IBM Research2020-08-10T15:39:47Z<p>Abbas: Created page with "Category:Digital Category:Semester Thesis Category:Master Thesis Category:Available Category:2020 Category:Hot center ==Short..."</p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
At IBM Research–Zurich the In-memory Computing and Neuromorphic Devices group is working on devising novel computing systems with a focus on AI and Cloud applications. Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Artificial general intelligence (AGI): lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf Machine learning based on optimal transport using in-memory computing]<br />
<br />
<br />
<br />
===Status: Available===<br />
: Looking for 2 Master students<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
: Contact (at ETH Zurich): [mailto:lbenini@iis.ee.ethz.ch Prof. Dr. Luca Benini]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi] <br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research%E2%80%93Zurich&diff=5356IBM Research–Zurich2020-08-10T15:37:20Z<p>Abbas: </p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:IBM Research-Zurich]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
At IBM Research–Zurich the In-memory Computing and Neuromorphic Devices group is working on devising novel computing systems with a focus on AI and Cloud applications. Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Artificial general intelligence (AGI): lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf Machine learning based on optimal transport using in-memory computing]<br />
<br />
<br />
<br />
===Status: Available===<br />
: Looking for 2 Master students<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
: Contact (at ETH Zurich): [mailto:lbenini@iis.ee.ethz.ch Prof. Dr. Luca Benini]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi] <br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=IBM_Research%E2%80%93Zurich&diff=5355IBM Research–Zurich2020-08-10T15:36:17Z<p>Abbas: </p>
<hr />
<div>[[Category:Digital]]<br />
[[Category:IBM Research-Zurich]]<br />
[[Category:Semester Thesis]]<br />
[[Category:Master Thesis]] <br />
[[Category:Available]] <br />
[[Category:2020]]<br />
[[Category:Hot]]<br />
[[Category:Human Intranet]]<br />
[[File:IBM_ZRLab.png|center]]<br />
<br />
==Short Description==<br />
The location in Zurich is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the mission of the Zurich Lab, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, be one of the premier places to work for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda. [https://www.zurich.ibm.com/pdf/ZRL_Fact_Sheet.pdf Download factsheet]<br />
<br />
At IBM Research–Zurich the In-memory Computing and Neuromorphic Devices group is working on devising novel computing systems with a focus on AI and Cloud applications. Here is a list of available projects:<br />
<br />
* [http://iis-projects.ee.ethz.ch/images/0/01/IBM_MANN_Y2020.pdf Artificial general intelligence (AGI): lifelong learning challenge] <br />
* [http://iis-projects.ee.ethz.ch/images/3/33/IBM_OT_Y2020.pdf Machine learning based on optimal transport using in-memory computing]<br />
<br />
<br />
<br />
===Status: Available===<br />
: Looking for 2 Master students<br />
: Thesis will be at IBM Zurich in Rüschlikon<br />
: Contact (at ETH Zurich): [mailto:lbenini@iis.ee.ethz.ch Prof. Dr. Luca Benini]<br />
: Contact (at IBM): [mailto:ase@zurich.ibm.com Dr. Abu Sebastian] <br />
: Contact (at IBM): [mailto:abr@zurich.ibm.com Dr. Abbas Rahimi] <br />
===Professor===<br />
: [http://www.iis.ee.ethz.ch/portrait/staff/lbenini.en.html Luca Benini]</div>Abbashttp://iis-projects.ee.ethz.ch/index.php?title=Digital&diff=5354Digital2020-08-10T15:35:24Z<p>Abbas: /* Topics for Projects */</p>
<hr />
<div>__NOTOC__<br />
<imagemap><br />
Image:Project_Map_2018_08.png|1024px<br />
rect 10 12 350 182 [[Computer Architecture]]<br />
rect 350 12 690 182 [[Acceleration and Transprecision]]<br />
rect 690 12 1030 182 [[Heterogeneous Acceleration Systems]]<br />
rect 1030 12 1370 182 [[Event-Driven Computing]]<br />
rect 1370 12 1710 182 [[Predictable Execution]]<br />
rect 10 182 350 352 [[Embedded Artificial Intelligence:Systems And Applications]]<br />
rect 350 182 690 352 [[Transient Computing]]<br />
rect 1030 182 1370 352 [[RF SoCs for the Internet of Things]]<br />
rect 1370 182 1710 352 [[Energy Efficient Autonomous UAVs]]<br />
rect 10 352 350 522 [[Biomedical System on Chips]]<br />
rect 350 352 690 522 [[Digital Medical Ultrasound Imaging]]<br />
rect 690 352 1030 522 [[Cryptography|Cryptographic Hardware]]<br />
rect 1030 352 1370 522 [[Deep Learning Projects|Deep Learning Acceleration]]<br />
rect 1370 352 1710 522 [[Human Intranet]]<br />
default [[Digital]]<br />
</imagemap><br />
<br />
<br />
==Topics for Projects==<br />
* '''[[High Performance SoCs]]'''<br />
* '''[[Computer Architecture]]'''<br />
* '''[[Acceleration and Transprecision]]'''<br />
* '''[[Heterogeneous Acceleration Systems]]'''<br />
* '''[[Event-Driven Computing]]'''<br />
* '''[[Predictable Execution]]'''<br />
* '''[[Low Power Embedded Systems]]'''<br />
* '''[[Embedded Artificial Intelligence:Systems And Applications]]'''<br />
* '''[[Transient Computing]]'''<br />
* '''[[RF SoCs for the Internet of Things]]'''<br />
* '''[[Energy Efficient Autonomous UAVs]]'''<br />
* '''[[Biomedical System on Chips]]'''<br />
* '''[[Digital Medical Ultrasound Imaging]]'''<br />
* '''[[Cryptography|Cryptographic Hardware]]'''<br />
* '''[[Deep Learning Projects|Deep Learning Acceleration]]'''<br />
* '''[[Human Intranet]]'''<br />
* '''[[IBM Research]]'''<br />
<br />
==Active Projects==<br />
These are the projects that are currently active<br />
<DynamicPageList><br />
category = In progress<br />
category = Digital<br />
</DynamicPageList><br />
<br />
==Completed Projects==<br />
These are projects that were completed in the last few years<br />
===2019===<br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = 2019<br />
suppresserrors=true<br />
</DynamicPageList><br />
===2018===<br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = 2018<br />
suppresserrors=true<br />
</DynamicPageList><br />
===2017===<br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = 2017<br />
suppresserrors=true<br />
</DynamicPageList><br />
===2016===<br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = 2016<br />
suppresserrors=true<br />
</DynamicPageList><br />
===2015===<br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = 2015<br />
suppresserrors=true<br />
</DynamicPageList><br />
===2014===<br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = 2014<br />
</DynamicPageList><br />
===2013===<br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = 2013<br />
</DynamicPageList><br />
===2012===<br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = 2012<br />
</DynamicPageList><br />
===2011===<br />
<DynamicPageList><br />
category = Completed<br />
category = Digital<br />
category = 2011<br />
</DynamicPageList><br />
[[Category:Computer Architecture]]<br />
[[Category:Acceleration and Transprecision]]<br />
[[Category:Heterogeneous Acceleration Systems]]<br />
[[Category:Event-Driven Computing]]<br />
[[Category:Predictable Execution]]<br />
[[Category:Low Power Embedded Systems]]<br />
[[Category:Embedded Artificial Intelligence:Systems And Applications]]<br />
[[Category:Transient Computing]]<br />
[[Category:System on Chips for IoTs]]<br />
[[Category:Energy Efficient Autonomous UAVs]]<br />
[[Category:Biomedical System on Chips]]<br />
[[Category:Digital Medical Ultrasound Imaging]]<br />
[[Category:Cryptography]]<br />
[[Category:Deep Learning Acceleration]]<br />
[[Category:Human Intranet]]</div>Abbas