Today, we are entering the era of cognitive computing, which holds great promise for deriving intelligence and knowledge from huge volumes of data. In today’s computers based on the von Neumann architecture, huge amounts of data must be shuttled back and forth between memory and processor at high speed, a task at which this architecture is inefficient.
It is becoming increasingly clear that to build efficient cognitive computers, we need to transition to non-von Neumann architectures in which memory and processing coexist in some form. At IBM Research–Zurich, the Neuromorphic and In-memory Computing Group explores various such computing paradigms, from in-memory computing to brain-inspired neuromorphic computing. Our research spans from devices and architectures to algorithms and applications.
About IBM Research–Zurich
The Zurich location is one of IBM’s 12 global research labs. IBM has maintained a research laboratory in Switzerland since 1956. As the first European branch of IBM Research, the Zurich Lab’s mission, in addition to pursuing cutting-edge research for tomorrow’s information technology, is to cultivate close relationships with academic and industrial partners, to be one of the premier workplaces for world-class researchers, to promote women in IT and science, and to help drive Europe’s innovation agenda.
Hybrid AI Systems (HAS)
Neither symbolic AI nor neural networks alone has produced the kind of intelligence expressed in human and animal behavior. Why? Each has a long and rich history, but each has addressed a relatively narrow aspect of the problem. Symbolic AI focuses on solving cognitive problems, drawing on the rich framework of symbolic computation to manipulate internal representations for reasoning and inference. But it is non-adaptive: it cannot learn from examples or by direct observation of the world (the symbol grounding problem). Neural networks, on the other hand, can learn from data, deriving much of their power from nonlinear function approximation combined with stochastic gradient descent. But intelligence requires more than modeling input–output relationships. Without the richness of symbolic computation, neural networks lack simple but powerful operations such as variable binding, which enable analogy-making and reasoning and underlie the ability to generalize from few examples. To address this gap, neurosymbolic AI aims to combine the best of both worlds to approach human-level intelligence.
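The variable binding mentioned above can be illustrated with high-dimensional random vectors, as in Kanerva's hyperdimensional computing. The following is a minimal Python sketch (not the group's actual implementation) of the classic "What is the dollar of Mexico?" analogy, using elementwise multiplication for binding and addition for superposition; all variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high dimensionality makes random vectors quasi-orthogonal

def rand_vec():
    # random bipolar vector; binding (elementwise multiply) is its own inverse
    return rng.choice([-1, 1], size=D)

def cosine(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# roles and fillers as random high-dimensional vectors
NAME, CURRENCY = rand_vec(), rand_vec()
USA, MEXICO = rand_vec(), rand_vec()
DOLLAR, PESO = rand_vec(), rand_vec()

# a record superposes role-filler bindings
usa = NAME * USA + CURRENCY * DOLLAR
mexico = NAME * MEXICO + CURRENCY * PESO

# "What is the dollar of Mexico?"
# Unbinding DOLLAR from the USA record recovers ~CURRENCY; probing the
# Mexico record with it yields ~PESO plus quasi-orthogonal noise terms.
query = mexico * (usa * DOLLAR)

candidates = {"DOLLAR": DOLLAR, "PESO": PESO, "USA": USA, "MEXICO": MEXICO}
answer = max(candidates, key=lambda k: cosine(query, candidates[k]))
print(answer)  # -> PESO
```

The noise terms left over after unbinding are quasi-orthogonal to every stored vector, so a simple nearest-neighbor search over the codebook cleans up the answer.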
We approach the problem from a very different perspective, inspired by the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. This leads us to a novel information processing architecture that combines the strengths of symbolic AI and neural networks, yet has novel emergent properties of its own. By combining a small set of basic operations on high-dimensional vectors, we obtain a hybrid AI system (HAS) that can represent and manipulate data in ways familiar from symbolic AI, and learn from the statistics of data in ways familiar from artificial neural networks and deep learning. Further, the principles of such a HAS enable few-shot learning and extremely robust operation in the face of failures, defects, variations, and noise, all of which are complementary to ultra-low-energy computation on nanoscale fabrics such as phase-change memory devices. Exciting further research in this direction (listed in the table below) spans high-level algorithmic exploration all the way to efficient hardware design for emerging computational fabrics.
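The robustness claim can be illustrated with a small numerical sketch (an illustration of the principle, not the group's hardware): because random high-dimensional codes are quasi-orthogonal, an item stored in an associative item memory remains retrievable even after a large fraction of its components are corrupted, as device failures or noise might do:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000      # code dimensionality
n_items = 100   # number of stored symbols

# item memory: random bipolar codebook, one row per symbol
memory = rng.choice([-1, 1], size=(n_items, D))

def noisy(v, flip_frac):
    # simulate failures/noise by flipping a random fraction of components
    idx = rng.choice(D, size=int(flip_frac * D), replace=False)
    out = v.copy()
    out[idx] *= -1
    return out

# corrupt item 42 heavily, then retrieve by maximum dot product
probe = noisy(memory[42], flip_frac=0.40)   # 40% of components flipped
retrieved = int(np.argmax(memory @ probe))
print(retrieved)  # -> 42
```

Even with 40% of the components flipped, the expected similarity to the correct code (0.2·D) dwarfs the chance similarity to any other code (on the order of √D), so retrieval succeeds with overwhelming probability.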
References
- Neurosymbolic AI Explained, IBM Research
- Neurosymbolic AI, invited talk by David Cox, IAAI/AAAI 2020
- Robust high-dimensional memory-augmented neural networks, Nature Communications, 2021
- In-memory hyperdimensional computing, Nature Electronics, 2020
- A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition, Nature Electronics, 2020
- Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors, Cognitive Computation, 2009
Prerequisites
- Background in machine learning (recommended)
- Experience with a deep learning framework such as TensorFlow or PyTorch (recommended)
- VLSI I (recommended)
We invite applications from students to conduct a thesis project (bachelor, semester, or master) or an internship at the IBM Research lab in Zurich on this exciting new topic.
| Type | Project | Description | Focus |
| --- | --- | --- | --- |
| MA/SA/BA | Neurosymbolic Architectures to Approach Human-like AI | Link to description | Algorithmic design |
| MA/SA/BA | Developing Efficient Models for Solving Intelligence Quotient (IQ) Tests | Link to description | Algorithmic/hardware design |
| MA/SA/BA | Zero-shot Learning | TBD | Algorithmic design |
| MA/SA | Accelerating Mixers with Computational Memory | Link to description | Hardware design and experiments |
| MA/SA | Accelerating Transformers with Computational Memory | Link to description | Hardware/algorithmic design |
| MA/SA/BA | Face Recognition at Scale | Link to description | Algorithmic design |
| MA | Lifelong Learning Challenge | Link to description | Algorithmic/hardware design |
| MA | Cryptography Meets In-memory Computing | Link to description | Algorithmic design and hardware experiments |