Application optimized data convertors for computational memory

From iis-projects

Revision as of 15:58, 20 February 2020

[Figure: Variation Tolerant.jpg]

Short Description

For decades, conventional computers based on the von Neumann architecture have performed computation by repeatedly transferring data between their processing and memory units, which are physically separated. As computation becomes increasingly data-centric and the scalability limits in terms of performance and power are being reached, alternative computing paradigms are being sought in which computation and storage are collocated. A fascinating new approach is that of computational memory, where the physics of nanoscale memory devices is used to perform certain computational tasks within the memory unit in a non-von Neumann manner.

Computational Memory (CM) is finding application in a variety of areas such as machine learning and signal processing. At IBM Research - Zurich, we have experimentally demonstrated this concept using up to a million phase-change memory (PCM) devices. Unsupervised learning of temporal correlations, the solution of linear equations, and deep neural network inference and training are the most prominent applications that can benefit from a CM-based data-flow architecture.

Digital-to-Analog converters (DACs) and Analog-to-Digital converters (ADCs) are employed extensively in CM to handle the crossing between the digital and analog domains, in which computationally expensive tasks like Matrix-Vector Multiplications (MVMs) are carried out with O(1) complexity. Each conversion costs a certain amount of energy, and its precision can only be guaranteed up to the Effective Number of Bits (ENOB) of the employed data converter.
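As a minimal numerical sketch of this effect (assuming an ideal crossbar performing exact analog multiply-accumulate and uniform mid-rise quantizers; all names, scalings, and bit widths here are illustrative assumptions, not part of any IBM toolchain), quantizing the DAC inputs and the ADC outputs to a given number of bits bounds the accuracy of an otherwise exact matrix-vector product:

```python
import numpy as np

def quantize(x, bits, full_scale=1.0):
    """Uniform mid-rise quantizer clipping to [-full_scale, full_scale)."""
    step = 2 * full_scale / 2 ** bits
    xq = np.clip(x, -full_scale, full_scale - step)
    return np.round(xq / step) * step

rng = np.random.default_rng(0)
W = rng.uniform(-1, 1, (64, 64))   # normalized conductance matrix
x = rng.uniform(-1, 1, 64)         # input vector

y_ref = W @ x / 64                 # full-precision reference (scaled into range)

errs = {}
for bits in (4, 6, 8):
    x_dac = quantize(x, bits)           # DAC limits input precision
    y_analog = W @ x_dac / 64           # in-memory MVM (ideal crossbar)
    y_adc = quantize(y_analog, bits)    # ADC limits readout precision
    errs[bits] = np.sqrt(np.mean((y_adc - y_ref) ** 2))
    print(f"{bits} bits: RMS error {errs[bits]:.4f}")
```

The RMS error roughly halves per additional bit, which is why the system-level ENOB requirement directly drives the converter energy budget.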

The research focus will be on understanding the system-level requirements on the ADCs and DACs for optimal performance of Deep Neural Network inference using CM. Furthermore, the effects of noise, non-linearity, and manufacturing tolerances shall be examined, and countermeasures, such as periodic digital ADC recalibration and digital post-processing, shall be evaluated with regard to effectiveness and energy cost.
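To illustrate one such countermeasure, the sketch below models an ADC whose gain and offset have drifted (the converter model, drift values, and two-point calibration scheme are illustrative assumptions, not the project's specified method) and shows how a periodic digital recalibration against known reference voltages restores accuracy down to the quantization floor:

```python
import numpy as np

def adc(v, gain, offset, bits=8, fs=1.0):
    """ADC model: gain/offset error followed by uniform quantization."""
    step = 2 * fs / 2 ** bits
    code = np.round((gain * v + offset) / step)
    code = np.clip(code, -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return code * step

# drifted converter parameters, unknown to the digital side
gain, offset = 1.05, 0.02

# recalibration: sample two known reference voltages, fit a linear correction
v_refs = np.array([-0.5, 0.5])
g_est, o_est = np.polyfit(v_refs, adc(v_refs, gain, offset), 1)

rng = np.random.default_rng(1)
v = rng.uniform(-0.8, 0.8, 1000)          # analog values to be read out
raw = adc(v, gain, offset)                # uncorrected readout
corrected = (raw - o_est) / g_est         # digital post-correction

err_raw = np.sqrt(np.mean((raw - v) ** 2))
err_corr = np.sqrt(np.mean((corrected - v) ** 2))
print(f"raw RMS error       {err_raw:.4f}")
print(f"corrected RMS error {err_corr:.4f}")
```

The residual error of the corrected readout is set by the quantizer step rather than by the drift, while each recalibration cycle costs extra conversions and digital arithmetic, which is exactly the effectiveness-versus-energy trade-off to be evaluated.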

The ideal candidate should be proficient in digital design for ASICs and/or FPGAs using Verilog or VHDL. A theoretical understanding of Deep Neural Network inference and training, as well as practical experience with the related Python frameworks, is required. Detailed knowledge of common ADC topologies and ADC performance characterization is recommended and can be acquired in the early phase of the project. A strong mathematical background and programming skills will be a significant bonus. Prior knowledge of emerging memory technologies such as phase-change memory is not necessary. This work involves interactions with several researchers focusing on various aspects of the project, and the thesis will be carried out at IBM Research in Rüschlikon.

Status: Available

Looking for 1-2 Master students
Contact: Riduan Khaddam-Aljameh

Prerequisites

VLSI I
VLSI II (recommended)

Character

40% Theory
20% Design
40% EDA tools

Professor

Luca Benini


Detailed Task Description

Goals

Practical Details

Results

Links
