Hardware Acceleration
 

An NVIDIA Tesla V100 GP-GPU. This cutting-edge accelerator provides huge computational power on a massive 800 mm² die.
Google's Cloud TPU (Tensor Processing Unit). This machine-learning accelerator does one thing extremely well: multiply-accumulate operations.

Accelerators are the backbone of big-data and scientific computing. While general-purpose processor architectures such as Intel's x86 provide good performance across a wide variety of applications, many computationally demanding tasks only became feasible with the advent of general-purpose GPUs. Because these GPUs support a much narrower set of operations, their architecture is easier to optimize for efficiency. Such accelerators are not limited to the high-performance sector. In low-power computing, they allow complex tasks such as computer vision or cryptography to be performed under a very tight power budget; without a dedicated accelerator, these tasks would not be feasible.

General-Purpose Computing

TBA

Nils Wistoff

Paul Scheffler

Manuel Eggimann

Fabian Schuiki

Computational Units

TBA

Luca Bertaccini

Matteo Perotti

Stefan Mach


Hardware Acceleration of DNNs and QNNs

Deep Learning (DL) and Artificial Intelligence (AI) are quickly becoming dominant paradigms for all kinds of analytics, complementing or replacing traditional data-science methods. Successful at-scale deployment of these algorithms requires running them directly at the data source, i.e. in the IoT end-nodes collecting the data. However, due to the extreme constraints of these devices (in terms of power, memory footprint, and area cost), performing full DL inference in situ in low-power end-nodes requires a breakthrough in computational performance and efficiency. It is widely known that the numerical representation typically used when developing DL algorithms (single-precision floating-point) encodes a higher precision than what is actually required to achieve high quality of results in inference (Courbariaux et al. 2016); this fact can be exploited in the design of energy-efficient hardware for DL. For example, by using ternary weights, i.e. quantizing all network weights to {-1, 0, +1}, the fundamental compute units can be built without a hardware-expensive multiplication unit: each multiply-accumulate reduces to a conditional add, subtract, or skip (see the sketch below). In addition, each ternary weight fits in two bits, so the weights can be stored much more compactly on-chip.
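To make the multiplier-free idea concrete, here is a minimal C sketch (an illustration only, not code from any IIS project; the helper ternary_dot is a hypothetical name). With weights restricted to {-1, 0, +1}, every term of a dot product is an addition, a subtraction, or a skip:

```c
#include <stdint.h>
#include <stdio.h>

/* Dot product with ternary weights w[i] in {-1, 0, +1}.
 * No multiplier is needed: each term is an add, a subtract, or a skip. */
int32_t ternary_dot(const int8_t *w, const int8_t *x, int n) {
    int32_t acc = 0;
    for (int i = 0; i < n; i++) {
        if (w[i] == 1)        acc += x[i];
        else if (w[i] == -1)  acc -= x[i];
        /* w[i] == 0: contributes nothing */
    }
    return acc;
}

int main(void) {
    int8_t w[4] = {1, -1, 0, 1};    /* ternary weights */
    int8_t x[4] = {10, 3, 7, -2};   /* 8-bit activations */
    printf("%d\n", (int)ternary_dot(w, x, 4)); /* 10 - 3 + 0 - 2 = 5 */
    return 0;
}
```

Since each weight takes only one of three values, it can be packed into 2 bits, a 16x reduction in on-chip weight storage compared to 32-bit floating-point weights.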


Gianna Paulin

Georg Rutishauser


Moritz Scherer


Projects Overview

Available Projects


Projects In Progress


Completed Projects

The Logarithmic Number Unit chip Selene.