Acceleration and Transprecision
 
Latest revision as of 16:05, 24 November 2023

An NVIDIA Tesla V100 GP-GPU. This cutting-edge accelerator provides huge computational power on a massive 800 mm² die.
Google's Cloud TPU (Tensor Processing Unit). This machine-learning accelerator does one thing extremely well: multiply-accumulate operations.

Accelerators are the backbone of big data and scientific computing. While general-purpose processor architectures such as Intel's x86 provide good performance across a wide variety of applications, it is only since the advent of general-purpose GPUs that many computationally demanding tasks have become feasible. Because these GPUs support a much narrower set of operations, it is easier to optimize their architecture for efficiency. Such accelerators are not limited to the high-performance sector alone: in low-power computing, they allow complex tasks such as computer vision or cryptography to be performed under a very tight power budget. Without a dedicated accelerator, these tasks would not be feasible.
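The multiply-accumulate operations mentioned above are also where transprecision comes in: operands can be computed in a narrow format while the accumulation runs at a wider precision. The following is a minimal Python sketch of this idea (function names are illustrative, not from any IIS codebase); it emulates a 16-bit datapath by rounding each value through IEEE-754 binary16 via the standard `struct` module, then accumulates in Python's native double precision.

```python
import struct


def to_fp16(x: float) -> float:
    # Round a double to IEEE-754 binary16 and back,
    # emulating storage in a 16-bit floating-point register.
    return struct.unpack('e', struct.pack('e', x))[0]


def transprecision_mac(a, b):
    # Multiply in emulated fp16, accumulate at wider precision --
    # the classic transprecision MAC pattern.
    acc = 0.0
    for x, y in zip(a, b):
        acc += to_fp16(to_fp16(x) * to_fp16(y))
    return acc
```

Since 1.0, 2.0, 3.0, and 4.0 are exactly representable in binary16, `transprecision_mac([1.0, 2.0], [3.0, 4.0])` returns exactly 11.0, while values like 0.1 already lose precision on the fp16 round trip.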

Who We Are

Francesco Conti

Luca Bertaccini
* lbertaccini@iis.ee.ethz.ch
* ETZ J78

Matteo Perotti
* mperotti@iis.ee.ethz.ch
* ETZ J85

Available Projects


Projects In Progress


Completed Projects

The Logarithmic Number Unit chip Selene.