Flexfloat DL Training Framework

[Image: Manticore concept.png]

Short Description

So far, we have implemented inference of various smaller networks on our PULP-based systems ([pulp-nn]). The data-intensive training of DNNs, however, has been too memory-hungry to run on these systems. Our latest architecture concept, called Manticore, is chiplet-based, contains 4096 Snitch cores, and includes HBM2 memory (see image).


Recently, industry and academia have started exploring how much computational precision training actually requires. Many state-of-the-art training hardware platforms now support not only 64-bit and 32-bit floating-point formats, but also 16-bit floating-point formats (IEEE binary16 and bfloat16), and recent work proposes even smaller training formats such as 8-bit floats.
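
Such reduced-precision formats can be studied in software by re-rounding IEEE float32 values to a chosen exponent/mantissa budget. The following is a minimal NumPy sketch of that idea, purely for illustration; it is not the project's flexfloat library, and the function name and format parameters are assumptions.

# Minimal sketch (assumption, not the flexfloat library's API): emulate a
# reduced-precision floating-point format, parameterized by exponent and
# mantissa bit-widths, by re-rounding IEEE float32 values with NumPy.
import numpy as np

def quantize_to_format(x, exp_bits, man_bits):
    """Round float32 values to a hypothetical 1/exp_bits/man_bits format."""
    x = np.asarray(x, dtype=np.float32)
    bias = 2 ** (exp_bits - 1) - 1
    max_exp = bias          # largest representable exponent
    min_exp = 1 - bias      # smallest normal exponent (subnormals not modeled)

    # Decompose x = mantissa * 2**exponent with |mantissa| in [0.5, 1).
    mantissa, exponent = np.frexp(x)
    # Keep man_bits + 1 significant bits (implicit leading bit + man_bits).
    scale = 2.0 ** (man_bits + 1)
    mantissa = np.round(mantissa * scale) / scale
    y = np.ldexp(mantissa, exponent)

    # Clamp to the largest representable magnitude of the emulated format.
    max_val = (2.0 - 2.0 ** (-man_bits)) * 2.0 ** max_exp
    y = np.clip(y, -max_val, max_val)
    # Flush values below the smallest normal number to zero.
    y = np.where(np.abs(y) < 2.0 ** min_exp, 0.0, y)
    return y.astype(np.float32)

# Example: weights rounded to ~bfloat16 (1/8/7) and to an 8-bit "1/4/3" format.
w = np.array([0.1, -1.3333, 250.0, 3e-5], dtype=np.float32)
print(quantize_to_format(w, exp_bits=8, man_bits=7))
print(quantize_to_format(w, exp_bits=4, man_bits=3))

In a training framework, a rounding step of this kind would typically be applied to weights, activations, and gradients to study how the reduced format affects convergence.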


Status: Available

Looking for 1 Semester/Master student
Contact: Gianna Paulin, Tim Fischer

Prerequisites

  • Machine Learning
  • Python
  • C

Character

25% Theory
75% Implementation

Professor

Luca Benini


Detailed Task Description

Goals

Practical Details

Results

Links
