Flexfloat DL Training Framework

[Figure: Manticore concept]

Project Overview

So far, we have implemented inference of various smaller networks on our PULP-based systems ([pulp-nn]). The data-intensive training of DNNs, however, was too memory-hungry to be implemented on these systems. Our latest architecture concept, called Manticore, comprises 4096 Snitch cores (see the Snitch and Manticore papers), is chiplet-based, and integrates HBM2 memory (see image).

Recently, industry and academia have started exploring the computational precision actually required for training. Many state-of-the-art training hardware platforms by now support not only 64-bit and 32-bit floating-point formats but also 16-bit floating-point formats (IEEE binary16 and bfloat16). Recent work proposes even smaller training formats, such as 8-bit floats.
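
For reference, these formats differ mainly in how their bits are split between exponent and mantissa. The listing below is a sketch for orientation: the standard layouts are fixed, but the two 8-bit variants shown are the commonly proposed E5M2 and E4M3 layouts, and whether they match the Occamy FPU exactly is an assumption.

# (sign, exponent, mantissa) bit counts of common floating-point training formats.
# The 8-bit layouts are the widely proposed E5M2/E4M3 variants (assumption, not
# necessarily identical to the formats implemented by the Occamy FPU).
FP_FORMATS = {
    "binary64 / FP64": (1, 11, 52),
    "binary32 / FP32": (1, 8, 23),
    "binary16 / FP16": (1, 5, 10),
    "bfloat16":        (1, 8, 7),
    "FP8 (E5M2)":      (1, 5, 2),
    "FP8 (E4M3)":      (1, 4, 3),
}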

Most available DL frameworks allow training networks with 64-bit, 32-bit, or 16-bit FP formats. However, the FPU of our Occamy project (a scaled-down version of Manticore) supports two different 16-bit FP formats and two different 8-bit FP formats. Therefore, we would like to extend an available DL training framework (e.g., PyTorch) with a library (e.g., flexfloat [ref]) capable of emulating various FP formats.
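
To make the goal concrete, the sketch below shows one possible way to emulate a reduced-precision format inside PyTorch: round float32 tensors to a chosen exponent/mantissa bit budget and pass gradients straight through. This is only an illustration under assumed names (round_to_format, FakeCast, exp_bits, man_bits); the actual project would bind the flexfloat library rather than re-implement the rounding in Python.

import torch

def round_to_format(x: torch.Tensor, exp_bits: int, man_bits: int) -> torch.Tensor:
    """Round a float32 tensor to an emulated (exp_bits, man_bits) format.
    Simplified: round-half-away-from-zero, saturation to the largest
    representable magnitude; NaN/Inf, subnormals, and ties-to-even are ignored."""
    assert x.dtype == torch.float32
    x = x.detach().contiguous()
    drop = 23 - man_bits                                     # mantissa bits to discard
    bits = x.view(torch.int32)                               # reinterpret raw IEEE-754 bits
    bits = (bits + (1 << (drop - 1))) & ~((1 << drop) - 1)   # round, then truncate mantissa
    y = bits.view(torch.float32)
    max_exp = 2 ** (exp_bits - 1) - 1                        # largest unbiased exponent
    max_val = (2.0 - 2.0 ** (-man_bits)) * 2.0 ** max_exp    # largest finite value
    return torch.clamp(y, -max_val, max_val)                 # saturate instead of overflowing

class FakeCast(torch.autograd.Function):
    """Straight-through cast: quantized forward pass, unchanged gradients."""
    @staticmethod
    def forward(ctx, x, exp_bits, man_bits):
        return round_to_format(x, exp_bits, man_bits)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None, None

# Example: emulate a bfloat16-like format (8 exponent bits, 7 mantissa bits)
x = torch.randn(4, 8, requires_grad=True)
y = FakeCast.apply(x, 8, 7)

Such a cast could be inserted after selected layers, or into the weight update, to study how reduced-precision formats affect training convergence.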


Status: Available

Looking for 1 Semester or 1 Master student
Contact: Gianna Paulin, Tim Fischer

Prerequisites

  • Deep Learning
  • Python
  • C

Character

25% Theory
75% Implementation

Professor

Luca Benini


Project Organization

Weekly Meetings

The student shall meet with the advisor(s) every week to discuss any issues or problems that came up during the previous week and to agree on the next steps. These meetings provide a guaranteed time slot for a mutual exchange of information on how to proceed, for clearing up questions from either side, and for ensuring the student's progress.

Report / Presentation

Documentation is an important and often overlooked aspect of engineering. One final report has to be completed within this project. Any form of word-processing software is allowed for writing the report; nevertheless, the use of LaTeX with Tgif or draw.io (see: http://bourbon.usc.edu:8001/tgif/index.html and http://www.dz.ee.ethz.ch/en/information/how-to/drawing-schematics.html), or any other vector drawing software for block diagrams, is strongly encouraged by the IIS staff.

Final Report

A digital copy of the report, the presentation, the developed software, build script/project files, drawings/illustrations, acquired data, etc. needs to be handed in at the end of the project. Note that this task description is part of your report and has to be attached to your final report.

Presentation

At the end of the project, the outcome of the thesis will be presented in a 15-minute (SA) or 20-minute (MA) talk followed by 5 minutes of discussion in front of interested people of the Integrated Systems Laboratory. The presentation is open to the public, so you are welcome to invite interested friends. The exact date will be determined towards the end of the work.


Links
