Approximate Matrix Multiplication based Hardware Accelerator to achieve the next 10x in Energy Efficiency: Training Strategy And Algorithmic Optimizations

 
[[Category:Master Thesis]]

[[Category:Available]]

[[Category:Hot]]

[[Category:andrire]]
  
  



Overview

Status: Available

Introduction

Figure 1: Clock layout of the MADDness accelerator using ASAP7 technology

The continued growth of DNN models in parameter count, application domains and general adoption has led to an explosion in the required computing power and energy. The energy needs in particular have become large enough to be economically unviable or extremely difficult to cool, which has led to a push for more energy-efficient solutions. Energy-efficient accelerators have a long tradition at IIS, with a multitude of proven accelerators published in the past. Standard accelerator architectures try to increase throughput via higher memory bandwidth, an improved memory hierarchy or reduced precision (FP16, INT8, INT4). The accelerator used in this project takes a different approach: it uses an approximate matrix multiplication (AMM) algorithm called MADDness, which replaces the matrix multiplication with lookups into a look-up table (LUT) and additions. This can significantly reduce the overall compute and energy needs.
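
As a rough intuition for how a LUT-based approximate matrix multiplication works, the following sketch implements a product-quantization-style variant in NumPy. The function names, shapes and the exhaustive nearest-prototype encoder are illustrative simplifications (MADDness itself learns a cheap hash-based encoder instead of searching for the nearest prototype); this is not the project's implementation.

  import numpy as np

  def build_lut(prototypes, B):
      # prototypes: (n_subspaces, n_prototypes, sub_dim); B: (D, N) with D = n_subspaces * sub_dim.
      # LUT[c, k, :] = prototypes[c, k] @ B_split[c]; the partial products are precomputed once.
      n_sub, n_proto, sub_dim = prototypes.shape
      B_split = B.reshape(n_sub, sub_dim, -1)
      return np.einsum('ckd,cdn->ckn', prototypes, B_split)

  def encode(A, prototypes):
      # Map every subvector of each row of A to the index of its nearest prototype (its LUT address).
      n_sub, n_proto, sub_dim = prototypes.shape
      A_split = A.reshape(A.shape[0], n_sub, sub_dim)
      dists = ((A_split[:, :, None, :] - prototypes[None]) ** 2).sum(-1)   # (M, n_subspaces, n_prototypes)
      return dists.argmin(-1)                                              # (M, n_subspaces)

  def decode(codes, lut):
      # Sum the precomputed partial products selected by the codes: only lookups and additions remain.
      return sum(lut[c, codes[:, c], :] for c in range(codes.shape[1]))    # approximates A @ B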

Project Details

The MADDness algorithm is split into two parts: an encoding part, which translates the input matrix A into addresses of the LUT, and a decoding part, which adds the corresponding LUT entries together to calculate the approximate output of the matrix multiplication. MADDness is then integrated into deep neural networks. The most common layers in DNNs are convolutional and linear layers, and both can be replaced by MADDness; fully tested drop-in PyTorch layers have already been developed and used.

So far, only a single-layer replacement analysis has been done rigorously: individual layers have been replaced with the MADDness algorithm, but the network has not been retrained on the new outputs of the replaced layers. Energy estimates for the current implementation in GF 22nm FDX technology suggest an energy efficiency of up to 32 TMACs/W, compared to around 0.7 TMACs/W (FP16) for a state-of-the-art datacenter NVIDIA A100 (TSMC 7nm FinFET).

In this project, we would like to investigate whether we can improve our accelerator's accuracy by implementing a retraining strategy and framework. The goal is to be able to replace multiple layers of a DNN without a significant drop in accuracy. A new realm of possible inter-layer optimizations can then be analyzed afterwards, for example calculating only the dimensions needed by the next MADDness layer, or including the activation layer in the MADDness algorithm. More information can be found here:
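
The drop-in layers mentioned above already exist in the project code base; purely as an illustration of what such an interface can look like, here is a minimal sketch of a LUT-based linear layer. The class name, constructor arguments and internals are placeholders of ours, not the existing implementation.

  import torch
  import torch.nn as nn

  class MaddnessLinear(nn.Module):
      """Illustrative LUT-based stand-in for nn.Linear (placeholder, not the project's layer)."""

      def __init__(self, lut, encode_fn, bias=None):
          super().__init__()
          # lut: (n_subspaces, n_prototypes, out_features), precomputed offline from the frozen
          # weight matrix; encode_fn maps (batch, in_features) -> integer LUT addresses.
          self.register_buffer('lut', lut)
          self.encode_fn = encode_fn
          self.bias = bias

      def forward(self, x):
          codes = self.encode_fn(x)                                        # (batch, n_subspaces)
          out = sum(self.lut[c, codes[:, c], :] for c in range(self.lut.shape[0]))
          return out if self.bias is None else out + self.bias

Replacing a single layer then amounts to swapping, for example, the final fully connected layer of a network for such a module and re-measuring the accuracy, which is the kind of single-layer replacement analysis carried out so far.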

Project Plan

1. Acquire background knowledge & familiarize with the project (3 weeks)

  • Read up on the MADDness algorithm and product quantization methods
  • Familiarize yourself with the current state of the project
  • Familiarize yourself with the IIS compute environment

2. Set up the project & rerun the single-layer analysis (2 weeks)

  • Set up the project and rerun a single-layer analysis (for example, for ResNet-50)
  • Update the single-layer analysis with a larger LeViT model than the one used previously

3. Set up and evaluate a first retraining pipeline (8 weeks)

  • Start with the simple method: replace one layer with MADDness, retrain the following layers, then freeze that layer and proceed with the next one (a sketch of this loop follows after this list)
  • Evaluate and optimize the pipeline, including a detailed analysis of the accuracy development for the ResNet-50, LeViT and DS-CNN networks
  • Integrate the retraining pipeline into the already developed learning framework
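
As referenced above, a minimal sketch of the layer-by-layer replace-freeze-retrain loop could look as follows. It assumes a hypothetical replace_fn that builds the frozen MADDness substitute for a given module, and a train_one_epoch callback supplied by the existing training setup; both are placeholders, not existing project code.

  def layerwise_maddness_retraining(model, layer_names, replace_fn, train_one_epoch, epochs=2):
      # model: a torch.nn.Module; layer_names: ordered submodule names to replace,
      # e.g. ['layer1.0.conv1', ..., 'fc'] for a ResNet-style network.
      for i, name in enumerate(layer_names):
          # Swap the next layer for its (frozen) MADDness substitute.
          parent = model.get_submodule(name.rsplit('.', 1)[0]) if '.' in name else model
          setattr(parent, name.rsplit('.', 1)[-1], replace_fn(model.get_submodule(name)))

          # Freeze everything up to and including the replaced layer ...
          for j, other in enumerate(layer_names):
              for p in model.get_submodule(other).parameters():
                  p.requires_grad = j > i

          # ... and retrain the still-trainable downstream layers to absorb the approximation error.
          for _ in range(epochs):
              train_one_epoch(model)
      return model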

4. Extend the MADDness algorithm with intra-layer optimizations (10 weeks)

  • Include the activation function in the MADDness algorithm
  • Can we optimize memory bandwidth and/or compute by only calculating the dimensions needed by the following MADDness layer? (one possible reading of this and the previous point is sketched after this list)
  • Is the encoding function that we are using the most accurate? Can we improve it?
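
As one possible reading of the first two points (an assumption on our part, not a decided design), the activation could be applied directly in the decode pass, and the LUT could be sliced down to only those output dimensions that the next layer's encoder actually reads:

  import torch

  def fused_decode(codes, lut, needed_dims=None, activation=torch.relu):
      # codes: (batch, n_subspaces) integer LUT addresses from the encoder.
      # lut:   (n_subspaces, n_prototypes, out_features) precomputed partial products.
      # needed_dims: optional index tensor selecting only the output dimensions used downstream.
      if needed_dims is not None:
          lut = lut[:, :, needed_dims]                  # skip work for unused output dimensions
      out = sum(lut[c, codes[:, c], :] for c in range(codes.shape[1]))
      return activation(out)                            # activation fused into the decode pass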

5. Project finalization (3 weeks)

  • Prepare final report
  • Prepare project presentation
  • Clean up code


Character

  • 20% Literature / project review
  • 40% Retraining pipeline implementation in Python
  • 30% Algorithmic optimizations
  • 10% Detailed analysis and preparation of results

Prerequisites

  • Strong interest in deep learning and hardware accelerators
  • Experience with Python, preferably with PyTorch or a similar machine learning framework (e.g. TensorFlow)


If you want to work on this project, but you think that you do not match some of the required skills, please get in touch with us and we can provide preliminary exercises to help you fill in the gap.

