Deconvolution Accelerator for On-Chip Semi-Supervised Learning

Revision as of 12:10, 8 August 2016

Description

Neural networks, and especially convolutional neural networks (CNNs), have achieved record-breaking results on image classification tasks (e.g. ImageNet). Even though publicly available data sets have grown to hundreds of thousands of images, it is still attractive to train on unlabeled data, either because not enough labeled data is available for a specific task or because the network can be improved further with more (unlabeled) data [2]. One promising approach is based on deconvolutional neural networks [2]. These networks connect intermediate states in the forward path through a mirrored network back to the input and can be used for both supervised and unsupervised learning. Deconvolutional neural networks have also been used successfully for scene labeling [3] and optical flow estimation [4].

Usually, neural networks are trained on servers, so all collected data has to be transmitted to a server. In several cases this is undesirable, e.g. for autonomous IoT systems, or for data that should not be sent to a server for security or privacy reasons (e.g. Siri could be trained on a server and fine-tuned with the user's voice while keeping the voice data on the device). It is therefore desirable to be able to learn on a device with a low energy budget. Convolutions and deconvolutions are computationally intensive, but their memory access patterns are regular, which an application-specific integrated circuit (ASIC) can exploit far better than a CPU or even a GPU.
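To illustrate the regular access pattern mentioned above, here is a minimal sketch of a 2-D deconvolution (transposed convolution) for a single channel; the function name, shapes, and stride are purely illustrative and not part of the project specification:

```python
import numpy as np

def deconv2d(x, w, stride=2):
    """Naive single-channel 2-D transposed convolution (deconvolution).

    Each input pixel scatters a weighted copy of the kernel into the
    output. Every iteration touches a fixed-size, predictable window,
    which is the kind of regular memory access an accelerator exploits.
    """
    h_in, w_in = x.shape
    k = w.shape[0]
    h_out = (h_in - 1) * stride + k
    w_out = (w_in - 1) * stride + k
    y = np.zeros((h_out, w_out))
    for i in range(h_in):
        for j in range(w_in):
            # scatter-accumulate one weighted kernel copy
            y[i*stride:i*stride+k, j*stride:j*stride+k] += x[i, j] * w
    return y

# a 2x2 input upsampled with a 3x3 kernel at stride 2 yields a 5x5 output
y = deconv2d(np.ones((2, 2)), np.ones((3, 3)), stride=2)
print(y.shape)  # (5, 5)
```

Note how the output size grows with the stride, i.e. deconvolution upsamples the feature map — the reverse of a strided convolution in the forward path.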

In this thesis, the students will develop an optimized deconvolution accelerator that can be used to implement state-of-the-art neural networks with a deconvolutional backward path.


Status: Available

Looking for 2-3 students for a semester project (or 1 semester student FPGA only) or 1 student for a master thesis.
Supervision: Renzo Andri, Lukas Cavigelli

Prerequisites

  • Knowledge of a hardware description language, e.g. (System)Verilog or VHDL
  • Attended the VLSI I lecture or equivalent
  • Enrolled in or attended the VLSI II lecture (autumn semester) or equivalent
  • In case of a tape-out: at least one student will have to enroll in VLSI III and test the chip during that lecture


Character

20% Theory
60% RTL Architecture, HW Design and Verification
35% ASIC Back-end Design

Professor

Luca Benini


Detailed Task Description

Meetings & Presentations

The student(s) and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide how to proceed. Of course, additional meetings can be organized to address urgent issues. Around the middle of the project there is a design review, where senior members of the lab review your work (bring all relevant information, such as preliminary specifications, block diagrams, synthesis reports, testing strategy, ...) to make sure everything is on track and to decide whether further support is necessary. They also make the final decision on whether the chip is actually manufactured (no reason to worry if the project is on track) and whether more chip area, a different package, etc. is provided. For more details, refer to [1]. At the end of the project, you have to present/defend your work in a 15-minute presentation followed by 5 minutes of discussion as part of the IIS colloquium (as required for any semester or master thesis at D-ITET).


Literature

  • Yuting Zhang et al., Augmenting Supervised Neural Networks with Unsupervised Objectives for Large-scale Image Classification [2]
  • Jonathan Long et al., Fully Convolutional Networks for Semantic Segmentation [3]
  • Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, et al., FlowNet: Learning Optical Flow with Convolutional Networks [4]

Practical Details
