Improving Scene Labeling with Hyperspectral Data

Labeled-scene.png
X1-adas.jpg

Short Description

Hyperspectral imaging differs from normal RGB imaging in that it captures the amount of light not within three spectral bins (red, green, blue) but within many more (e.g. 16 or 25), and not necessarily in the visible range of the spectrum. This allows the camera to capture more information than humans can perceive with their eyes, and thus opens up very interesting opportunities to outperform even the best-trained humans; in fact, you can see it as a step towards spectroscopic analysis of materials.

Recently, a novel hyperspectral imaging sensor has been presented [video, pdf] and adopted in the first industrial computer vision cameras [link]. These new cameras weigh only 31 grams without the lens, as opposed to the old cameras, which used complex optics with beam splitters, could not provide a large number of channels (in unusual spectral ranges), and were very heavy, extremely expensive, and not mobile.
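
To give a feel for the data such a snapshot mosaic sensor delivers, here is a minimal sketch of deinterleaving a single raw frame into a cube of spectral bands. The 4x4 mosaic layout (yielding 16 bands) and the frame size are assumptions for illustration only; the actual layout is specified in the sensor documentation.

    import numpy as np

    def demosaic_snapshot(raw, tile=4):
        """Deinterleave a snapshot-mosaic frame into a band cube.

        Assumes a repeating tile x tile filter mosaic (4x4 -> 16 bands),
        i.e. each band is sampled once per tile at a fixed (i, j) offset.
        """
        h, w = raw.shape
        assert h % tile == 0 and w % tile == 0
        bands = [raw[i::tile, j::tile]   # one band, spatially subsampled
                 for i in range(tile) for j in range(tile)]
        return np.stack(bands)           # shape: (tile*tile, h/tile, w/tile)

    # Example with a dummy 1024x2048 raw frame: 16 bands of 256x512 pixels each
    cube = demosaic_snapshot(np.zeros((1024, 2048), dtype=np.uint16))
    print(cube.shape)  # (16, 256, 512)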

We have acquired such a camera and would like to explore its use for image understanding/scene labeling/semantic segmentation (see labeled image). Your task would be to evaluate this camera and integrate it into a working scene labeling system [paper]; the work would be very diverse:

  • create a software interface to read the imaging data from the camera
  • collect some images to build a dataset for evaluation (fused together with data from a high-res RGB camera)
  • adapt the convolutional network we use for scene labeling to profit from the new data, as sketched after this list (don't worry, we will help you :) )
  • create a system from the individual parts (build a case/box mounting the cameras, dev board, WiFi module, ...) and do some programming to make everything work together smoothly and efficiently
  • cross your fingers, hoping that we will outperform all existing approaches to scene labeling in urban areas
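
Regarding the network adaptation sketched in the list above: a ConvNet for RGB input expects 3 input channels, whereas the hyperspectral camera delivers 16 or 25 bands, so mainly the first convolutional layer has to be widened. Below is a minimal sketch of this idea; the framework (PyTorch), layer sizes, and class count are our assumptions for illustration, not the group's actual scene labeling network.

    import torch
    import torch.nn as nn

    NUM_BANDS = 16   # hyperspectral bands instead of 3 RGB channels (assumption)
    NUM_CLASSES = 8  # number of scene classes (assumption)

    # Toy fully-convolutional labeler: when moving from RGB to hyperspectral
    # input, only the first layer's input-channel count needs to change.
    model = nn.Sequential(
        nn.Conv2d(NUM_BANDS, 32, kernel_size=3, padding=1),  # was nn.Conv2d(3, 32, ...)
        nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv2d(64, NUM_CLASSES, kernel_size=1),  # per-pixel class scores
    )

    scores = model(torch.randn(1, NUM_BANDS, 256, 512))  # -> (1, 8, 256, 512)

When reusing weights pretrained on RGB data, a common trick is to initialize the widened first layer by replicating or averaging the pretrained RGB filters across the new bands and then fine-tuning.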

Status: Available

Supervision: Lukas Cavigelli
Date: tbd

Prerequisites

  • Knowledge of C/C++
  • Interest in computer vision and system engineering

Character

10% Literature Research
50% Programming
20% Collecting Data
30% System Integration

Professor

Luca Benini


Detailed Task Description

Goals

The goals of this project are

  • for the student(s) to get to know the FPGA design flow from specification through architecture exploration to implementation, including the use of memory interfaces and other off-chip communication
  • to learn how to gradually port software blocks to programmable logic and design an entire heterogeneous system comprising software, FPGA fabric and hardwired interfaces.

Meetings & Presentations

The students and advisor(s) agree on weekly meetings to discuss all relevant decisions and decide on how to proceed. Of course, additional meetings can be organized to address urgent issues. At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium.

Timeline

To give some idea of how the time can be split up, we provide a possible partitioning:

  • Literature survey, building a basic understanding of the problem at hand, catching up on related work
  • Development of a working software-based implementation running on the Zynq's ARM core
  • Piece-by-piece off-loading of relevant tasks to the programmable logic
  • Implementation of data interfaces (software or hardware)
  • Report and presentation

Literature

  • Hardware Acceleration of Convolutional Networks:
    • C. Farabet, B. Martini, B. Corda, P. Akselrod, E. Culurciello and Y. LeCun, "NeuFlow: A Runtime Reconfigurable Dataflow Processor for Vision", Proc. IEEE ECV'11@CVPR'11 [1]
    • V. Gokhale, J. Jin, A. Dundar, B. Martini and E. Culurciello, "A 240 G-ops/s Mobile Coprocessor for Deep Neural Networks", Proc. IEEE CVPRW'14 [2]
    • [3]
  • two not-yet-published papers by our group on acceleration of ConvNets

Practical Details

Links

  • The EDA wiki with lots of information on the ETHZ ASIC design flow (internal only) [4]
  • The IIS/DZ coding guidelines [5]

