Combining Multi Sensor Imaging and Machine Learning for Robust Far-Field Vision

From iis-projects
 
 

Latest revision as of 14:53, 11 October 2021

Introduction

Today, Machine Learning and Computer Vision play a central role in surveillance tasks. The surveillance of airspace to detect drone activity, the surveillance of streets to detect pedestrian or vehicular activity, and many other scenarios rely heavily on Machine Learning algorithms to detect events and raise alarms. With recently discovered attacks against object detection algorithms that exploit the vulnerabilities of traditional imaging, novel solutions for robust detection are needed. Sensor fusion has been proposed in recent literature to counter such adversarial attacks.

Project description

In this project, the student will implement a multi-camera platform using a high-performance embedded GPU and state-of-the-art imagers, ranging from DVS (Dynamic Vision Sensors) and hyperspectral imagers to traditional high-resolution cameras. The student will use this cutting-edge embedded GPU to acquire data from the devices and run sensor-fusion-based algorithms to detect the activity of objects in the far field (>> 10 m distance).
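
Since the imagers will not be hardware-synchronized, acquired streams typically need to be aligned in time before fusion. As a rough illustration, the sketch below pairs frames from two streams by nearest timestamp; the 10 ms tolerance and the seconds-based timestamps are illustrative assumptions, not project requirements.

```python
# Sketch: pair frames from two sensor streams by nearest timestamp, a common
# preprocessing step when sensors are not hardware-synchronized.
# Assumptions: timestamps are in seconds and sorted ascending; the 10 ms
# tolerance is an illustrative choice.
from bisect import bisect_left

def align_streams(ts_a, ts_b, tol=0.010):
    """Return index pairs (i, j) with |ts_a[i] - ts_b[j]| <= tol."""
    pairs = []
    for i, t in enumerate(ts_a):
        j = bisect_left(ts_b, t)
        # pick the nearer of the two neighbours around the insertion point
        best = min((j - 1, j),
                   key=lambda k: abs(ts_b[k] - t) if 0 <= k < len(ts_b)
                   else float("inf"))
        if 0 <= best < len(ts_b) and abs(ts_b[best] - t) <= tol:
            pairs.append((i, best))
    return pairs
```

For event-based sensors such as a DVS, which emit asynchronous events rather than frames, events would first be binned into time windows before this kind of frame-level pairing applies.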

The student is required to:

  1. Read up on sensor fusion approaches
  2. Connect all cameras to an NVIDIA Jetson development board
  3. Acquire a multi-sensor dataset using different cameras
  4. Train and deploy a sensor-fusion network onto the development board
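
To make step 4 concrete, one simple family of approaches is late fusion: each sensor stream is reduced to a feature vector, the vectors are concatenated, and a classifier operates on the fused representation. The NumPy sketch below is a toy illustration of that idea; the hand-crafted features and the single linear layer are placeholders, not the network the project would actually train.

```python
# Toy late-fusion sketch (NumPy only). Each sensor frame is reduced to a
# feature vector, features from all sensors are concatenated, and one linear
# layer produces class scores. All sizes and features are illustrative.
import numpy as np

def extract_features(frame: np.ndarray) -> np.ndarray:
    """Toy per-sensor feature: channel-wise mean and std of an HxWxC frame."""
    return np.concatenate([frame.mean(axis=(0, 1)), frame.std(axis=(0, 1))])

def late_fusion_scores(frames, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Concatenate per-sensor features and apply a single linear layer."""
    fused = np.concatenate([extract_features(f) for f in frames])
    return W @ fused + b
```

In practice each branch would be a learned feature extractor (e.g. a small CNN per modality) trained end-to-end, and deployment on the Jetson board would use the vendor's inference tooling; the structure of concatenating per-sensor features, however, stays the same.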


Required Skills

  • Basic knowledge of camera technology
  • Basic knowledge of Machine Learning algorithms

Skills you might find useful, but are not required:

  • Previous experience with multi-sensor fusion applications

Professor

Luca Benini
Status: Available

Possible to complete as a Master, Semester or Bachelor Thesis

Supervision: Moritz Scherer scheremo@iis.ee.ethz.ch, Michele Magno magnom@pbl.ee.ethz.ch

Meetings & Presentations

The student and advisor(s) agree on weekly meetings to discuss all relevant decisions and the next steps. Of course, additional meetings can be organized to address urgent issues. At the end of the project, you have to present/defend your work during a 15 min. presentation and 5 min. of discussion as part of the IIS colloquium.