Object Detection and Tracking on the Edge

From iis-projects


Revision as of 09:00, 25 May 2023


[Figure: Wing_ultraslow.webp — Example of Drone Detection, comparing DVS and RGB images]


Overview

Dynamic Vision Sensors (DVS), also called event-based cameras, can detect fast-moving and small objects (when mounted in a stationary position) and open up many new possibilities for AI and tinyML. We are creating a completely new system with an autonomous base station and distributed smart sensor nodes to run cutting-edge AI algorithms and perform novel sensor-fusion techniques.

Project description

Object detection has shifted from classical approaches based on handcrafted image features to AI-based approaches. With the hardware acceleration GPUs deliver, deep neural networks can run remarkably fast and achieve detection accuracy comparable or even superior to humans. However, in scenarios with very low or very high brightness, the limited dynamic range of standard CCD/CMOS cameras degrades image quality and the networks start to fail. One approach to overcome these problems is a novel camera sensor, the Dynamic Vision Sensor (DVS). Instead of recording absolute pixel intensities, these sensors record intensity changes, similar to the human eye. With this technology, a dynamic range of >120 dB can be achieved, comparable to that of the human eye. We aim to develop new object detection and tracking algorithms (for UAVs, cars, and people), use bio-inspired processing hardware, implement a new stereo-vision algorithm, attack existing algorithms with adversarial attacks, and build a new autonomous detection system. This can involve using/programming an MCU, an Nvidia Jetson Orin, a custom ASIC, or even neuromorphic computing platforms.
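In essence, each DVS pixel fires an "event" when its log-intensity changes by more than a contrast threshold, rather than reporting an absolute brightness value per frame. The idea can be sketched in a few lines of Python by simulating events from two ordinary grayscale frames (the function name and the threshold value here are illustrative, not from any real DVS SDK; a real sensor fires asynchronously per pixel, not per frame pair):

```python
import numpy as np

def frames_to_events(prev_frame, curr_frame, threshold=0.2, t=0.0):
    """Emit DVS-style events (x, y, t, polarity) wherever the log-intensity
    change between two grayscale frames exceeds a contrast threshold.

    Simplified frame-difference model of an event camera: a real DVS
    compares each pixel against its own last event, asynchronously.
    """
    eps = 1e-6  # avoid log(0) on black pixels
    delta = np.log(curr_frame + eps) - np.log(prev_frame + eps)
    ys, xs = np.nonzero(np.abs(delta) >= threshold)
    events = []
    for x, y in zip(xs, ys):
        polarity = 1 if delta[y, x] > 0 else -1  # ON / OFF event
        events.append((int(x), int(y), t, polarity))
    return events

# A bright spot appearing between two otherwise static frames yields a
# single ON event at that pixel; the rest of the image stays silent.
prev = np.full((4, 4), 0.1)
curr = prev.copy()
curr[2, 1] = 0.9
print(frames_to_events(prev, curr))  # [(1, 2, 0.0, 1)]
```

Because only changing pixels produce output, static background generates no data at all, which is what makes DVS attractive for detecting small, fast-moving objects such as drones.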

Your task in this project will be one or several of the tasks listed below. Depending on your thesis type (Semester or Master thesis), tasks will be assigned according to your interests and skills.

Tasks:

  • Event- and Frame-based and/or 3D Imaging
  • Parallel Programming from MCU to Nvidia Jetson Orin
  • Adversarial Attacks
  • New Object Detection Algorithms or Neuromorphic Algorithms


Prerequisites (not all needed!), depending on the task

  • Embedded firmware design and experience with FreeRTOS, Zephyr, etc.
  • Experience in Machine Learning and/or neuromorphic computing
  • Parallel programming


Type of work

  • 20% Literature study
  • 60% Software and/or Hardware design
  • 20% Measurements and validation

Status: Available

  • Type: Semester or Master Thesis (multiple students possible)
  • Professor: Prof. Dr. Luca Benini
  • Supervisors:
    • Julian Moosmann
    • Philipp Mayer

  • Currently involved students:
    • None