Autonomous Obstacle Avoidance with Nano-Drones and Novel Depth Sensors

Status: Available

Project Description

One reason for the strong research interest in UAVs is their potential to navigate indoors autonomously while avoiding obstacles. This task involves several challenges, such as online perception, control, trajectory optimization, and localization. An even more promising class is the small form-factor category: nano-UAVs measure only a few centimeters and weigh a few tens of grams, which makes them ideal candidates for navigating very narrow indoor areas for monitoring and inspection purposes.

Vision-based perception algorithms used routinely on standard-size drones rely either on simultaneous localization and mapping (SLAM), a perception technique that builds a local 3D map of the environment, or on end-to-end convolutional neural networks (CNNs). However, due to the large number of pixels in a typical camera image, both approaches still require a large number of computations per frame.
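To make this cost argument concrete, the short C sketch below counts the multiply-accumulate (MAC) operations of a single 3x3 convolutional layer as a function of image resolution. The chosen dimensions (QVGA input, 8 input and 16 output channels) are illustrative assumptions, not figures from this project.

  /*
   * Back-of-the-envelope estimate of the multiply-accumulate (MAC)
   * operations needed by one 3x3 convolutional layer, showing how the
   * pixel count of the input image drives the per-frame workload.
   * All dimensions below are illustrative assumptions.
   */
  #include <stdint.h>
  #include <stdio.h>

  static uint64_t conv_layer_macs(uint32_t out_h, uint32_t out_w,
                                  uint32_t in_ch, uint32_t out_ch,
                                  uint32_t k)
  {
      /* Each output pixel of each output channel needs k*k*in_ch MACs. */
      return (uint64_t)out_h * out_w * out_ch * k * k * in_ch;
  }

  int main(void)
  {
      /* Hypothetical first layer on a QVGA (320x240) grayscale frame. */
      uint64_t macs = conv_layer_macs(240, 320, 8, 16, 3);

      printf("MACs per frame (one layer): %llu\n",
             (unsigned long long)macs);
      printf("MACs per second at 10 fps:  %llu\n",
             (unsigned long long)(macs * 10));
      return 0;
  }

For the assumed layer this already amounts to roughly 88 million MACs per frame before the rest of the network runs, which illustrates why full-image pipelines are hard to fit into the limited compute and power budget of a nano-UAV.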


Character

  • 30% Literature and algorithm development
  • 30% FreeRTOS C programming (STM32 Platform); see the task sketch after this list
  • 40% In-field evaluation and testing
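
Since a large part of the work is FreeRTOS C programming, the sketch below illustrates what a fixed-rate obstacle-avoidance task could look like. The functions read_depth_mm() and send_setpoint() are hypothetical placeholders for the actual depth-sensor driver and flight-controller interface, and all numeric thresholds are arbitrary.

  /*
   * Minimal FreeRTOS task sketch for a reactive obstacle-avoidance loop:
   * poll a depth sensor at a fixed rate and command a stop/turn when an
   * obstacle is closer than a threshold.
   */
  #include <stdint.h>
  #include "FreeRTOS.h"
  #include "task.h"

  #define OBSTACLE_THRESHOLD_MM  500u   /* illustrative threshold */
  #define LOOP_PERIOD_MS         20u    /* 50 Hz control loop     */

  /* Hypothetical hardware-access stubs (not part of FreeRTOS). */
  extern uint16_t read_depth_mm(void);
  extern void send_setpoint(float forward_velocity, float yaw_rate);

  static void obstacle_avoidance_task(void *params)
  {
      (void)params;
      TickType_t last_wake = xTaskGetTickCount();

      for (;;) {
          uint16_t depth = read_depth_mm();

          if (depth < OBSTACLE_THRESHOLD_MM) {
              /* Obstacle ahead: stop forward motion and yaw away. */
              send_setpoint(0.0f, 0.5f);
          } else {
              /* Path clear: keep cruising forward. */
              send_setpoint(0.3f, 0.0f);
          }

          /* Run at a fixed period regardless of sensor read time. */
          vTaskDelayUntil(&last_wake, pdMS_TO_TICKS(LOOP_PERIOD_MS));
      }
  }

  void start_avoidance_task(void)
  {
      xTaskCreate(obstacle_avoidance_task, "avoid",
                  configMINIMAL_STACK_SIZE + 128, NULL,
                  tskIDLE_PRIORITY + 2, NULL);
  }

vTaskDelayUntil() keeps the loop period constant even if the sensor read time varies, which is the usual way to run a periodic control task under FreeRTOS.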

Prerequisites

  • Strong interest in embedded systems
  • Experience with data acquisition and analysis
  • Experience with low-level C programming
