Design of a Low Power Smart Sensing Multi-modal Vision Platform

[[File:Smart glass rendering2.png|200px|thumb|right|Smart Glasses with custom integrated electronics]]
[[File:Ai node new.png|200px|thumb|right|AI-Node for Multi-modal Sensor Nodes]]
  
 
= Overview =
Dynamic Vision Sensors (DVS), also called event-based cameras, can detect fast-moving and small objects (when mounted stationary) and open up many new possibilities for AI and tinyML. We are creating a completely new system, with an autonomous base station and distributed smart sensor nodes, to run cutting-edge AI algorithms and perform novel sensor fusion techniques.
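
To make the event-driven idea concrete, below is a minimal sketch of how a node might flag fast motion in a DVS event stream. The event layout, window length, and threshold are illustrative assumptions for this sketch, not any specific sensor's format or API.

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

/* Illustrative address-event representation (AER); field widths are
 * assumptions for this sketch, not a specific sensor's format. */
typedef struct {
    uint16_t x, y;     /* pixel coordinates         */
    int8_t   polarity; /* +1 brighter, -1 darker    */
    uint32_t t_us;     /* timestamp in microseconds */
} dvs_event_t;

#define W           128
#define H           128
#define WINDOW_US   10000u /* 10 ms activity window (assumed)         */
#define MIN_EVENTS  8      /* per-pixel event count that flags motion */

static uint8_t  activity[W][H];
static uint32_t window_start_us;

/* Feed one event; returns true when a pixel accumulates enough events
 * inside the window, i.e. something small is moving fast in front of
 * a stationary sensor. */
bool dvs_feed(const dvs_event_t *ev)
{
    if (ev->t_us - window_start_us > WINDOW_US) {
        memset(activity, 0, sizeof activity); /* start a new window */
        window_start_us = ev->t_us;
    }
    if (ev->x >= W || ev->y >= H)
        return false; /* ignore out-of-range addresses */
    return ++activity[ev->x][ev->y] >= MIN_EVENTS;
}
</syntaxhighlight>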
  
 
= Project description =
Low-power smart sensing devices often consist of a handful of sensors (IMUs, microphones, etc.), a communication module (Wi-Fi, LoRa, BLE), and one or multiple processing units (STM32s, GAP9, or other microcontrollers). Camera sensors, however, are often neglected because of their high memory consumption and the high computational complexity of on-board processing. Nevertheless, novel processing techniques are slowly but steadily turning the Internet of Things (IoT) into the Artificial Intelligence of Things (AIoT), increasing the processing capabilities of low-power microcontrollers with dedicated silicon that accelerates ML workloads. This makes it possible to add vision sensors to such a smart sensing platform, which not only increases its reusability but also opens the door to tiny-edge AI for vision-based applications. The newest iteration of such a platform, the AI-node shown on the right, enables real-time vision processing on microcontrollers while fusing RGB images with data from the other sensors. In the short term, this platform targets a smart glasses application; in the long term, event-based vision sensors will be added to enable real-time responses to events within the sensor’s proximity and thus true sensor “smartness”: the node visually understands its environment.
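
To illustrate the kind of firmware this involves, here is a minimal sketch of a fusion loop that tags each camera frame with the temporally closest IMU sample before running a fused network. All names (read_rgb_frame, read_imu, net_infer, radio_send) are hypothetical placeholders, not the platform's actual drivers or API.

<syntaxhighlight lang="c">
#include <stdint.h>

/* Hypothetical driver and inference hooks -- placeholders for this
 * sketch, not the platform's real API. */
typedef struct { uint8_t pixels[96 * 96 * 3]; uint32_t t_ms; } frame_t; /* small RGB frame (assumed size) */
typedef struct { float accel[3]; float gyro[3]; uint32_t t_ms; } imu_sample_t;

extern int  read_rgb_frame(frame_t *f);       /* blocking capture, 0 on success */
extern void read_imu(imu_sample_t *s);        /* latest IMU sample              */
extern int  net_infer(const frame_t *f,
                      const imu_sample_t *s); /* fused network, -1 = no detection */
extern void radio_send(int class_id);         /* BLE/LoRa/Wi-Fi uplink          */

/* Simple fusion loop: tag every frame with the closest-in-time IMU
 * reading, run the fused network, and transmit only on detections so
 * the radio (typically the dominant power consumer) stays off most
 * of the time. */
void fusion_loop(void)
{
    frame_t      frame;
    imu_sample_t imu;

    for (;;) {
        if (read_rgb_frame(&frame) != 0)
            continue;           /* skip failed captures       */
        read_imu(&imu);         /* closest-in-time IMU sample */
        int class_id = net_infer(&frame, &imu);
        if (class_id >= 0)
            radio_send(class_id);
    }
}
</syntaxhighlight>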
 
Your task in this project will be one or several of the tasks mentioned below. Depending on your thesis type (semester or Master's thesis), tasks will be assigned according to your interests and skills.

== Tasks: ==
 
* High-end PCB Design for the smart glasses
* Embedded Firmware Design for sensors, communication, and AI processing
* AI part (“the smartness”): develop and deploy networks to run on microcontrollers (a minimal sketch follows below)

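As a taste of the AI task, the sketch below shows the kind of kernel a network deployed on a microcontroller ultimately reduces to: an int8 fully-connected layer with fixed-point requantization. The quantization scheme is an illustrative assumption, not the project's actual deployment flow.

<syntaxhighlight lang="c">
#include <stdint.h>

/* Minimal int8 fully-connected layer. Symmetric int8 quantization with
 * fixed-point requantization is assumed here for illustration only. */
void fc_int8(const int8_t *in, const int8_t *w, const int32_t *bias,
             int8_t *out, int n_in, int n_out,
             int32_t mult, int shift) /* requantization parameters */
{
    for (int o = 0; o < n_out; o++) {
        int32_t acc = bias[o];
        for (int i = 0; i < n_in; i++)
            acc += (int32_t)in[i] * w[o * n_in + i];
        /* requantize the 32-bit accumulator back to int8 */
        acc = (int32_t)(((int64_t)acc * mult) >> shift);
        if (acc > 127)  acc = 127;   /* saturate */
        if (acc < -128) acc = -128;
        out[o] = (int8_t)acc;
    }
}
</syntaxhighlight>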
  
 
== Prerequisites (not all needed!) depending on Tasks ==
* Embedded Firmware Design and experience in FreeRTOS, Zephyr, etc.
* C programming
* Circuit design tools (e.g. Altium)
* Experience in ML on MCUs, or deep knowledge of ML and a strong will to deploy on the edge
 
== Type of work ==
* 20% Literature study
* 60% Software and/or hardware design
* 20% Measurements and validation

== Status: Available ==
* Type: Semester or Master Thesis (multiple students possible)
* Professor: Prof. Dr. Luca Benini
* Supervisors: Julian Moosmann, Philipp Mayer
* Currently involved students:
** None

[[Category:Available]] [[Category:Digital]] [[Category:Event-Driven Computing]] [[Category:Deep Learning Projects]] [[Category:EmbeddedAI]] [[Category:SmartSensors]] [[Category:System Design]] [[Category:2023]] [[Category:Semester Thesis]] [[Category:Master Thesis]] [[Category:Julian]] [[Category:Mayerph]] [[Category:Hot]]
