Adversarial Attacks Against Deep Neural Networks In Wearable Cameras
Deep neural networks (DNNs) are growing in popularity and today cover a wide range of applications, including image processing, text analysis, and speech recognition. DNNs are now one of the most important components of several cyber-physical systems. Among others, the vision system of a self-driving car takes advantage of DNNs to better recognize pedestrians, vehicles, and road signs. A hot research topic is the investigation of the vulnerability of DNNs to adversarial examples: adding carefully crafted adversarial perturbations to the inputs can mislead the target DNN into mislabeling them at run time. This clearly raises security and safety issues for the use of DNNs in real-world applications. The student will work on this promising and hot topic, investigating adversarial attacks on both the hardware and the software side, including possible optimization of deep learning algorithms. The main application area is wearable cameras, which additionally impose stringent low-power requirements. This work can also be done in collaboration with armasuisse.
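To illustrate the attack idea described above, the sketch below crafts an adversarial perturbation with the Fast Gradient Sign Method (FGSM), one common technique in this area. A tiny logistic classifier with hypothetical weights stands in for a real DNN; the model, input, and epsilon value are all illustrative assumptions, not part of the project description.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Return x plus an eps-bounded FGSM perturbation that increases the loss.

    For a logistic model p = sigmoid(w.x + b) with binary cross-entropy loss,
    the gradient of the loss w.r.t. the input x is (p - y) * w.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y_true) * w           # dL/dx for this toy model
    return x + eps * np.sign(grad_x)    # FGSM: step along the sign of the gradient

# Hypothetical model and input (illustrative values only).
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.2, -0.4, 0.9])
y = 1.0                                 # true label

x_adv = fgsm_perturb(x, w, b, y, eps=0.3)

p_clean = sigmoid(np.dot(w, x) + b)     # confidence on the clean input
p_adv = sigmoid(np.dot(w, x_adv) + b)   # confidence after the attack
print(p_clean, p_adv)                   # the attack lowers confidence in the true class
```

Against a real DNN the same principle applies, with the input gradient obtained by backpropagation; on a wearable camera the added constraint is computing or defending against such perturbations within a tight power budget.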
Depending on the applicant's profile and project type, their tasks may involve some of the following (not all need to be met by a single candidate):
- Machine learning and deep learning on PC and microcontroller (or the motivation/interest to learn it)
- Motivation to build and test a real system
- Basics of video processing
- PCB design, or willingness to learn it, as an option
Detailed Task Description
A detailed task description will be worked out right before the project, taking the student's interests and capabilities into account.
Figure's source: https://bair.berkeley.edu/blog/2017/12/30/yolo-attack/
- Looking for Bachelor, Semester and Master Project Students
- Supervisors: Michele Magno
- 40% Theory
- 40% Implementation
- 20% Testing