
Low Resolution Neural Networks


Figure: Neural network with quantized weights.

Short Description

Neural networks (NNs) have become very popular for various artificial intelligence tasks, including but not limited to computer vision and natural language processing. Due to their high computational complexity, NNs usually require fast and power-hungry hardware. However, there has been growing interest in deploying NNs at run-time on mobile platforms such as smartphones or drones, which have limited on-device memory, computing resources, and power budgets. This has led to an abundance of research that aims to make NNs computationally more efficient, and one way to achieve this is to use low-resolution weights. For example, weights may be constrained to binary values or quantized to low-precision fixed-point numbers. In this project, our goal is to find low-resolution weights using a novel method developed in our group, thereby reducing the complexity of inference in a network.
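As a concrete illustration of what low-resolution weights look like, the short Python sketch below binarizes a weight matrix and quantizes it to low-precision fixed-point values. This is a generic example of weight quantization, not the method developed in our group; the function names and the 4-bit setting are chosen purely for illustration.

import numpy as np

def binarize(w):
    # Constrain weights to {-1, +1} by taking the sign (zeros map to +1).
    return np.where(w >= 0, 1.0, -1.0)

def quantize_fixed_point(w, num_bits=4):
    # Uniform symmetric quantization to a low-precision fixed-point grid.
    # The scale maps the largest-magnitude weight to the largest code.
    levels = 2 ** (num_bits - 1) - 1              # e.g., 7 positive codes for 4 bits
    scale = max(np.max(np.abs(w)) / levels, 1e-12)
    codes = np.clip(np.round(w / scale), -levels, levels)
    return codes * scale                          # dequantized low-resolution weights

# Example: quantize a random weight matrix
w = np.random.randn(4, 4).astype(np.float32)
print(binarize(w))
print(quantize_fixed_point(w, num_bits=4))

Symmetric uniform quantization is only one possible choice; the project explores how to choose such low-resolution weights more cleverly so that inference accuracy is preserved.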

Status: Available

Looking for a Semester/Master student
Contact: Sueda Taner

Prerequisites

Introduction to Machine Learning (recommended)

Character

30% Literature research
70% Programming

Professor

Christoph Studer


Detailed Task Description

Goals

Practical Details

Results

Links
