Hyper Meccano: Acceleration of Hyperdimensional Computing

From iis-projects

Latest revision as of 12:58, 9 November 2017

Meccano.png

Short Description

Hyperdimensional (HD) computing is a brain-inspired, non-von-Neumann machine learning model based on representing information with hypervectors, i.e., random vectors with dimensionality in the thousands. At its core, HD computing manipulates and compares large patterns using ultra-wide words of, e.g., 10,000 bits. An HD computing machine automatically generates such wide words, applies a set of arithmetic operations to them, and finally performs analogical reasoning. Its arithmetic operations allow a high degree of parallelism, since each component needs to communicate only with a local component or its immediate neighbors; other operations can be performed in a distributed fashion.
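The arithmetic operations mentioned above can be sketched in a few lines. The following is a minimal NumPy illustration, assuming the standard choices for dense binary hypervectors (binding as XOR, bundling as bitwise majority, comparison as normalised Hamming similarity); it is not code from this project:

```python
import numpy as np

rng = np.random.default_rng(42)
D = 10_000  # hypervector dimensionality, e.g., 10,000 bits


def random_hv():
    """Draw a random dense binary hypervector."""
    return rng.integers(0, 2, D, dtype=np.uint8)


def bind(x, y):
    """Binding (bitwise XOR): the result is dissimilar to both
    operands, and binding again with either operand inverts it."""
    return x ^ y


def bundle(*hvs):
    """Bundling (bitwise majority over an odd number of vectors):
    the result stays similar to every operand."""
    return (np.sum(hvs, axis=0) > len(hvs) / 2).astype(np.uint8)


def similarity(x, y):
    """Normalised Hamming similarity in [0, 1]; unrelated random
    hypervectors score close to 0.5 (quasi-orthogonal)."""
    return float(np.mean(x == y))


a, b, c = random_hv(), random_hv(), random_hv()
bound = bind(a, b)           # dissimilar to a and b
recovered = bind(bound, b)   # XOR binding is self-inverse: equals a
s = bundle(a, b, c)          # similar to each of a, b, c
```

Note how every operation is componentwise: bit `i` of the result depends only on bit `i` of the operands, which is exactly the locality that makes HD arithmetic embarrassingly parallel on wide SIMD/SIMT hardware.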

HD computing is a complete and universal computational paradigm that is easily applied to various learning problems. In this project, your goal is to accelerate, optimize, and autotune execution of HD operations on GPGPUs and other accelerators. This will enable application developers to easily explore and choose appropriate hyperparameters for HD computing-based designs.
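One reason dimensionality is a hyperparameter worth exploring: the similarity of unrelated random hypervectors concentrates ever more tightly around 0.5 as the dimensionality grows, which governs how robustly patterns can be distinguished. A small, purely illustrative sweep (the dimensionalities and sample count are arbitrary choices, not values from this project):

```python
import numpy as np

rng = np.random.default_rng(0)


def hamming_similarity(x, y):
    # Fraction of matching bits; ~0.5 for two unrelated random vectors.
    return float(np.mean(x == y))


# Estimate, for each candidate dimensionality D, how tightly the
# similarity of unrelated random hypervectors concentrates around 0.5.
spread = {}
for D in (100, 1_000, 10_000):
    sims = [hamming_similarity(rng.integers(0, 2, D, dtype=np.uint8),
                               rng.integers(0, 2, D, dtype=np.uint8))
            for _ in range(50)]
    spread[D] = float(np.std(sims))
    print(f"D={D:>6}: mean similarity={np.mean(sims):.3f}, "
          f"std={spread[D]:.4f}")
```

The standard deviation shrinks roughly as 1/sqrt(D), so larger hypervectors separate classes more reliably but cost more memory and compute; an autotuner would search this trade-off automatically.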

Status: Available

Looking for 1-2 Semester/Master students
Contact: Abbas Rahimi, Andrea Marongiu

Prerequisites

Parallel Programming
Parallel Architectures
General-purpose GPU (GPGPU)

Character

20% Theory
50% Parallel Programming
30% Verification

Professor

Luca Benini


Detailed Task Description

Goals

Practical Details

Results

Links

