Search results

Page title matches

Page text matches

  • ...ystem integration aspects. Eventually, the goal is to attach the developed accelerator to the ARM processing system on the Xilinx Zynq platform, and establish the [[Category:Digital]] [[Category:Master Thesis]] [[Category:Accelerator]] [[Category:FPGA]] [[Category:ABB CHCRC]] [[Category:Model Predictive Cont
    4 KB (542 words) - 12:39, 1 June 2017
  • ...ate. While this is acceptable for some sub-circuits, like a small hardware accelerator with no relevant information to be retained between two calls, it is not ac
    2 KB (364 words) - 09:34, 25 July 2017
  • ...e have several aspects which we would like to explore: porting the Origami accelerator to run efficiently on the FPGA, hardware/software-co-design configuring mem ...Mayer, S. Willi, B. Muheim, L. Benini, “Origami: A Convolutional Network Accelerator,” in Proceedings of the 25th Edition on Great Lakes Symposium on VLSI, 20
    3 KB (397 words) - 18:17, 29 August 2016
  • ...e time is spent performing the convolutions (80% to 90%). We have built an accelerator for this, Origami, which has been very successful. Nevertheless, it has som ...Samuel Willi, Beat Muheim, Luca Benini, "Origami: A Convolutional Network Accelerator", Proc. ACM/IEEE GLS-VLSI'15 [http://dl.acm.org/citation.cfm?id=2743766] [h
    9 KB (1,263 words) - 18:52, 12 December 2016
  • #REDIRECT [[Accelerator for Spatio-Temporal Video Filtering]]
    61 bytes (6 words) - 18:44, 14 April 2016
  • ...running on the host CPU [1,2] and a dedicated helper thread running on the accelerator [3]. The first IOTLB is implemented using a fully-associative content addre ...ns through, e.g., an mmap() system call. Ideally, all data shared with the accelerator is placed in this section, requiring a single entry in the first IOTLB only
    6 KB (866 words) - 13:43, 29 November 2019
  • ...hem better and use their structure to build an even more efficient ConvNet accelerator with almost no multipliers and relatively small adders. ...Benini, L. (2016). YodaNN: An Ultra-Low Power Convolutional Neural Network Accelerator Based on Binary Weights. arXiv preprint arXiv:1606.05487. [https://arxiv.or
    10 KB (1,357 words) - 16:25, 30 October 2020
  • ...field of active, exciting research to develop a state-of-the-art neuromorphic accelerator for MPSoC and FPGA targets. You will learn:
    9 KB (1,427 words) - 18:36, 5 September 2019
  • ...field of active, exciting research to develop a state-of-the-art neuromorphic accelerator for MPSoC and FPGA targets. You will learn:
    7 KB (1,000 words) - 12:22, 13 January 2017
  • ...directly connected to the Himax ULP camera [7] and a second connecting the accelerator to the existing MCU. ...Palossi, A. Marongiu, D. Rossi and L. Benini, "Enabling the heterogeneous accelerator model on ultra-low power microcontroller platforms," 2016 Design, Automatio
    6 KB (875 words) - 11:06, 23 February 2018
  • ...and Luca Benini. "YodaNN: An ultra-low power convolutional neural network accelerator based on binary weights." In VLSI (ISVLSI), 2016 IEEE Computer Society Annu
    6 KB (823 words) - 08:36, 20 January 2021
  • In this thesis, the students will develop an optimized Deconvolution Accelerator which can be used to implement state-of-the-art neural networks with a deco
    6 KB (842 words) - 08:37, 20 January 2021
  • ...n a field of active, exciting research to develop a state-of-the-art inference accelerator for MPSoC and FPGA targets. You will learn:
    6 KB (949 words) - 13:41, 10 November 2020
  • ...nstitute of Neuroinformatics designed '''''NullHop''''' [Aimar2017], a CNN accelerator architecture which can support the implementation of state-of-the-art CNNs ...mar2017] A. Aimar et al., NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps [https://arxiv.org/pdf/1706
    7 KB (1,001 words) - 10:43, 26 June 2017
  • ...-tunable performance, e.g., to use PULP as a high-performance parallel accelerator in heterogeneous systems. To this end, we also study the seamless integrati ...oading of highly-parallel OpenMP function kernels from the host CPU to the accelerator [2], and
    6 KB (805 words) - 12:17, 22 January 2018
  • ...-tunable performance, e.g., to use PULP as a high-performance parallel accelerator in heterogeneous systems. To this end, we also study the seamless integrati ...oading of highly-parallel OpenMP function kernels from the host CPU to the accelerator [2], and
    6 KB (801 words) - 15:05, 23 August 2018
  • The main difficulty in traditional accelerator programming stems from a widely ... coherent caches and virtual memory. The accelerator features local, private
    6 KB (865 words) - 12:16, 17 November 2017
  • #REDIRECT [[Elliptic Curve Accelerator for zkSNARKS]]
    53 bytes (6 words) - 09:54, 24 August 2018
  • #If interested: Commissioning of the unit in particle accelerator beam line experiment at PSI
    4 KB (460 words) - 21:42, 30 January 2018
  • ...-tunable performance, e.g., to use PULP as a high-performance parallel accelerator in heterogeneous systems. To this end, we also study the seamless integrati ...oading of highly-parallel OpenMP function kernels from the host CPU to the accelerator [2], and
    6 KB (796 words) - 17:19, 18 November 2019
