PULPonFPGA: Hardware L2 Cache

From iis-projects

[[Category:Digital]]
[[Category:Available]]
[[Category:Semester Thesis]]
[[Category:PULP]]
[[Category:Marongiu]]
[[Category:PSocrates]]
[[Category:Akurth]]
[[Category:Heterogeneous Acceleration Systems]]

Latest revision as of 09:27, 5 November 2019

Pulpjuno.png

Intro

While high-end heterogeneous systems-on-chip (SoCs) increasingly support heterogeneous uniform memory access (hUMA), their low-power counterparts targeting the embedded domain still lack basic features such as virtual memory support for accelerators. Instead of simply passing virtual address pointers, sharing data between the host processor and the accelerators requires explicit, copy-based data management, which hampers both programmability and performance.
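The contrast can be sketched in C: with hUMA the host simply passes a virtual pointer, whereas without it the runtime must stage copies in and out of accelerator-visible memory. The helper names and the trivial "accelerator work" below are purely illustrative assumptions, not an actual PULP API.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical accelerator-visible scratch memory (illustrative only). */
static uint8_t accel_mem[1024];

/* Copy-based sharing (no virtual memory on the accelerator side): the
 * host must marshal data into accelerator-visible memory and copy the
 * results back afterwards. */
static void offload_with_copies(const uint8_t *src, uint8_t *dst, size_t n)
{
    memcpy(accel_mem, src, n);       /* host -> accelerator copy        */
    for (size_t i = 0; i < n; i++)
        accel_mem[i] += 1;           /* stand-in for accelerator work   */
    memcpy(dst, accel_mem, n);       /* accelerator -> host copy        */
}

/* Zero-copy sharing (hUMA / shared virtual memory): the host passes the
 * virtual pointer and the accelerator operates on the data in place. */
static void offload_zero_copy(uint8_t *data, size_t n)
{
    for (size_t i = 0; i < n; i++)
        data[i] += 1;
}
```

Both variants compute the same result, but the copy-based path pays for two extra transfers per offload, which is exactly the overhead that virtual memory support for accelerators removes.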

At IIS, we study the integration of programmable many-core accelerators into embedded heterogeneous SoCs. We have developed a mixed hardware/software solution to enable lightweight virtual memory support for many-core accelerators in heterogeneous embedded SoCs [1,2]. Recently, we have switched to a new evaluation platform based on the Juno ARM Development Platform [3]. This system combines a modern ARMv8 multi-cluster CPU with a Xilinx Virtex-7 XC7V2000T FPGA [4] capable of implementing PULP [5] with 4 to 8 clusters and a total of 32 to 64 cores.

Short Description

The two subsystems reside on two separate chips connected through a high-bandwidth off-chip interface. To reduce the traffic on this interface, PULP can be equipped with an L2 cache memory that serves frequent accesses to the shared main memory locally. This also reduces the stress on our lightweight virtual memory solution. While the overall platform is of considerable complexity, your job is well defined and isolated: the goal of this project is to develop the hardware IP of this L2 cache.
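As a sketch of what the cache does on every access, here is a minimal direct-mapped tag lookup in C. Line size and set count are assumptions for illustration, not the project's actual parameters, and the refill itself is only modeled by installing the tag.

```c
#include <stdbool.h>
#include <stdint.h>

#define LINE_BYTES 64u    /* assumed cache-line size   */
#define NUM_SETS   256u   /* assumed number of sets    */

typedef struct {
    bool     valid;
    uint32_t tag;
} line_t;

static line_t tags[NUM_SETS];

/* Returns true on a hit. On a miss, the line would be fetched from the
 * shared main memory over the off-chip interface; here we just install
 * the new tag. */
static bool l2_lookup(uint32_t addr)
{
    uint32_t index = (addr / LINE_BYTES) % NUM_SETS;
    uint32_t tag   = addr / (LINE_BYTES * NUM_SETS);

    if (tags[index].valid && tags[index].tag == tag)
        return true;                 /* hit: served locally by the L2  */

    tags[index].valid = true;        /* miss: refill and install line  */
    tags[index].tag   = tag;
    return false;
}
```

Every hit is one off-chip transaction saved, which is precisely how the L2 reduces traffic on the chip-to-chip interface.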

You can start from an existing IP block designed in a previous project. This IP uses proprietary interfaces and needs to be adapted to the widely adopted AXI4 protocol [6], whose built-in cache-control signals (AxCACHE) shall be used by your new IP. Besides designing the IP, your job also includes developing a set of testbenches to verify the functionality and analyze the design. The work primarily targets implementation on the Xilinx Virtex-7 FPGA, but if desired, an ASIC back-end design can also be implemented.
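The AXI4 AxCACHE signals encode bufferability, modifiability, and allocation hints per transaction. One possible way for the new IP to derive its caching decision from them, shown here in C, is a policy sketch under our own assumptions, not something mandated by the specification:

```c
#include <stdbool.h>
#include <stdint.h>

/* AXI4 AxCACHE[3:0] bit positions. */
#define AXCACHE_BUFFERABLE  (1u << 0)
#define AXCACHE_MODIFIABLE  (1u << 1)  /* "Cacheable" in AXI3 terms  */
#define AXCACHE_ALLOCATE_LO (1u << 2)
#define AXCACHE_ALLOCATE_HI (1u << 3)

/* Assumed policy: the L2 caches a transaction only when it is marked
 * Modifiable and at least one allocate hint is set. This admits the
 * write-through/write-back allocate encodings and rejects Device and
 * Normal Non-cacheable traffic. */
static bool l2_cacheable(uint8_t axcache)
{
    bool modifiable = (axcache & AXCACHE_MODIFIABLE) != 0;
    bool allocate   = (axcache &
                       (AXCACHE_ALLOCATE_LO | AXCACHE_ALLOCATE_HI)) != 0;
    return modifiable && allocate;
}
```

A testbench can sweep all sixteen AxCACHE encodings through a check like this one to verify that the IP never caches Device or Non-cacheable transactions.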

Status: Available

Looking for 1-2 Interested Master Students (Semester Project)
Supervision: Pirmin Vogel, Michael Schaffner, Andrea Marongiu

Character

20% Theory, Algorithms and Simulation
50% VHDL, FPGA/ASIC Design
30% Verification

Prerequisites

VLSI I,
VHDL/SystemVerilog, C

Professor

Luca Benini

References

  1. P. Vogel, A. Marongiu, L. Benini, "Lightweight Virtual Memory Support for Many-Core Accelerators in Heterogeneous Embedded SoCs", Proceedings of the 10th International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS'15), Amsterdam, The Netherlands, 2015.
  2. P. Vogel, A. Marongiu, L. Benini, "Lightweight Virtual Memory Support for Zero-Copy Sharing of Pointer-Rich Data Structures in Heterogeneous Embedded SoCs", IEEE Transactions on Parallel and Distributed Systems, vol. 28, no. 7, 2017.
  3. Juno ARM Development Platform.
  4. Xilinx Virtex-7 XC7V2000T FPGA.
  5. PULP.
  6. AMBA 4 AXI Protocol Specifications.
  7. D. J. Sorin, M. D. Hill, D. A. Wood, "A Primer on Memory Consistency and Cache Coherence", Morgan & Claypool Publishers, 2011.

↑ top