Predictable Execution on Heterogeneous SoCs
In modern embedded heterogeneous commercial-off-the-shelf (COTS) systems, there is a trend towards integrating accelerators such as FPGAs and GPUs on the same chip as the host CPU, sharing a single DRAM. This benefits both programmability and performance, since explicit data transfers between the host CPU and the accelerator are no longer needed.
However, in the context of real-time systems, where computations must be guaranteed to complete before a deadline, sharing resources such as the DRAM becomes a problem: contention for the shared resource between the CPU and the accelerator leads to performance degradation. This in turn introduces the risk of unbounded delays that cause real-time applications to miss their deadlines. Because of this, such architectures are currently not used in safety-critical real-time applications, such as self-driving cars, even though they promise an order-of-magnitude improvement in performance. In light of this, our work focuses on software solutions, deployable on COTS hardware, that limit memory interference within the system so that real-time guarantees can be provided, enabling the use of these architectures in a real-time setting.
- PREM on PULP
- Freedom from Interference in Heterogeneous COTS SoCs
- Predictable Execution on GPU Caches