Scaling Up the Memory Interference Analysis for Hard Real-Time Many-Core Systems

Maximilien Dupont de Dinechin1,a, Matheus Schuh2,3,c, Matthieu Moy1,b and Claire Maiza2,d

1 Univ Lyon, EnsL, UCBL, CNRS, Inria, LIP, F-69342, Lyon Cedex 07, France
a maximilien.dupont.de.dinechin@ens.fr
b Matthieu.Moy@ens.fr
2 Univ. Grenoble Alpes, CNRS, Grenoble INP, VERIMAG, 38000 Grenoble, France
c Matheus.Schuh@univ-lyon1.fr
d Claire.Maiza@univ-lyon1.fr
3 Kalray, Montbonnot-Saint-Martin, France
Matheus.Schuh@kalray.eu

ABSTRACT

In RTNS 2016, Rihani et al. [7] proposed an algorithm to compute the impact of memory-access interference on the timing of a task graph. It calculates a static, time-triggered schedule, i.e. a release date and a worst-case response time for each task. The task graph is a DAG, typically obtained by compilation of a high-level dataflow language, and the tool assumes a previously determined mapping and execution order. The algorithm is precise, but suffers from a high 𝒪(n⁴) complexity, n being the number of input tasks. Since we target many-core platforms with tens or hundreds of cores, applications likely to exploit the parallelism of these platforms are too large to be handled by this algorithm in reasonable time.
This paper proposes a new algorithm that solves the same problem. Instead of performing global fixed-point iterations on the task graph, we compute the static schedule incrementally, reducing the complexity to 𝒪(n²). Experimental results show a reduction from 535 seconds to 0.90 seconds on a benchmark with 384 tasks, i.e. 593 times faster.
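To illustrate the incremental idea at a high level, the following sketch (not the authors' implementation; task fields and the interference model are invented for illustration) schedules a task DAG in a single topological pass: each task's release date is derived once from its predecessors' finish times, rather than re-iterating a global fixed point over the whole graph.

```python
# Hypothetical sketch of incremental static scheduling over a task DAG.
# A real interference analysis would also add a memory-access delay term;
# here each task's finish time is simply release + WCET.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    wcet: int                                   # worst-case execution time in isolation
    preds: list = field(default_factory=list)   # predecessor tasks in the DAG

def incremental_schedule(tasks):
    """Assign a (release, finish) pair to each task in one pass.

    `tasks` must be in topological order, so every task is scheduled
    after all of its predecessors. Each task is visited once and looks
    only at its predecessors, avoiding global fixed-point iterations.
    """
    schedule = {}
    for t in tasks:
        release = max((schedule[p.name][1] for p in t.preds), default=0)
        # In a full analysis, an interference delay computed from tasks
        # overlapping [release, release + wcet) would be added here.
        finish = release + t.wcet
        schedule[t.name] = (release, finish)
    return schedule

# Example: a diamond-shaped DAG  a -> {b, c} -> d
a = Task("a", 2)
b = Task("b", 3, [a])
c = Task("c", 1, [a])
d = Task("d", 2, [b, c])
print(incremental_schedule([a, b, c, d]))
# {'a': (0, 2), 'b': (2, 5), 'c': (2, 3), 'd': (5, 7)}
```

The single forward pass is what brings the cost down: each task consults only its own predecessor set instead of triggering a re-evaluation of the entire graph.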

Keywords: Response Time Analysis, Algorithm Optimization, Many-core Architectures, Real-time Systems.
