Time | Label | Session |
---|---|---|
08:30 | W04.1 | Welcome and Opening |
08:40 | W04.2 | Session 1a: Analysis Methods and Platform Requirements for Analysability |
08:40 | W04.2.1 | Avionics Requirements for Dependability and Composability (Sascha Uhrig, Airbus Group Innovations, DE) |
09:25 | W04.2.2 | Static Code Level Timing Analysis on Systems with Interference (Christian Ferdinand, AbsInt, DE) |
09:50 | W04.2.3 | Addressing the Path Coverage Problem with Measurement-based Timing Analysis (Tullio Vardanega, University of Padua, IT) |
10:45 | W04.3 | Session 1b: Analysis Methods and Platform Requirements for Analysability |
10:45 | W04.3.1 | Analysis of Power - Measurement, Simulation, and Composability (Kim Grüttner, OFFIS - Institute for Information Technology, DE) |
11:10 | W04.3.2 | Short Panel 1: Static Analysis vs. Measurement-based Analysis (Sascha Uhrig, Airbus Group Innovations, DE; Christian Ferdinand, AbsInt, DE; Tullio Vardanega, University of Padua, IT; Kim Grüttner, OFFIS - Institute for Information Technology, DE) |
11:35 | W04.4 | Session 2a: Concepts for Composable Dependable Architectures |
11:35 | W04.4.1 | DREAMS: Dependable NoC (Roman Obermaisser, University of Siegen, DE) |
13:00 | W04.5 | Session 2b: Concepts for Composable Dependable Architectures |
13:00 | W04.5.1 | Model-Based Code Generation for the MPPA Manycore Processor (Benoit Dupont de Dinechin, Kalray, FR) |
13:20 | W04.5.2 | Safe and Secure Real-Time (SSRT) (Benjamin Gittins, Synaptic Laboratories Limited, MT) |
13:40 | W04.5.3 | PROXIMA Probabilistic Architecture for FPGA and COTS (Francisco J. Cazorla, Barcelona Supercomputing Center, ES) |
14:05 | W04.5.4 | CompSOC: A Predictable and Composable Multicore System (Kees Goossens, Eindhoven Univ. of Technology, NL) |
14:35 | W04.5.5 | Short Panel 2: Costs of Hardware-Support for Dependability (Roman Obermaisser, University of Siegen, DE; Benoit Dupont de Dinechin, Kalray, FR; Benjamin Gittins, Synaptic Laboratories Limited, MT; Francisco J. Cazorla, Barcelona Supercomputing Center, ES; Kees Goossens, Eindhoven Univ. of Technology, NL) |
Time | Label | Session |
---|---|---|
08:30 | W03.1 | Opening session |
08:40 | W03.2 | Session 1 |
08:40 | W03.2.1 | Technology trends and their impact on HPC benchmarks (Xavier Vigouroux, Atos, FR). For 20 years, according to the TOP500 list, High Performance Computing has steadily increased its performance and is now heading towards the exaflop (10^18 double-precision floating-point operations per second). This increase hides a drastic evolution in architectures, moving from vector machines to GPGPUs. Today, while the hardware sustains this pace, applications do not easily benefit from it: code has to be rewritten. Furthermore, performance now depends on on-the-fly decisions made by the processor, an obvious example being "turbo" frequency scaling. As a consequence, it is becoming more and more difficult to predict the performance of an application on a future architecture. In this talk, Xavier will give an introduction to HPC trends and requirements, then detail the impact of these evolutions on application performance, and finally describe the kind of tools and models needed to predict performance on future architectures. Speaker's bio: After a Ph.D. in distributed computing from the École Normale Supérieure de Lyon, he worked for several major companies in different positions. He has now been working for Bull for 10 years: he led the HPC benchmarking team for the first five years, was then in charge of the "Education and Research" market for HPC at Bull, and now manages Bull's "Center for Excellence in Parallel Programming" (CEPP), whose activities focus on tackling HPC application performance issues. |
09:30 | W03.2.2 | Fidelity of native-based performance models for Design Space Exploration (Eugenio Villar and Fernando Herrera, University of Cantabria, ES). The use of fast performance assessment technologies is crucial for bounding the cost of designing efficient embedded systems. Accuracy is a prime concern, since performance models have to be sufficiently faithful to the actual implementations they reflect. In a design space exploration (DSE) context, "sufficiently" means that the performance models must enable design decisions. In this context, this talk shows how native simulation, while providing a qualitative speed-up for DSE, can also preserve accuracy (e.g., <10% error versus binary translation in the count of simulated instructions in bare-processor modelling). This should provide the fidelity required for design space exploration in most scenarios. The general ideas presented are supported by experiments performed on an adaptation and extension of the native simulation tool VIPPE that enables time and energy estimation for specific target processors. |
09:50 | W03.2.3 | Thoughts on the Fidelity of (Data-Flow) Models for Real-Time MPSOC Architectures (Kees Goossens, Eindhoven Univ. of Technology, NL). To guarantee that the real-time requirements of an application are met, the performance of the application running on an MPSOC must be analysed at design time. Since exhaustive simulation is neither practically nor theoretically possible, the application and the MPSOC must be modelled somehow. A model can be as complex as the real implementation and contain all details of the application and MPSOC, or be very simple and omit most of the implementation details. The question is how abstraction (the level of detail that is modelled) affects fidelity (model accuracy, i.e. the correspondence between model and implementation). Intuitively, when abstracting more, the analysis effort decreases but the fidelity also decreases. In this talk we discuss our experiences with modelling the CompSOC MPSOC platform using several variants of dataflow at different levels of abstraction. |
10:10 | W03.3 | Coffee break |
10:30 | W03.4 | Session 2 |
10:30 | W03.4.1 | A Timed-automata based Middleware for Time-critical Multicore Applications (Saddek Ben Salem, Verimag, FR). Various models of computation for multi-core time-critical systems have been proposed in the literature, but there is a significant gap between these models of computation and the real-time scheduling and analysis techniques, which makes timing validation challenging. To overcome this difficulty, we represent both the models of computation and the scheduling policies as timed automata. While timed automata are traditionally used only for simulation and validation, we use them for programming (a minimal illustrative sketch of this idea follows the table below). We believe that using the same formal language for the model of computation and the scheduling techniques is an important step towards closing the gap between them. Our approach is demonstrated using a publicly available toolset, an industrial application use case and a multi-core platform. |
11:00 | W03.4.2 | Microprocessor Thermal Modelling and Validation (Giovanni Beltrame, École Polytechnique de Montréal, CA). Modern integrated circuits generate very high heat fluxes that can lead to high temperatures, degrading the performance and reducing the lifetime of the device. Thermal simulation is used to prevent these issues, and many models have been introduced in recent years. However, their validation is challenging: it is either based on established simulators (with reduced accuracy), or it requires producing a specific test chip with several thermal sensors. We present a thermal modelling approach, with an associated methodology and measurement setup, that uses existing commercial processors to validate thermal models. We use infrared thermography and low-cost thermoelectric cooling, avoiding the issues of mineral-oil setups. We show how our approach can be used to create validated models for two thermal simulators (our own approach and a commercial tool). |
11:30 | W03.4.3 | Accurate environment model for obstacle detection using multiple noisy range sensors and implementation on industrial targets (Julien Mottin, CEA LETI, FR; Diego Puschini, CEA LETI MINATEC, FR; Tiana Rakotovao, CEA LETI, FR). Many robotic applications involve motion in complex, unknown environments. Occupancy Grids model the surrounding obstacles through a partitioned spatial representation of the environment: a set of cells is filled iteratively with the interpretation of the information from the sensors, accounting for their uncertainty through probabilistic models (a sketch of the standard cell update follows the table below). Although Occupancy Grids have been widely used in the state of the art, the relation between the cell sizing and the inverse sensor model is usually neglected. In this paper, we propose a novel methodology to build inverse probabilistic models for single-target sensors. Since no additional limitation compared to the original formulation is introduced, our contribution propagates the original precision of the sensor to the inverse model. In addition, it enables a proper choice of cell size. Our experiments apply the approach to a LIDAR sensor and to a Time-of-Flight camera, evaluating the grid resolution and the impact of variations in sensor precision. |
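
The abstract for W03.4.3 builds on the standard probabilistic occupancy-grid formulation. As a rough illustration only, the sketch below shows the usual log-odds cell update into which an inverse sensor model plugs; the grid extent, cell size, noise value and the simple thresholded inverse model are assumptions made for this example, not figures or methods from the talk.

```python
# Minimal sketch of a 1-D occupancy grid with a log-odds Bayesian update.
# All numeric values below are illustrative assumptions.
import numpy as np

CELL_SIZE = 0.1   # metres per cell (assumed)
N_CELLS = 50      # grid covers 5 m along the sensor ray (assumed)
SIGMA = 0.05      # assumed range-measurement noise (std. dev., metres)

def inverse_sensor_model(cell_centres, z):
    """Occupancy probability per cell given one range measurement z.

    Cells well before the measured range are likely free, cells near z are
    likely occupied, cells beyond z stay unknown (0.5)."""
    p = np.full_like(cell_centres, 0.5)
    p[cell_centres < z - 2 * SIGMA] = 0.3              # probably free
    p[np.abs(cell_centres - z) <= 2 * SIGMA] = 0.8     # probably occupied
    return p

def update_grid(log_odds, z):
    """Fuse one measurement into the grid in log-odds form."""
    centres = (np.arange(N_CELLS) + 0.5) * CELL_SIZE
    p = inverse_sensor_model(centres, z)
    return log_odds + np.log(p / (1.0 - p))

if __name__ == "__main__":
    grid = np.zeros(N_CELLS)                 # log-odds 0  <=>  p = 0.5
    for z in [2.02, 1.98, 2.01]:             # noisy ranges to one obstacle
        grid = update_grid(grid, z)
    prob = 1.0 - 1.0 / (1.0 + np.exp(grid))  # back to probabilities
    print(prob.round(2))
```

Repeated measurements reinforce the cells around 2 m towards occupancy while the cells in front of the obstacle drift towards free; the talk's contribution concerns how the cell size and the inverse model above should be derived from the sensor's actual precision.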
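For W03.4.1, the following minimal sketch (in Python, as an assumption of this write-up rather than the authors' toolset) illustrates what it can mean to use a timed automaton for programming rather than only for analysis: a single-clock automaton whose enabled edges are fired at run time to drive a dispatch action. The locations, guard and 10 ms period are hypothetical.

```python
# Minimal sketch: a timed automaton interpreted at run time as a scheduler,
# assuming a single clock and fixed-size time steps.
from dataclasses import dataclass

@dataclass
class Edge:
    src: str
    dst: str
    guard: callable        # clock value -> bool
    reset: bool = False    # reset the clock when the edge fires?
    action: str = ""       # e.g. "dispatch job"

@dataclass
class TimedAutomaton:
    location: str
    edges: list
    clock: float = 0.0

    def step(self, dt):
        """Advance time, then fire the first enabled outgoing edge, if any."""
        self.clock += dt
        for e in self.edges:
            if e.src == self.location and e.guard(self.clock):
                self.location = e.dst
                if e.reset:
                    self.clock = 0.0
                return e.action
        return None

# A periodic dispatcher: release a job every 10 ms (hypothetical period).
dispatcher = TimedAutomaton(
    location="wait",
    edges=[Edge("wait", "wait", guard=lambda c: c >= 10.0, reset=True,
                action="dispatch job")],
)

if __name__ == "__main__":
    for _ in range(5):
        act = dispatcher.step(dt=5.0)   # advance in 5 ms steps
        if act:
            print(act)
```

The same automaton text that a model checker would analyse is here executed directly, which is the gap-closing idea the abstract argues for.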