Mohammad Tehranipoor, University of Connecticut, US
Domenic Forte, University of Connecticut, US
The migration from a vertical to a horizontal business model has made it easier to introduce hardware Trojans and counterfeit electronic parts into the electronic component supply chain. Hardware Trojans are malicious modifications made to original IC designs that reduce system integrity (change functionality, leak private data, etc.). Counterfeit parts are often below specification and/or of substandard quality. The existence of Trojans and counterfeit parts creates risks for the life-critical systems and infrastructures that incorporate them, including automotive, aerospace, military, and medical systems. In this tutorial, we will cover (i) background and motivation for hardware Trojan and counterfeit prevention/detection; (ii) taxonomies related to both topics; (iii) existing solutions; (iv) open challenges; and (v) new and unified solutions to address these challenges.
Time | Label | Session |
---|---|---|
14:30 | M12.1 | Session 1 |
00:00 | M12.1.1 | Background and motivation for hardware Trojan and counterfeit prevention/detection |
00:00 | M12.1.2 | Taxonomies related to both topics |
16:30 | M12.2 | Session 2 |
00:00 | M12.2.1 | Existing solutions |
00:00 | M12.2.2 | Open challenges |
00:00 | M12.2.3 | New and unified solutions to address these challenges |
Nagib Hakim, Intel Corporation, Santa Clara, US
Subhasish Mitra, Stanford University, US
Amir Nahir, IBM Research Labs, Haifa, IL
Alan Hu, University of British Columbia, CA
Hardware failures are a growing concern as electronic systems become more complex, interconnected, and pervasive. The complexity challenge is further exacerbated by new ways of improving the energy efficiency of electronic systems amid the slowdown of CMOS (Dennard) scaling: increasing numbers of cores, uncore components, and accelerators; increasing degrees of adaptivity; and increasing levels of heterogeneous integration. All these features and their complex interactions make future systems highly vulnerable to design flaws (bugs) that can jeopardize correct system operation and/or introduce security vulnerabilities. Existing validation methods barely cope with today's complexity, and traditional pre-silicon verification alone is no longer adequate. Post-silicon validation involves operating manufactured ICs in actual application environments to detect and fix bugs. Existing post-silicon practices are ad hoc, and their costs are rising faster than design costs. Effective post-silicon validation requires a radical departure from today's ad-hoc practices to structured techniques. A wide range of topics will be covered in this tutorial, from best practices at leading companies to recent research results that are immediately applicable:
1. overview of the validation product life cycle;
2. trade-offs in pre- vs. post-silicon validation;
3. validation test content generation using the concept of exercisers;
4. validation infrastructure, including triggers, observability structures, and performance monitors;
5. structured and systematic techniques such as QED (Quick Error Detection);
6. coverage metrics;
7. logic and electrical bug validation and debug techniques;
8. formal techniques for post-silicon validation and debug;
9. post-silicon repair, survivability, and resiliency;
10. bug benchmarks and industrial case studies.
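As a taste of item 5 above, the sketch below mimics the core idea of QED-style duplication in plain Python; the real transformation operates on machine instructions, and the fault-injection point and check interval here are invented for illustration. Every computation is shadowed by a duplicate, and the two copies are compared at frequent check points, so a corrupted value is caught within a few operations instead of millions of cycles later.

```python
def run(ops, inject_at=None, check_every=2):
    """Execute a chain of unary ops twice (original + shadow copy) and
    compare the two results every `check_every` steps, QED-style."""
    x = x_dup = 1.0
    for i, op in enumerate(ops):
        x, x_dup = op(x), op(x_dup)
        if i == inject_at:        # model a bug corrupting one copy only
            x += 1e-3
        if i % check_every == check_every - 1 and x != x_dup:
            return f"mismatch at op {i}: detection latency <= {check_every} ops"
    return "no error detected"

ops = [lambda v: v * 1.5, lambda v: v - 0.2] * 8   # 16-op toy workload
print(run(ops))                  # -> no error detected
print(run(ops, inject_at=5))     # -> mismatch at op 5: ...
```

The shorter the check interval, the smaller the error-detection latency, which is what makes the subsequent debug step of localizing the bug tractable.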
Time | Label | Session |
---|---|---|
14:30 | M11.1 | Session 1 |
00:00 | M11.1.1 | Big Picture Nagib Hakim (Intel Corporation, Santa Clara, US), Subhasish Mitra (Stanford University, US) and Amir Nahir (IBM Research Labs, Haifa, IL) |
16:30 | M11.2 | Session 2 |
00:00 | M11.2.1 | Observability enhancement during post-silicon validation Alan Hu (University of British Columbia, CA) and Subhasish Mitra (Stanford University, US) |
Davide Quaglia, EDALab s.r.l., IT
Dimitris Drogoudis, Agilent, BE
Davide Bresolin, University of Bologna, IT
The design of future smart embedded systems must jointly take into account aspects from different domains, both digital (hardware, software, network) and analog (electronic, electromechanical, etc., for instance RF, MEMS, power sources, thermal issues, sensors, and actuators), so that such systems can increasingly be considered cyber-physical systems. To increase energy efficiency, to fully exploit the potential of current nanoelectronics technologies, and to enable the integration of existing/new IPs and "More than Moore" devices, new methodologies and tools for multi-disciplinary and multi-scale modeling, simulation, and verification are needed. In engineering practice, the analysis of a complex system is usually carried out through simulation, which allows the engineer to explore one of the possible system executions at a time. Formal verification instead aims at exploring all possible executions, in order to be certain that a property of interest holds in all cases, or conversely to acquire information about potential fault cases. Because of their heterogeneous nature, cyber-physical systems have mixed discrete and continuous behavior, which makes them quite challenging to verify. In this tutorial, we survey state-of-the-art modeling, simulation, and verification techniques for cyber-physical systems. The presentations will be accompanied by concrete tool introductions and demonstrations, showing how the presented concepts improve today's state-of-the-art system-level design flow for smart systems. Most of this tutorial is based on the results of the SMAC European project on smart systems design. In scope and content, this tutorial targets students and researchers from both academia and industry.
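To make the "mixed discrete and continuous behavior" concrete, here is a minimal thermostat modeled as a hybrid automaton and simulated in Python (an illustrative sketch, not one of the SMAC tools): temperature evolves continuously under mode-dependent dynamics, while guard conditions trigger discrete on/off switches. A single simulation run explores only one trajectory; a formal verifier would have to cover all of them, e.g., to prove the temperature never leaves a safe band.

```python
def simulate(t0=18.0, dt=0.01, horizon=20.0):
    """One trajectory of a two-mode thermostat hybrid automaton."""
    temp, mode, trace = t0, "heat", []
    for _ in range(int(horizon / dt)):
        # Continuous flow (forward Euler); dynamics depend on the mode.
        dtemp = 0.5 * (30.0 - temp) if mode == "heat" else -0.1 * temp
        temp += dtemp * dt
        # Discrete jumps: guards on the continuous state switch the mode.
        if mode == "heat" and temp >= 22.0:
            mode = "off"
        elif mode == "off" and temp <= 20.0:
            mode = "heat"
        trace.append(temp)
    return min(trace), max(trace)

lo, hi = simulate()
print(f"this run stays within [{lo:.2f}, {hi:.2f}] degC")
```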
Time | Label | Session |
---|---|---|
14:30 | M10.1 | Session 1 |
00:00 | M10.1.1 | Introduction to smart systems and cyber-physical systems Davide Quaglia, EDALab s.r.l., IT |
00:00 | M10.1.2 | Multi-domain modeling languages and methodologies Davide Quaglia, EDALab s.r.l., IT |
00:00 | M10.1.3 | Multi-scale modeling: abstraction and refinement Dimitris Drogoudis, Agilent, BE |
16:30 | M10.2 | Session 2 |
00:00 | M10.2.1 | Verification of Cyber-Physical Systems Davide Bresolin, University of Bologna, IT |
00:00 | M10.2.2 | Application of modeling concepts and tools to real case studies Dimitris Drogoudis, Agilent, BE |
00:00 | M10.2.3 | Application of verification concepts and tools to real case studies Davide Bresolin, University of Bologna, IT |
Houman Homayoun, George Mason University, US
Farhang Yazdani, BroadPak Corporation, US
Ayse Coskun, Boston University, US
Hank Hoffmann, University of Chicago, US
The microprocessor industry is at a crossroads. As long as it continues to scale performance with each generation, this critically important technology domain continues to drive innovation. When performance scaling stops, microprocessors become a generic commodity and no longer a technology driver or enabler. Because modern processors are most heavily constrained by power, and sometimes energy, performance scaling no longer falls naturally from increased transistor counts. Instead, total performance is maximized by maximizing performance/Watt. Future computing platforms will need to be flexible and scalable while conserving power, size, weight, and energy. In addressing these challenges, the microprocessor industry is moving towards heterogeneous architecture design. Heterogeneous designs promise to push the envelope of power efficiency further by enabling general-purpose processors to achieve the efficiency of customized cores. By enabling more diverse designs, and designs that are customized dynamically, we can push the efficiency envelope even further. This tutorial first reviews the major challenges facing the semiconductor industry: in general, performance, power, temperature, and reliability; and in particular, dark and unreliable silicon. The tutorial then introduces the concept of heterogeneous architecture to address the efficiency crisis and briefly reviews the state of the art in static and dynamic heterogeneous architectures in industry and academia. The tutorial then presents the 3D design concept and argues how it can eliminate the fundamental barrier to dynamic heterogeneity. Finally, it reviews the state of the art in simulators and modeling tools and how they can be integrated to accurately model performance, power, area, and temperature in 3D heterogeneous architectures. About the Team: The team consists of experts in interdisciplinary areas, including heterogeneous architecture and 3D design (Houman Homayoun), temperature-aware design, DRAM, and 3D integration (Ayse Coskun), 3D fabrication and packaging (Farhang Yazdani), and system architecture design (Hank Hoffmann). The team consists of three faculty members and one industry expert in the field. Houman Homayoun is an Assistant Professor in the Department of Electrical and Computer Engineering at George Mason University.
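The claim that total performance is maximized by maximizing performance/Watt follows from the fixed power budget: if the platform can dissipate at most W watts, throughput is bounded by (perf/Watt) x W. The toy search below (all core numbers are hypothetical) picks the mix of big and little cores that maximizes throughput under a budget, which is exactly the dimension along which heterogeneous designs add freedom.

```python
# Hypothetical core types: (throughput units per core, watts per core).
CORES = {"big": (4.0, 8.0), "little": (1.5, 2.0)}
BUDGET_W = 32.0

def throughput(n_big, n_little):
    perf = n_big * CORES["big"][0] + n_little * CORES["little"][0]
    watts = n_big * CORES["big"][1] + n_little * CORES["little"][1]
    return perf if watts <= BUDGET_W else 0.0   # over-budget mixes invalid

best = max(((b, l) for b in range(5) for l in range(17)),
           key=lambda mix: throughput(*mix))
print("best (big, little) mix:", best, "->", throughput(*best), "units")
```

With these numbers the little cores' 0.75 units/W beats the big cores' 0.5 units/W, so the search fills the budget with little cores; adding a serial-bottleneck term to the model is what brings big cores, and thus true heterogeneity, back into the optimum.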
Time | Label | Session |
---|---|---|
09:30 | M04.1 | Session 1 |
00:00 | M04.1.1 | Review of major challenges facing the semiconductor industry; introduction to dynamic heterogeneous architecture and core pooling Houman Homayoun, George Mason University, US |
00:00 | M04.1.2 | Pathfinding methodology for optimal design and integration of 2.5D/3D heterogeneous systems Farhang Yazdani, BroadPak Corporation, US |
11:30 | M04.2 | Session 2 |
00:00 | M04.2.1 | 3D systems as platforms for "flexible heterogeneity", cache/memory pooling, power & temperature challenges Ayse Coskun, Boston University, US |
00:00 | M04.2.2 | Managing dynamically configurable systems: optimizing energy under performance constraints; coordinating adaptation across the system stack Hank Hoffmann, University of Chicago, US |
Daniel Ménard, INSA Rennes, FR
David Novo, EPFL, CH
Karthick Parashar, Imperial College London, GB
Olivier Sentieys, Inria and University of Rennes, FR
Given that Moore's-law scaling has hit the power wall, reducing the power consumption of high-performance embedded systems has become crucial. It is also well accepted that system-level techniques offer the greatest potential for optimizing power. In this tutorial, we demonstrate how careful tuning of the fixed-point arithmetic used to implement numerous functionalities in embedded system applications can lead to significant savings in power consumption. Interestingly, proper dimensioning of the bit widths used to represent signals or variables can reduce power consumption in both hardware and software implementations. Even in software implementations, the pervasive use of Single Instruction Multiple Data (SIMD) datapaths in modern processors is pushing designers to meddle with bit allocation. Often, a reduction in bit widths enables the use of more SIMD slots, which increases parallelism, boosting the speed and energy efficiency of the software implementation. Although quantization effects in digital signal processing systems have been studied since the 1970s, significant progress has been made in recent years. This tutorial packs nearly a decade of research in designing systems with fixed-point arithmetic. We expose the deficiencies in the support offered by existing EDA tools and motivate the need for new solutions. Accordingly, we put into perspective several recent techniques that have been developed to facilitate a quick analysis of the impact of a selected fixed-point format on system performance and cost. We analyze fixed-point refinement in a comprehensive way from a tools perspective, dividing the problem into various design steps (e.g., range and precision analysis). For each step, we present concrete solutions amenable to design automation, illustrated with multiple relevant design examples from the wireless communication, multimedia, and other signal processing domains.
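As a preview of the precision-analysis step, the short sketch below quantizes a signal to a fixed-point format with a given number of fractional bits and measures the resulting signal-to-quantization-noise ratio (SQNR). The roughly 6 dB gained per extra bit is the classic trade-off that word-length optimization balances against hardware cost; the test signal and formats are arbitrary illustrative choices.

```python
import math

def quantize(x, frac_bits):
    """Round x onto a fixed-point grid with `frac_bits` fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def sqnr_db(signal, frac_bits):
    p_sig = sum(s * s for s in signal) / len(signal)
    p_noise = sum((s - quantize(s, frac_bits)) ** 2
                  for s in signal) / len(signal)
    return 10 * math.log10(p_sig / p_noise)

sig = [math.sin(2 * math.pi * k / 64) for k in range(1024)]
for bits in (4, 8, 12):
    print(f"{bits:2d} fractional bits -> SQNR {sqnr_db(sig, bits):5.1f} dB")
```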
Time | Label | Session |
---|---|---|
09:30 | M03.1 | Session 1 |
00:00 | M03.1.1 | Introduction |
00:00 | M03.1.2 | Fixed-point arithmetic |
00:00 | M03.1.3 | Range analysis |
11:30 | M03.2 | Session 2 |
00:00 | M03.2.1 | Precision analysis |
00:00 | M03.2.2 | Word-length optimization |
00:00 | M03.2.3 | Opportunistic run-time precision adaptation |
00:00 | M03.2.4 | Conclusion |
Erik Jan Marinissen, IMEC - Leuven, BE
Krishnendu Chakrabarty, Duke University - Durham, NC, US
Target Audience: Test and design-for-test engineers and their managers; test methodology developers; test-automation tool developers; researchers, university professors, and students.
Stacked ICs with vertical interconnects containing fine-pitch micro-bumps and through-silicon vias (TSVs) are a hot topic in the design and manufacturing communities. These 2.5D- and 3D-SICs hold the promise of heterogeneous integration, inter-die connections with increased performance at lower power dissipation, and increased yield and hence decreased product cost. However, testing for manufacturing defects remains an obstacle and potential showstopper before stacked-die products can become a reality. There are concerns about the cost or, even worse, the feasibility of testing such TSV-based 3D chips. In this tutorial, we present key concepts in 3D technology, terminology, and benefits. We discuss design and test challenges and emerging solutions for 2.5D- and 3D-SICs. Topics to be covered include an overview of 3D integration and trendsetting products such as a 2.5D FPGA and 3D-stacked memory chips, test flows and test content for 3D chips, advances in wafer probing, 3D design-for-test architectures and the ongoing IEEE P1838 standardization effort for test access, and 3D test cost modeling and test-flow selection.
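To preview the cost-modeling topic, here is a minimal sketch of the classic test-flow trade-off for a die stack: pre-bond testing adds a per-die test cost but keeps known-bad dies out of the stack, while skipping it wastes every good die bonded to a bad one. All yields and costs are made-up parameters for illustration.

```python
def cost_per_good_stack(n_dies, die_yield, die_cost, test_cost, pre_bond):
    """Expected cost per good stack under a deliberately simple model
    (perfect tests, free bonding); all parameter values are made up."""
    if pre_bond:
        # Every die is tested first, so only known-good dies are stacked;
        # each good die effectively costs (die + test) / yield.
        return n_dies * (die_cost + test_cost) / die_yield
    # Blind stacking: the whole stack is good only if every die is.
    return n_dies * die_cost / (die_yield ** n_dies)

for pre_bond in (True, False):
    c = cost_per_good_stack(n_dies=4, die_yield=0.9,
                            die_cost=10.0, test_cost=1.0, pre_bond=pre_bond)
    print(f"pre-bond test = {pre_bond}: ~{c:.1f} cost units per good stack")
```

With these numbers pre-bond testing wins (about 49 vs. 61 units); lowering the test cost or raising the die yield shifts the break-even point, which is precisely the kind of question the cost-modeling session formalizes.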
Time | Label | Session |
---|---|---|
09:30 | M06.1 | Session 1 |
00:00 | M06.1.1 | Introduction |
00:00 | M06.1.2 | Overview of 2.5D- and 3D-technology |
00:00 | M06.1.3 | 3D test flows and test contents |
00:00 | M06.1.4 | 3D test access: wafer probing (industry/research) |
11:30 | M06.2 | Session 2 |
00:00 | M06.2.1 | 3D test access: DfT architecture (incl. IEEE P1838) and optimizations |
00:00 | M06.2.2 | 3D cost flow modeling (with case studies) |
00:00 | M06.2.3 | Conclusion |
Russ Klein, Mentor Graphics, US
Emulation systems can execute designs fast enough to run significant amounts of software. For example, one can execute the software boot process, run diagnostics, boot an OS, and load and exercise drivers. This gives the software team earlier access to the design. It also allows software to be used to drive activity, exercising realistic use cases as part of the hardware verification. This software will need to be debugged. The emulated design will likely contain the same debug facilities, such as JTAG and ETM, as the final device. These can be used in emulation just as they would be on the final silicon. Emulators also allow access to signals around the core that are not accessible in the final device, which can be used to debug and trace the processor. This gives the developer a number of options for debugging. This session explores the different debug approaches available, the trade-offs involved in each approach, and how and when they can be most effectively applied during the design cycle. Russ Klein is a Technical Director in Mentor's emulation division. He has been developing verification and debug solutions that span the boundaries between hardware and software for over 20 years.
Time | Label | Session |
---|---|---|
09:30 | M02.1 | Session 1 |
00:00 | M02.1.1 | Options for software debug and trace in the context of design running in emulation |
00:00 | M02.1.2 | Understanding the trade-offs in terms of performance, functionality, and intrusiveness of different debug approaches |
11:30 | M02.2 | Session 2 |
00:00 | M02.2.1 | Concurrent debug of multiple cores in emulation |
00:00 | M02.2.2 | Correlation of hardware and software debug views |
00:00 | M02.2.3 | Efficient utilization of emulation resources during software debug |
Partha Pratim Pande, Washington State University, US
Radu Marculescu, Carnegie Mellon University, US
Deukhyoun Heo, Washington State University, US
Hiroki Matsutani, Keio University, JP
Continuing progress in integration levels in silicon technologies makes possible complete end-user systems consisting of an extremely high number of cores on a single chip, targeting either embedded or high-performance computing. However, without new approaches for energy- and thermally-efficient design, as well as scalable, low-power, and high-bandwidth on-chip communication architectures, this vision may remain a pipe dream. Towards this end, the wireless Network-on-Chip (WiNoC) represents an emerging paradigm for designing a low-power, high-bandwidth interconnect infrastructure for multicore chips. This tutorial will provide a timely and insightful journey into the various challenges and emerging solutions in designing WiNoC architectures from a variety of perspectives, ranging from very high levels of abstraction (e.g., system architecture) to very low levels (e.g., on-chip antenna and transceiver design). The tutorial will start by discussing the fundamentals of network-based communication for 2D and 3D multicore systems and advanced design techniques for multi-domain clock and power management for embedded and high-performance processors, using real examples of multicore platforms. The second part of the tutorial will focus on the design of high-bandwidth and low-power WiNoC architectures incorporating small-world effects. We will present a detailed performance evaluation and the necessary design trade-offs for small-world WiNoCs with respect to their conventional wireline counterparts. We will conclude this part of the tutorial by presenting the design of an on-chip millimeter-wave (mm-wave) wireless link as a suitable physical layer for WiNoCs. In the last part, we will complement the above discussion of planar WiNoCs by introducing wireless 3D NoCs that use inductive-coupling through-chip interfaces (TCIs) to connect stacked chips, with square coils acting as data transmitters. We will present the design and implementation of wireless 3D NoC systems, real-chip experimental results, and their interconnection techniques. In scope and content, this tutorial targets students and researchers from both academia and industry.
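The small-world effect at the heart of the second part is easy to demonstrate: adding a handful of long-range "wireless" shortcuts to a regular mesh sharply reduces the average hop count. The sketch below is a topology-only illustration using networkx (not a NoC simulator), with the mesh size and shortcut count chosen arbitrarily.

```python
import random
import networkx as nx  # pip install networkx

random.seed(1)
mesh = nx.grid_2d_graph(8, 8)            # 64-core 2D mesh, wireline only
print("mesh avg hops       :",
      round(nx.average_shortest_path_length(mesh), 2))

winoc = mesh.copy()                      # same mesh + wireless shortcuts
nodes = list(winoc.nodes)
for _ in range(8):                       # a few long-range wireless links
    winoc.add_edge(*random.sample(nodes, 2))
print("small-world avg hops:",
      round(nx.average_shortest_path_length(winoc), 2))
```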
Time | Label | Session |
---|---|---|
09:30 | M05.1 | Session 1 |
00:00 | M05.1.1 | Foundations of On-chip Communication: Performance and Power Management in 2D and 3D Multicore Platforms Radu Marculescu, Carnegie Mellon University, US |
00:00 | M05.1.2 | WiNoC: Network Architecture and Communication Resource Management Partha Pratim Pande, Washington State University, US |
11:30 | M05.2 | Session 2 |
00:00 | M05.2.1 | Millimeter-Wave Wireless Link: The Physical Layer Design for WiNoCs Deukhyoun Heo, Washington State University, US |
00:00 | M05.2.2 | 3D WiNoC Architectures Hiroki Matsutani, Keio University, JP |
Saibal Mukhopadhyay, Georgia Institute of Technology, US
Shidhartha Das, ARM Ltd., GB
Anand Raghunathan, Purdue University, US
Srimat Chakradhar, NEC Labs, US
This half-day tutorial covers a broad range of technologies for error-resilient computing and highlights the significant role of resiliency techniques in achieving high energy efficiency across different levels of abstraction (circuit, hardware architecture, and software) in modern computing systems. Safety margins added to address the impact of rising variations at nanometer geometries incur unacceptable power and performance overheads. Traditional adaptive techniques compensate for some manifestations of these variations; however, they still require margins to account for localized and fast-changing variations. The adverse impact of margins has led to a recent research focus on so-called "error-resilient" techniques, both in academia and industry. Resilient techniques permit computational errors to occur at run-time, either by operating without the full setup margin or by deliberately designing for inexact outputs. In lieu of the "always-correct" output mandated in the traditional model of computing, computing with errors enables significant improvements in energy efficiency as long as the error rate and/or the magnitude of errors are sufficiently low. Resilient techniques have wide-ranging applications that span high-performance general-purpose computing to digital signal processing (DSP) algorithms. In this tutorial, we provide an in-depth overview of error-resilient techniques encompassing circuit, micro-architectural, algorithmic, and system-architecture aspects. We organize the material into two segments. In the first, we discuss error-resilient techniques for bit-exact applications where perfect recovery from errors is a key requirement. We briefly review the existing design space of traditional adaptive techniques and motivate the case for error resiliency by analyzing the additional margins eliminated through explicit error detection and correction. We then discuss error-detection and recovery approaches for microprocessor pipelines, highlighting "Razor" as a specific example. We present measurement results from academia and industry on resilient techniques similar to Razor. The second segment of the tutorial focuses on "approximate" computing: an approach to computing that defines correctness as producing outputs of acceptable "quality". Many applications (such as web search, data analytics, sensor data processing, recognition, mining, and synthesis) have a high degree of intrinsic resilience to their underlying computations being executed incorrectly. We review software, hardware architecture, and circuit design techniques to build approximate computing systems. These new techniques significantly improve performance or energy efficiency while ensuring that the results produced are acceptable. We will conclude with a discussion of the key challenges that need to be addressed in order to facilitate a broader adoption of approximate computing.
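As one concrete instance of deliberately designing for inexact outputs, the sketch below models a truncated adder that simply drops the low-order bits of its operands (and hence the low-order carry chain, a significant share of an adder's area and energy) and measures output quality as mean relative error. The bit widths and input distribution are illustrative assumptions, not drawn from the tutorial material.

```python
import random

def approx_add(a, b, trunc_bits):
    """Add after zeroing the `trunc_bits` low bits of each operand,
    modeling an adder whose low-order carry chain was removed."""
    mask = ~((1 << trunc_bits) - 1)
    return (a & mask) + (b & mask)

random.seed(0)
pairs = [(random.getrandbits(16), random.getrandbits(16))
         for _ in range(10_000)]
for k in (2, 4, 8):
    errs = [abs((a + b) - approx_add(a, b, k)) / max(a + b, 1)
            for a, b in pairs]
    print(f"truncate {k} low bits -> mean relative error "
          f"{100 * sum(errs) / len(errs):.3f}%")
```

Whether, say, a 0.4% mean error counts as "acceptable quality" depends on the application, which is exactly why the second session pairs the circuit view with a software and applications perspective.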
Time | Label | Session |
---|---|---|
14:30 | M09.1 | Session 1 |
00:00 | M09.1.1 | Error-resilient Computing - Motivation and Example Applications Saibal Mukhopadhyay, Georgia Institute of Technology, US |
00:00 | M09.1.2 | Error-resilience for general-purpose computing - Razor Shidhartha Das, ARM Ltd, GB |
16:30 | M09.2 | Session 2 |
00:00 | M09.2.1 | Approximate Computing - A circuits and architecture perspective Anand Raghunathan, Purdue University, US |
00:00 | M09.2.2 | Approximate Computing - A software and applications perspective Srimat Chakradhar, NEC Labs, US |
Tsung-Yi Ho, National Cheng Kung University, TW
Krishnendu Chakrabarty, Duke University, US
This tutorial offers attendees an opportunity to bridge the semiconductor IC/system industry with the biomedical and pharmaceutical industries. The tutorial will first describe emerging applications in biology and biochemistry that can benefit from advances in electronic "biochips". The presenters will next describe technology platforms for accomplishing "biochemistry on a chip", and introduce the audience to both droplet-based "digital" microfluidics based on electrowetting actuation and flow-based "continuous" microfluidics based on microvalve technology. Next, the presenters will describe system-level synthesis, which includes operation-scheduling and resource-binding algorithms, and physical-level synthesis, which includes placement and routing optimizations. In this way, the audience will see how a "biochip compiler" can translate protocol descriptions provided by an end user (e.g., a chemist or a nurse at a doctor's clinic) into a set of optimized and executable fluidic instructions that will run on the underlying microfluidic platform. Testing techniques will be described to detect faults after manufacture and during field operation. A classification of defects will be presented based on data for fabricated chips. Appropriate fault models will be developed and presented to the audience. On-line and off-line reconfiguration techniques will be presented to bypass faults once they are detected. The problem of mapping a small number of chip pins to a large number of array electrodes will also be covered. Finally, sensor-feedback-based cyberphysical adaptation will be covered. A number of case studies based on representative assays and laboratory procedures will be interspersed in appropriate places throughout the tutorial.
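To give a feel for the operation-scheduling step of system-level synthesis, the sketch below list-schedules a tiny, invented assay graph onto a fixed pool of on-chip modules: an operation starts as soon as its predecessors have finished and a module is free. Real biochip synthesis additionally performs resource binding, placement, and droplet routing.

```python
# Hypothetical assay DAG: operation -> (duration in s, predecessors).
ASSAY = {
    "dilute1": (3, []),
    "dilute2": (3, []),
    "mix1":    (5, ["dilute1", "dilute2"]),
    "mix2":    (5, ["dilute1"]),
    "detect":  (4, ["mix1", "mix2"]),
}
N_MODULES = 2                      # all ops share one module pool here

start, finish = {}, {}
module_free = [0] * N_MODULES      # time at which each module frees up
while len(finish) < len(ASSAY):
    # Pick any unscheduled op whose predecessors are already scheduled.
    op = next(o for o, (_, preds) in ASSAY.items()
              if o not in finish and all(p in finish for p in preds))
    dur, preds = ASSAY[op]
    ready = max((finish[p] for p in preds), default=0)
    i = min(range(N_MODULES), key=module_free.__getitem__)
    start[op] = max(ready, module_free[i])
    finish[op] = start[op] + dur
    module_free[i] = finish[op]

for op in sorted(ASSAY, key=start.get):
    print(f"{op:8s} {start[op]:2d} -> {finish[op]:2d} s")
```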
Time | Label | Session |
---|---|---|
14:30 | M08.1 | Session 1 |
00:00 | M08.1.1 | Technology and application drivers |
00:00 | M08.1.2 | Synthesis techniques |
16:30 | M08.2 | Session 2 |
00:00 | M08.2.1 | Testing and design-for-testability |
00:00 | M08.2.2 | Cyberphysical integration and dynamic adaptation |
Alfons Crespo, Universidad Politécnica de Valencia, ES
Alejandro Alonso, Universidad Politécnica de Madrid, ES
Jon Pérez, IK4-IKERLAN, ES
Modern embedded applications typically integrate a multitude of functionalities with potentially different criticality levels into a single system. In addition, the increasing power of mono-core and multi-core processors makes it possible to integrate them on a single platform. However, this raises a number of challenges, one of them being the integration of mixed-criticality applications. System partitioning emerges as a powerful alternative for dealing with these challenges. A hypervisor allows the creation of several virtual machines that run with spatial and temporal isolation. Applications are assigned to partitions according to several criteria, such as their criticality. Resources are assigned to virtual machines to guarantee that applications' timing requirements are met. This approach is also valid for multi-core platforms. This tutorial will introduce attendees to the basic techniques in the development of partitioned high-integrity embedded systems, illustrated with an industrial case study. The development relies on the XtratuM hypervisor and supporting tools for validation, partitioning, and code and configuration-file generation. This tutorial will benefit attendees from industry, as it shows in a practical manner the basics of developing partitioned embedded systems; they will get an idea of how to integrate this approach into their current practices. Attendees from academia will get acquainted with advanced development techniques and open research topics. In addition, the availability of the development framework can form the basis of laboratory assignments in advanced courses.
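The temporal isolation a partitioning hypervisor provides can be pictured as a static cyclic schedule: a major frame is divided into fixed slots, each reserved for one partition, so an overrunning partition can never consume another partition's CPU time. The partition names, slot sizes, and layout below are invented for illustration; a real XtratuM plan is written in its XML configuration file, and production schedules interleave slots to bound each partition's latency.

```python
# Hypothetical plan: partition -> (slot length in ms, slots per frame).
PARTITIONS = {"flight_ctrl": (10, 4), "datalog": (20, 1), "hmi": (5, 2)}
MAJOR_FRAME_MS = 100

schedule, t = [], 0
for name, (slot_ms, slots) in PARTITIONS.items():
    for _ in range(slots):
        schedule.append((t, t + slot_ms, name))
        t += slot_ms
assert t <= MAJOR_FRAME_MS, "plan over-commits the major frame"
schedule.append((t, MAJOR_FRAME_MS, "idle"))   # slack left in the frame

for begin, end, who in schedule:
    print(f"{begin:3d}-{end:3d} ms  {who}")
```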
Time | Label | Session |
---|---|---|
09:30 | M01.1 | Session 1 |
00:00 | M01.1.1 | Challenges in the development of high-integrity embedded systems Jon Pérez, IK4-IKERLAN, ES |
00:00 | M01.1.2 | Mixed criticality systems based on system partitioning Alfons Crespo, Universidad Politécnica de Valencia, ES |
00:00 | M01.1.3 | The XtratuM hypervisor Alfons Crespo, Universidad Politécnica de Valencia, ES |
11:30 | M01.2 | Session 2 |
00:00 | M01.2.1 | Framework for the development of mixed criticality systems Alejandro Alonso, Universidad Politécnica de Madrid, ES |
00:00 | M01.2.2 | Use case: development of a mixed-criticality embedded system; Aerospace (Alejandro Alonso, Universidad Politécnica de Madrid, ES) and Wind-power (Jon Pérez, IK4-IKERLAN, ES) |
00:00 | M01.2.3 | Conclusion and future directions |
Hermann Härtig, Technische Universität Dresden, DE
Adam Lackorzynski, Kernkonzept GmbH, DE
Carsten Weinhold, Technische Universität Dresden, DE
Björn Döbel, Technische Universität Dresden, DE
Modern embedded systems contain an increasing number of software components with differing requirements in terms of real-time guarantees, security isolation, and reliability. In order to reduce production cost, it is desirable to consolidate many such applications onto a single hardware platform. Such consolidation requires an operating system that suits these differing application requirements. L4/Fiasco.OC is a microkernel operating system developed as a research project at TU Dresden and now commercially supported by Kernkonzept GmbH. The operating system has been continuously evolved over the past 15 years to accommodate real-time, security, and reliability use cases. Commercially, the microkernel is the foundation of Deutsche Telekom's SIMKo3 high-security smartphone, which was certified for German government use in September 2013. This tutorial will give an insight into Fiasco.OC's features. Talks by Fiasco.OC developers and researchers will explore usage scenarios. A hands-on session lets participants get first-hand experience in Fiasco.OC system setup and application development.
Time | Label | Session |
---|---|---|
14:30 | M07.1 | Session 1 |
00:00 | M07.1.1 | Why we need microkernels Hermann Härtig, Technische Universität Dresden, DE |
00:00 | M07.1.2 | Isolation for Security, Portability, and Real-Time Adam Lackorzynski, Kernkonzept GmbH, DE |
00:00 | M07.1.3 | Building a Secure System on top of Fiasco.OC Carsten Weinhold, Technische Universität Dresden, DE |
00:00 | M07.1.4 | Fiasco.OC for Reliability and Fault Tolerance Björn Döbel, Technische Universität Dresden, DE |
16:30 | M07.2 | Session 2 |
00:00 | M07.2.1 | Hands On Session (Please bring your laptop) |
00:00 | M07.2.2 | Practical Introduction to running L4/Fiasco.OC |
00:00 | M07.2.3 | System Setup and Application Development |