The rigorous application of static timing analysis requires a large and costly amount of detailed knowledge of the hardware and software components of the system. Probabilistic Timing Analysis has the potential to reduce the weight of that demand. In this paper, we present a sound measurement-based probabilistic timing analysis technique based on Extreme Value Theory. In all the experiments made as part of this work, the timing bounds determined by our technique were less than 15% pessimistic in comparison with the tightest possible bounds obtainable with any probabilistic timing analysis technique. As a point of interest to industrial users, our technique also requires a comparatively low number of measurement runs of the program under analysis; fewer than 650 runs were needed for the benchmarks presented in this paper.
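The abstract above projects worst-case execution times from measurements using Extreme Value Theory. The following sketch illustrates the general block-maxima/Gumbel flavour of such an analysis; it is not the authors' exact method, and the measured execution times are synthetic placeholder data.

```python
# Sketch of EVT-based pWCET estimation (Gumbel fit on block maxima),
# illustrating the general MBPTA idea; not the paper's exact technique.
import math
import random
import statistics

random.seed(1)
# Hypothetical measured execution times (in cycles) from repeated runs.
measurements = [10_000 + random.expovariate(1 / 150) for _ in range(640)]

# Group runs into blocks and keep each block's maximum (block-maxima EVT).
block = 32
maxima = [max(measurements[i:i + block])
          for i in range(0, len(measurements), block)]

# Fit a Gumbel distribution to the maxima by the method of moments.
mean, std = statistics.mean(maxima), statistics.stdev(maxima)
beta = std * math.sqrt(6) / math.pi           # scale
mu = mean - 0.5772156649 * beta               # location (Euler-Mascheroni)

def pwcet(p):
    """Execution-time bound exceeded with probability at most p."""
    return mu - beta * math.log(-math.log(1 - p))

print(f"pWCET at exceedance 1e-6: {pwcet(1e-6):.0f} cycles")
```

Lower exceedance probabilities yield larger (more conservative) bounds, which is the monotonic behaviour one expects from the fitted tail.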
Abstract-Multicore processors (CMPs) represent a good solution to provide the performance required by current and future hard real-time systems. However, it is difficult to compute a tight WCET estimation for CMPs due to interferences that tasks suffer when accessing shared hardware resources. We propose an analyzable JEDEC-compliant DDRx SDRAM memory controller (AMC) for hard real-time CMPs that reduces the impact of memory interferences caused by other tasks on WCET estimation, providing a predictable memory access time and allowing the computation of tight WCET estimations.
Measurement-Based Probabilistic Timing Analysis (MBPTA) has been recently proposed as a viable method to compute probabilistic worst-case execution time (pWCET) bounds for programs with hard real-time constraints. As a key trait, MBPTA needs a comparatively small number of observation runs, made on execution platforms to which MBPTA can be applied, to project the tail of the probability of occurrence of worst-case execution time durations of individual programs. In order for the use of MBPTA to fit the bill of industrial-quality development, it is imperative to understand what factors might threaten the trustworthiness of the pWCET computation. This paper addresses that important question by: (i) identifying the combined characteristics of applications and hardware resources that might lead to optimistic pWCET bounds; (ii) describing why this may occur; and (iii) providing the user with means to detect those cases so that trustworthiness is restored. In particular, we present a method for detecting risk scenarios for time-randomised caches, based on principles that apply to any other time-randomised resource which may challenge the application of MBPTA.
Multicore processors are an effective solution to cope with the performance requirements of real-time embedded systems due to their good performance-per-watt ratio and high performance capabilities. Unfortunately, their use in integrated architectures such as IMA or AUTOSAR is limited by the fact that multicores do not guarantee a time composable behavior for the applications: the WCET of a task depends on inter-task interferences introduced by other tasks running simultaneously. This article focuses on the off-chip memory system: the hardware shared resource with the highest impact on the WCET and hence the main impediment for the use of multicores in integrated architectures. We present an analytical model that computes the worst-case delay, also known as Upper Bound Delay (UBD), that a memory request can suffer due to memory interferences generated by other co-running tasks. By considering the UBD in the WCET analysis, the resulting WCET estimation is independent of the other tasks, hence ensuring the time composability property and enabling the use of multicores in integrated architectures. We propose a memory controller for hard real-time multicores compliant with the analytical model that implements extra hardware features to deal with refresh operations and interferences generated by co-running non-hard real-time tasks.
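The UBD idea above can be conveyed with a deliberately simplified bound: under round-robin arbitration, a request waits for at most one in-flight request from each other core, plus a pending refresh. This is an illustrative formula with made-up parameter values, not the paper's actual analytical model.

```python
# Hedged sketch of an upper-bound-delay (UBD) style computation for a
# round-robin memory controller; a simplified illustration only, not the
# paper's exact model.
def ubd_cycles(n_cores: int, worst_request_latency: int, refresh_delay: int) -> int:
    # Worst case: every other core has a request ahead of ours, and a
    # DRAM refresh is also pending.
    return (n_cores - 1) * worst_request_latency + refresh_delay

# Example: 4 cores, 40-cycle worst per-request latency, 128-cycle refresh.
print(ubd_cycles(4, 40, 128))  # → 248
```

Because the bound depends only on the core count and per-request worst-case latency, not on which tasks actually co-run, it preserves the time composability property the abstract describes.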
Abstract-Current Operating Systems (OS) perceive the different contexts of Simultaneous Multithreaded (SMT) processors as multiple independent processing units, although, in reality, threads executed in these units compete for the same hardware resources. Furthermore, hardware resources are assigned to threads implicitly as determined by the SMT instruction fetch (Ifetch) policy, without the control of the OS. Both factors cause a lack of control over how individual threads are executed, which can frustrate the work of the job scheduler. This presents a problem for general purpose systems, where the OS job scheduler cannot enforce priorities, and also for embedded systems, where it would be difficult to guarantee worst-case execution times. In this paper, we propose a novel strategy that enables a two-way interaction between the OS and the SMT processor and allows the OS to run jobs at a certain percentage of their maximum speed, regardless of the workload in which these jobs are executed. In contrast to previous approaches, our approach enables the OS to run time-critical jobs without dedicating all internal resources to them, so that non-time-critical jobs can make significant progress as well and without significantly compromising overall throughput. In fact, our mechanism, in addition to fulfilling OS requirements, achieves 90 percent of the throughput of one of the best currently known fetch policies for SMTs.
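The two-way OS/SMT interaction above can be pictured as a feedback loop: the OS grows a critical thread's resource share until its observed speed reaches the target fraction of its standalone speed. The sketch below is a toy illustration of that loop; the IPC model is a made-up placeholder, not the paper's mechanism.

```python
# Hedged sketch of OS-driven resource-share tuning for an SMT thread;
# the IPC-vs-share model is a placeholder assumption for illustration.
def tune_share(target_pct, measure_ipc, standalone_ipc, step=5):
    share = target_pct  # initial guess: share (in percent) equals target
    for _ in range(20):
        # Stop once the thread runs at the target fraction of full speed.
        if measure_ipc(share) >= target_pct / 100 * standalone_ipc:
            return share
        share = min(100, share + step)
    return share

# Placeholder model: IPC grows linearly with share above a fixed overhead;
# at share=100 it reaches the standalone IPC of 2.0.
def fake_ipc(share):
    return 2.0 * max(0.0, share - 20) / 80

print(tune_share(70, fake_ipc, 2.0))  # → 80
```

The point of the sketch is that the required share can exceed the target speed percentage (here, 80% share for 70% speed), which is why an explicit feedback interface between OS and processor is needed rather than a fixed allocation.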