Abstract: Queues are among the most commonly used data structures in applications and operating systems [1]. Up-and-coming multi-core processors force software developers to reconsider data structures in order to make them thread-safe. In real-time systems, e.g., robotic controls, parallelization is even more complicated, as such systems must guarantee to meet their mostly hard deadlines. A considerable amount of research has been carried out on wait-free objects [2] to achieve this. Wait-freedom guarantees that each potentially concurrent thread completes its operation within a bounded number of steps. However, a practical wait-free queue that supports multiple enqueue, dequeue, and read operations does not exist yet. Therefore, we present a statically allocated and statically linked queue that supports arbitrary concurrent operations. Our approach is also applicable in other scenarios where unsorted queues with statically allocated elements are used. Moreover, we introduce 'local preferences' to minimize contention. However, as the response time of our enqueue operation depends directly on the fill level, the response times of a nearly full queue remain an issue, and our approach is jitter-prone under a varying fill level. In this paper, we address all of these issues with an approach using a helping queue. The results show that we can decrease the worst-case execution time by a factor of approximately twenty. Additionally, we reduce the average response times of potentially concurrent enqueue operations in our queue. To the best of our knowledge, our wait-free queue is the best known practical solution for an unsorted thread-safe queue with multiple enqueuers, multiple dequeuers, and multiple readers.
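To make the static-allocation idea concrete, the following is a minimal sketch of an unsorted container built from a statically allocated, statically linked node pool, using a CAS retry loop. Note the hedge: a retry loop is only lock-free, not wait-free, so this does not reproduce the paper's bounded-step algorithm; all names (`pool`, `push`, `pop`) and the pool size are illustrative assumptions.

```c
#include <stdatomic.h>
#include <stddef.h>

#define POOL_SIZE 8

/* Statically allocated, statically linked nodes: no dynamic memory. */
struct node {
    int value;
    struct node *next;
};

static struct node pool[POOL_SIZE];
static _Atomic(struct node *) head = NULL;
static atomic_size_t next_free = 0;

/* Claim one node from the static pool; returns NULL when exhausted. */
static struct node *alloc_node(void) {
    size_t i = atomic_fetch_add(&next_free, 1);
    return i < POOL_SIZE ? &pool[i] : NULL;
}

/* Lock-free insert via a CAS retry loop. The paper's wait-free queue
   bounds the number of steps per operation; this sketch does not. */
static int push(int value) {
    struct node *n = alloc_node();
    if (!n) return 0;                       /* static pool exhausted */
    n->value = value;
    struct node *old = atomic_load(&head);
    do {
        n->next = old;                      /* link in front of current head */
    } while (!atomic_compare_exchange_weak(&head, &old, n));
    return 1;
}

/* Lock-free removal: unlink the current head node. */
static int pop(int *out) {
    struct node *old = atomic_load(&head);
    while (old) {
        if (atomic_compare_exchange_weak(&head, &old, old->next)) {
            *out = old->value;
            return 1;
        }
    }
    return 0;                               /* container empty */
}
```

Since the abstract's queue is explicitly unsorted, a LIFO unlink order as sketched here is permissible; the essential property illustrated is that all storage is fixed at compile time.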
Most real-time-capable dynamic storage allocators rely on conventional locking strategies to protect globally accessible data. However, lock compositions commonly do not scale well under high allocation and deallocation rates in parallel scenarios, as they lead to convoy effects. Furthermore, lock compositions cause jitter, which is often a critical factor in real-time systems. Additionally, it is often desirable to guarantee the progress of threads in order to determine the worst-case execution time. This led us to design a wait-free dynamic storage allocator (DSA), which guarantees the progress of each thread without hindering the progress of others. Our DSA implementation relies on a kind of buddy strategy with approximate best-fit; hence, it exhibits the memory wastage from internal fragmentation that is typical of this kind of allocation strategy. Preliminary tests show that we can outperform established DSA implementations, such as the well-known TLSF memory allocator, in terms of predictability. To the best of our knowledge, our DSA is the first approach using a scalable and bounded non-blocking synchronization strategy. Our wait-free DSA algorithm is applicable in real-time applications where adequate a priori knowledge about the memory requirements is available, because it uses a statically allocated heap. We think that most real-time systems, especially those with hard timing constraints, fulfill this precondition.
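The internal fragmentation the abstract mentions follows directly from buddy-style size rounding: each request is served from a power-of-two size class. A small sketch of that arithmetic (the paper's exact size classes are not specified here; `next_pow2` and `internal_frag` are illustrative names):

```c
#include <stddef.h>

/* Round a request up to the next power of two, as a buddy-style
   allocator with approximate best-fit would. */
static size_t next_pow2(size_t n) {
    size_t p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

/* Internal fragmentation of one request: the slack between the
   granted power-of-two block and the requested size. */
static size_t internal_frag(size_t request) {
    return next_pow2(request) - request;
}
```

For example, a 48-byte request is served from a 64-byte block, wasting 16 bytes; in the worst case, just over half of each block can be slack. This bounded, predictable wastage is the price of the predictable allocation time.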
Over the last 25 years, performance improvements from the steady increase of CPU clock frequencies were the driving factor for innovations in the domain of computationally intensive embedded applications. Now the free lunch is over [12]: developers have to parallelize their systems in order to achieve further improvements by integrating multi-core platforms. In embedded systems, this is even more challenging than in the domain of desktop computers, as safety properties and hard real-time constraints impose a much stronger demand for determinism. In this experience report, we present a concrete coordination and synchronization problem for a double buffering procedure that arose in our ongoing attempts to parallelize a robotic control kernel. This double buffering procedure, used by two tasks, must ensure a consistent data flow without data loss. Therefore, we present a fast, bounded, wait-free solution that does not suffer from priority inversion.
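The core double buffering idea can be sketched with C11 atomics: the writer fills the private back buffer and publishes it in a single atomic step, so neither task ever blocks or retries. This is a minimal single-writer/single-reader sketch under the assumption that the reader finishes each read before the next publish; the paper's actual procedure must solve the harder consistency problem (e.g., via a third buffer or a validity check), and all names here are illustrative.

```c
#include <stdatomic.h>

/* Two statically allocated buffers; `front` indexes the one the
   reader may consume, the writer owns the other. */
static int buffers[2];
static _Atomic int front = 0;

/* Writer task: fill the back buffer, then publish it with one
   atomic store -- no lock, no retry loop, hence bounded steps. */
static void writer_publish(int sample) {
    int back = 1 - atomic_load(&front);
    buffers[back] = sample;
    atomic_store(&front, back);
}

/* Reader task: consume whatever buffer is currently published. */
static int reader_read(void) {
    return buffers[atomic_load(&front)];
}
```

Because each operation is a fixed, short instruction sequence, the scheme is trivially bounded and cannot exhibit priority inversion: neither task ever waits on a resource held by the other.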
We introduce the major ideas of our wait-free, linearizable, and disjoint-access-parallel NCAS library, called RTNCAS. It focuses on the construction of wait-free data structure operations (DSOs) under real-time constraints. RTNCAS is able to conditionally swap multiple independent words (NCAS) in an atomic manner. Furthermore, it allows us to implement arbitrary DSOs by means of their sequential specification.
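The semantics RTNCAS provides can be stated as a sequential specification: compare N independent words against expected values and, only if every one matches, install the new values, all as one indivisible step. The sketch below captures only that specification in plain sequential code (the wait-free, concurrency-safe implementation is the paper's actual contribution; the `ncas` signature is an illustrative assumption, not the library's API):

```c
#include <stdbool.h>
#include <stddef.h>

/* Sequential specification of NCAS: if every *addrs[i] equals
   expected[i], store desired[i] into every addrs[i] and return true;
   otherwise change nothing and return false. A real implementation
   must make this appear atomic to concurrent threads. */
static bool ncas(size_t n, int *addrs[],
                 const int expected[], const int desired[]) {
    for (size_t i = 0; i < n; i++)
        if (*addrs[i] != expected[i])
            return false;            /* any mismatch: no word changes */
    for (size_t i = 0; i < n; i++)
        *addrs[i] = desired[i];      /* all matched: swap every word */
    return true;
}
```

Given such a primitive, a DSO can be derived mechanically: express the operation as "read some words, compute new values, NCAS them in", which is what "implement arbitrary DSOs by means of their sequential specification" refers to.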