Abstract-Cooperative adaptive cruise control and platooning are well-known applications in the field of cooperative automated driving. However, an extension towards maneuvering is desired to accommodate common highway maneuvers, such as merging, and to enable urban applications. To this end, a layered control architecture is adopted. In this architecture, the tactical layer hosts the interaction protocols, describing the wireless information exchange to initiate the vehicle maneuvers, supported by a novel wireless message set, whereas the operational layer involves the vehicle controllers that realize the desired maneuvers. This hierarchical approach was the basis for the Grand Cooperative Driving Challenge (GCDC), which was held in May 2016 in The Netherlands. The GCDC provided the opportunity for participating teams to cooperatively execute a highway lane-reduction scenario and an urban intersection-crossing scenario. The GCDC was set up as a competition and hence also involved an assessment of the teams' individual performance in a cooperative setting. As a result, the hierarchical architecture proved to be a viable approach, and the GCDC appeared to be an effective instrument to advance the field of cooperative automated driving.

Index Terms-Cooperative driving, interaction protocol, controller design, vehicle platoons, wireless communications.

A. Voronov and C. Englund are with RISE Viktoria,
Abstract-SDRAM is a shared resource in modern multi-core platforms executing multiple real-time (RT) streaming applications. It is crucial to analyze the minimum guaranteed SDRAM bandwidth to ensure that the requirements of the RT streaming applications are always satisfied. However, deriving the worst-case bandwidth (WCBW) is challenging because of the diverse memory traffic with variable transaction sizes. In fact, existing RT memory controllers either do not efficiently support variable transaction sizes or do not provide an analysis to tightly bound WCBW in their presence. We propose a new mode-controlled data-flow (MCDF) model to capture the command scheduling dependencies of memory transactions with variable sizes. The WCBW can be obtained by employing an existing tool to automatically analyze our MCDF model rather than using existing static analysis techniques, which in contrast to our model are hard to extend to cover different RT memory controllers. Moreover, the MCDF analysis can exploit static information about known transaction sequences provided by the applications or by the memory arbiter. Experimental results show that a 77% improvement in WCBW can be achieved compared to the case without known transaction sequences. In addition, the results demonstrate that the proposed MCDF model outperforms state-of-the-art analysis approaches and improves the WCBW by 22% without known transaction sequences.
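As a back-of-the-envelope illustration of why variable transaction sizes complicate a WCBW bound, the sketch below takes, over a set of hypothetical transaction sizes, the minimum ratio of transferred bytes to worst-case service latency. The sizes, latencies, and function names are invented for illustration and are not taken from the paper's MCDF analysis.

```python
# Hypothetical worst-case service latencies (ns) per transaction size (bytes).
# Small transactions amortize fewer data cycles over the same command
# overhead, so they are the least bandwidth-efficient.
WC_LATENCY_NS = {32: 60.0, 64: 80.0, 128: 120.0}

def worst_case_bandwidth(sizes):
    """Guaranteed bytes/ns when any transaction size in `sizes` may occur:
    the bound is set by the least efficient size in the mix."""
    return min(size / WC_LATENCY_NS[size] for size in sizes)

# Static knowledge of the transaction sequence (e.g., from the applications
# or the memory arbiter) can exclude inefficient sizes and raise the bound.
bw_any = worst_case_bandwidth([32, 64, 128])   # all sizes possible
bw_seq = worst_case_bandwidth([128])           # sequence known to use 128 B only
```

This toy bound ignores command scheduling dependencies between consecutive transactions, which is exactly what the MCDF model is introduced to capture; it only shows why exploiting known transaction sequences tightens the WCBW.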
The goal of buffer allocation for real-time streaming applications, modeled as dataflow graphs, is to minimize total memory consumption while reserving sufficient space for each production without overwriting any live tokens and guaranteeing the satisfaction of real-time constraints. We present a buffer allocation solution for dataflow graphs scheduled on a system without back-pressure. Our contributions are: 1) We extend the available dataflow techniques by applying best-case analysis. 2) We introduce dominator-based relative life-time analysis. For our benchmark set, it exhibits up to 12% savings in memory consumption compared to traditional absolute life-time analysis. 3) We investigate the effect of variation in execution times on the buffer sizes for systems without back-pressure. It turns out that reducing the variation in execution times reduces the buffer sizes. 4) We compare the buffer allocation techniques for systems with and without back-pressure. For our benchmark set, we show that the system with back-pressure reduces the total memory consumption by as much as 30% compared to the system without back-pressure. Our benchmark set includes wireless communications and multimedia applications.

I. INTRODUCTION

Real-time streaming applications, such as multimedia streaming and wireless transceivers, are becoming increasingly complex. They have strict end-to-end latency and throughput requirements, and run continuously, processing virtually infinite input sequences in a pipelined manner.
Since their performance must meet rigorous standards, they are often mapped onto Heterogeneous Multi-Processor (HMP) platforms, and both simulation and formal analytical techniques are used to verify the timing behavior. Dataflow is a well-known temporal analysis and programming model that is well suited to modeling concurrent real-time streaming applications [11], [7], [21]. In dataflow, an application is modeled as a directed graph, where nodes (actors) represent processing elements and edges represent data dependencies. In static variants of dataflow, bounds on actor execution times and on the number of data items (tokens) consumed/produced on input/output edges for each execution (firing) of an actor are known at compile time, and for these variants, there exist techniques to verify real-time requirements such as deadlock-freedom and execution in bounded memory. In dataflow, data is communicated through queues with First-In-First-Out (FIFO) behavior. The computation of the minimum amount of memory needed by the FIFO queues (edges) of an application modeled as a dataflow graph, such that it meets its real-time requirements, is called buffer sizing [13], [10], [23]. Often, execution platforms are equipped with a back-pressure mechanism. On such a platform, for an actor to be able to fire, not only must a sufficient number of tokens be available on each of its input edges, but a sufficient amount of space must also be available to produce tokens on each of its output edges. However, on a platform without back-pressure, an actor can fire as soon ...
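To make the firing rules above concrete, here is a minimal sketch (not from the paper) of a dataflow actor's enabling condition, with and without back-pressure. The function name, token counts, and rates are hypothetical and chosen purely for illustration.

```python
def can_fire(in_tokens, in_rates, out_space=None, out_rates=None):
    """Return True if the actor is enabled.

    Every input edge i must hold at least in_rates[i] tokens.
    With back-pressure (out_space given), every output edge j must
    additionally have at least out_rates[j] free slots; without
    back-pressure, output buffers never block the firing.
    """
    enough_inputs = all(t >= r for t, r in zip(in_tokens, in_rates))
    if out_space is None:  # no back-pressure: inputs alone decide
        return enough_inputs
    enough_space = all(s >= r for s, r in zip(out_space, out_rates))
    return enough_inputs and enough_space
```

The sketch shows why the two platform types lead to different buffer-sizing problems: with back-pressure a full output FIFO stalls the producer, so buffers may be sized smaller at the cost of blocking, whereas without back-pressure the buffers must be sized so that a firing can never overwrite live tokens.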