Systematic testing of autonomous vehicles operating in complex real-world scenarios is a difficult and expensive problem. We present Paracosm, a framework for writing systematic test scenarios for autonomous driving simulations. Paracosm allows users to programmatically describe complex driving situations with specific features, e.g., road layouts and environmental conditions, as well as reactive temporal behaviors of other cars and pedestrians. This enables systematic exploration of the state space, both for visual features and for reactive interactions with the environment. We define a notion of test coverage for parameter configurations based on combinatorial testing and low-dispersion sequences. By fuzzing parameter configurations, our automatic test generator can maximize coverage of various behaviors and find problematic cases. Through empirical evaluations, we demonstrate the capabilities of Paracosm in programmatically modeling parameterized test environments and in finding problematic scenarios.
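To make the coverage idea concrete, the following is a minimal sketch (our own illustration, not Paracosm's API) of how a Halton sequence, a standard low-discrepancy construction commonly used when low dispersion is desired, can spread test configurations over a scenario parameter space. The parameter names and ranges are purely hypothetical.

def halton(index, base):
    # index-th element (1-based) of the van der Corput sequence in the given base
    result, f, i = 0.0, 1.0, index
    while i > 0:
        f /= base
        result += f * (i % base)
        i //= base
    return result

# Hypothetical scenario parameters and ranges; Paracosm's actual parameters differ.
PARAMS = [("fog_density", 0.0, 1.0), ("pedestrian_speed", 0.5, 2.0)]
BASES = [2, 3]  # pairwise-coprime bases, one per parameter dimension

def configuration(i):
    # Map the i-th point of the low-discrepancy sequence into the parameter ranges.
    return {name: lo + halton(i + 1, base) * (hi - lo)
            for (name, lo, hi), base in zip(PARAMS, BASES)}

for i in range(4):
    print(configuration(i))

Successive configurations fill the parameter box evenly rather than clustering, which is the property the dispersion-based coverage notion rewards.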
Parametric computer-aided design (CAD) enables the description of a family of objects, wherein each valid combination of parameter values results in a different final form. Although graphical user interface (GUI)-based CAD tools are significantly more popular than programmatic interfaces, GUI operations do not carry a semantic description and are therefore brittle with respect to changes in parameter values. Programmatic interfaces, on the other hand, are more robust due to an exact specification of how the operations are applied. However, programming is unintuitive and has a steep learning curve. In this work, we link the interactivity of a GUI with the robustness of programming. Inspired by program synthesis by example, our technique synthesizes code representative of selections made by users in a GUI. Through experiments, we demonstrate that our technique can synthesize relevant and robust sub-programs in a reasonable amount of time. A user study reveals that our interface offers significant improvements over a programming-only interface.
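The robustness argument can be illustrated with a toy example. The sketch below is ours and only mimics the idea; it is not the synthesized code or the API of the system described above.

# Hypothetical illustration: a synthesized semantic query survives parameter changes
# that invalidate a raw, position-based GUI selection.

def plate_edges(height, n_holes):
    # Edges of a parametric plate: hole edges come first in construction order,
    # followed by the four top edges at z = height.
    holes = [("hole", i, 0.0) for i in range(n_holes)]
    tops = [("top", i, height) for i in range(4)]
    return holes + tops

# GUI-style selection: the raw positions the user clicked (top edges when n_holes = 2).
clicked = [2, 3, 4, 5]

# Synthesized sub-program: select edges by a semantic predicate instead of position.
def top_edges(edges):
    z_max = max(z for _, _, z in edges)
    return [i for i, (_, _, z) in enumerate(edges) if z == z_max]

assert top_edges(plate_edges(height=1.0, n_holes=2)) == clicked       # agrees initially
assert top_edges(plate_edges(height=1.0, n_holes=3)) == [3, 4, 5, 6]  # indices shift,
# but the predicate still picks out the top edges; the stored clicks no longer do.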
Since its introduction in 1985, competitive analysis has been a widely used tool for measuring the performance of online algorithms. Despite its simplicity and popularity, competitive analysis has its own set of drawbacks, which led to the development of other performance measures. However, these measures have seldom been applied to problems in other domains. Recently, Boyar et al. (A comparison of performance measures via online search, \textit{Theoretical Computer Science}, 2014) studied the online search problem using various performance measures for non-preemptive algorithms. We extend that work by considering preemptive \textit{threat-based} algorithms and evaluating them using competitive analysis, bijective analysis, average-case analysis, and relative interval analysis. For competitive analysis and average-case analysis, our findings are in contrast with those of Boyar et al., whereas for bijective and relative interval analysis our findings complement theirs.
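For reference, one standard formulation of the competitive ratio for a maximization problem such as online search, stated in our own notation rather than the paper's: an online algorithm $\mathrm{ALG}$ is $c$-competitive if its return is within a factor $c$ of the offline optimum on every input sequence,
\[
  \mathrm{ALG}\ \text{is}\ c\text{-competitive} \iff \sup_{\sigma}\frac{\mathrm{OPT}(\sigma)}{\mathrm{ALG}(\sigma)} \le c ,
\]
where $\sigma$ ranges over admissible price sequences and the competitive ratio of $\mathrm{ALG}$ is the smallest such $c$. The other measures named above (bijective, average-case, and relative interval analysis) compare algorithms over the input distribution or pairwise over inputs rather than through this worst-case ratio.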