This paper presents a benchmark suite for evaluating a configurable computing system's infrastructure, both tools and architecture. A novel aspect of this work is the use of stressmarks: benchmarks that focus on a specific characteristic or property of interest. This contrasts with traditional approaches that use functional benchmarks, which emphasize measuring end-to-end execution time. The suite can be used to assess a broad range of configurable computing systems, including single configurable devices, multiple configurable devices, and mixed architectures such as fixed-plus-variable devices and hybrid systems. In addition, aspects particularly relevant to the domain of configurable computing, such as run-time reconfiguration and variable-precision arithmetic, are considered. The paper provides an overview of the benchmark suite, presents implementation results on an Annapolis Micro Systems WILDFORCE board, reflects on the benchmark suite developed, and briefly describes future work.
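The distinction between a stressmark and a functional benchmark can be illustrated with a minimal timing harness. The sketch below is hypothetical (the `stressmark` helper and its parameters are not from the paper, whose stressmarks target hardware properties such as reconfiguration time rather than a Python callable): it isolates one operation of interest and reports per-operation cost, instead of timing a whole application end to end.

```python
import time

def stressmark(op, iters=100_000):
    """Time one isolated operation of interest (hypothetical harness).

    A functional benchmark would instead time a complete application
    run; a stressmark deliberately narrows focus to a single property.
    """
    start = time.perf_counter()
    for _ in range(iters):
        op()
    elapsed = time.perf_counter() - start
    return elapsed / iters  # average seconds per operation

# Example: stress only integer multiplication, nothing else.
per_call = stressmark(lambda: 3 * 7)
```

The same harness shape applies whether the property under stress is arithmetic throughput, memory bandwidth, or, in the configurable-computing setting, reconfiguration latency.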
This paper describes the implementation of the hypothesis testing benchmark, one of ten kernels from the C3I (Command, Control, Communications and Intelligence) Parallel Benchmark Suite (C3IPBS). The benchmark was implemented and executed on a variety of parallel environments. This paper details the run times obtained with these implementations and offers an analysis of the results.
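The abstract does not specify which statistical test the kernel performs, so as an illustration only, here is a minimal one-sample z-test sketch: given a sample, a hypothesized mean `mu0`, and a known standard deviation `sigma` (all names hypothetical, not from the C3IPBS specification), it computes the test statistic and a two-sided p-value.

```python
import math

def one_sample_z_test(sample, mu0, sigma):
    """Two-sided one-sample z-test (illustrative kernel, not the
    actual C3IPBS benchmark code): is the sample mean consistent
    with the hypothesized population mean mu0?"""
    n = len(sample)
    mean = sum(sample) / n
    z = (mean - mu0) / (sigma / math.sqrt(n))
    # Two-sided p-value via the standard normal CDF, Phi(x) =
    # 0.5 * (1 + erf(x / sqrt(2))).
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# Sample mean equals mu0 exactly, so z = 0 and p = 1.
z, p = one_sample_z_test([5.1, 4.9, 5.2, 5.0, 4.8], mu0=5.0, sigma=0.2)
```

A kernel of this shape parallelizes naturally: the sum (and hence the mean) is a reduction that can be distributed across processors, which is the kind of structure such parallel benchmark suites typically exercise.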