We apply a distance-based Bell-test analysis method [E. Knill et al., Phys. Rev. A 91, 032105 (2015)] to three experimental data sets where conventional analyses failed or required additional assumptions. The first is produced from a new classical source exploiting a "coincidence-time loophole," for which standard analysis falsely shows a Bell violation. The second is from a source previously shown to violate a Bell inequality; the distance-based analysis agrees with the previous results but requires fewer assumptions. The third data set shows no violation under standard analysis despite the high source quality, but is shown to have a strong violation with the distance-based analysis method.
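As background for what a "Bell violation" under standard analysis means, here is a minimal sketch of the conventional CHSH statistic computed from ±1 measurement outcomes. This is not the paper's distance-based method; the function names and toy values are purely illustrative.

```python
import math

def correlator(pairs):
    """Average product of paired +/-1 outcomes for one setting combination."""
    return sum(a * b for a, b in pairs) / len(pairs)

def chsh(e_ab, e_abp, e_apb, e_apbp):
    """CHSH statistic S; |S| <= 2 for any local-realistic model,
    while quantum mechanics allows up to 2*sqrt(2)."""
    return e_ab + e_abp + e_apb - e_apbp

# Ideal quantum correlators at the optimal settings are +/- 1/sqrt(2),
# giving S = 2*sqrt(2), above the classical bound of 2.
q = 1 / math.sqrt(2)
s = chsh(q, q, q, -q)
```

A finite-statistics analysis of real data must additionally bound the probability of seeing S > 2 by chance, which is where assumption-heavy standard methods and the distance-based method differ.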
Future wireless-internet-enabled devices will be increasingly powerful, supporting many more applications, including one of the most crucial: security. Although SoCs offer more resistance to bus-probing attacks, power/EM attacks on cores and network snooping attacks by malicious code remain relevant. This paper proposes a methodology for security on NoCs at both the network level (transport layer) and the core level (application layer). For the first time, a low-cost security wrapper design is presented, which prevents unencrypted keys from leaving the cores and the NoC. This is crucial to prevent untrusted software on or off the NoC from gaining access to keys. At the core level (application layer), power analysis attacks are examined for the first time for parallel and adiabatic architectural cores. With the emergence of secure IP cores in the market, a security methodology for designing NoCs is crucial for supporting future wireless-internet-enabled devices.
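To illustrate the kind of power analysis attack examined here, below is a minimal toy sketch of correlation-based key recovery under the classic Hamming-weight leakage model. The secret key, the XOR target operation, and the noiseless traces are all assumptions for illustration; they are not the paper's experimental setup.

```python
SECRET_KEY = 0x3A  # hypothetical key byte processed inside a core

def hamming_weight(x):
    return bin(x).count("1")

# Simulated noiseless power traces: one sample per known input byte, assuming
# power draw tracks the Hamming weight of (input XOR key) on the data path.
plaintexts = list(range(256))
traces = [hamming_weight(p ^ SECRET_KEY) for p in plaintexts]

def pearson(xs, ys):
    """Pearson correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# The attacker correlates each key guess's predicted leakage with the traces;
# the correct guess produces the highest correlation.
best_guess = max(
    range(256),
    key=lambda k: pearson([hamming_weight(p ^ k) for p in plaintexts], traces),
)
```

Countermeasures such as the parallel and adiabatic core architectures studied in the paper aim to flatten exactly this data-dependent power signature.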
SUMMARY Empirical systems research is facing a dilemma. Minor aspects of an experimental setup can have a significant impact on its associated performance measurements and potentially invalidate conclusions drawn from them. Examples of such influences, often called hidden factors, include binary link order, process environment size, compiler-generated randomized symbol names, and group scheduler assignments. The growth in complexity and size of modern systems will further aggravate this dilemma, especially given the time pressure of producing results. How can one trust any reported empirical analysis of a new idea or concept in computer science? DataMill is a community-based, service-oriented, open benchmarking infrastructure for rigorous performance evaluation. DataMill facilitates producing robust, reliable, and reproducible results. The infrastructure incorporates the latest results on hidden factors and automates the variation of these factors. DataMill is also of interest for research on performance evaluation itself. The infrastructure supports quantifying the effect of hidden factors, disseminating the research results beyond mere reporting. It provides a platform for investigating interactions and composition of hidden factors. This paper discusses experience gained through creating and using an open benchmarking infrastructure. Multiple research groups participate in and have used DataMill. Furthermore, DataMill has been used for a performance competition at the International Conference on Runtime Verification (RV) 2014 and is currently hosting the RV 2015 competition. This paper includes a summary of our experience hosting the first RV competition.
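Automated variation of hidden factors can be pictured as a full-factorial experiment plan: one run per combination of factor levels, so factor effects can be separated from the effect under study. The sketch below is a minimal illustration of that idea; the factor names and levels are assumptions and do not reflect DataMill's actual interface.

```python
from itertools import product

# Hypothetical hidden factors and levels for illustration only.
FACTORS = {
    "env_padding_bytes": [0, 1024, 4096],   # process environment size
    "link_order": ["default", "alphabetical", "random"],
    "aslr": ["on", "off"],
}

def experiment_plan(factors):
    """Yield one configuration dict per combination of factor levels
    (a full-factorial design over the hidden factors)."""
    names = sorted(factors)
    for levels in product(*(factors[n] for n in names)):
        yield dict(zip(names, levels))

plan = list(experiment_plan(FACTORS))  # 3 * 3 * 2 = 18 distinct setups
```

Running the benchmark once per setup (rather than once under a single fixed setup) is what lets an infrastructure like DataMill report results that are robust to these factors.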
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context in which a citation appears and indicate whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.