The popularity of sensor networks and their use in critical domains such as military and healthcare operations make them attractive targets for malicious attacks. In such contexts, the trustworthiness of sensor data and its provenance is critical for decision-making. In this demonstration, we present an efficient and secure approach for transmitting provenance information about sensor data. Our approach uses lightweight in-packet Bloom filters that are encoded as sensor data travels through intermediate sensor nodes and are decoded and verified at the base station. The technique also defends against malicious attacks such as packet dropping and can identify the node responsible for a dropped packet, making it possible to modify the transmission route to avoid nodes that may be compromised or malfunctioning. Our technique is designed to create a trustworthy environment for sensor nodes in which only trusted data is processed.
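To make the encode-then-verify idea concrete, here is a minimal sketch of in-packet Bloom filter provenance. This is not the authors' implementation: the filter size, the number of hash functions, the SHA-256-based hashing, and the `node_id:sequence` mark format are all illustrative assumptions. Each forwarding node ORs its mark into the packet's filter, and the base station checks that every node on the expected route left its mark.

```python
import hashlib

class BloomFilter:
    """Fixed-size Bloom filter small enough to carry in a packet header."""

    def __init__(self, size_bits=64, num_hashes=3):
        self.size = size_bits
        self.k = num_hashes
        self.bits = 0  # the filter itself, as an integer bit vector

    def _positions(self, item):
        # Derive k bit positions from a SHA-256 digest of the item.
        digest = hashlib.sha256(item.encode()).digest()
        return [int.from_bytes(digest[2 * i:2 * i + 2], "big") % self.size
                for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits & (1 << p) for p in self._positions(item))

def encode_provenance(path_node_ids, packet_seq):
    """Simulate each node on the path ORing its mark into the filter."""
    bf = BloomFilter()
    for nid in path_node_ids:
        bf.add(f"{nid}:{packet_seq}")
    return bf

def verify_at_base_station(bf, expected_path, packet_seq):
    """Base station: check that every expected node left its mark."""
    return all(f"{nid}:{packet_seq}" in bf for nid in expected_path)
```

A missing mark (e.g. a node that dropped or never saw the packet) makes verification fail, which is what lets the base station localize packet drops; Bloom filters admit false positives but never false negatives, so a genuine mark is never rejected.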
This paper describes a new method for narrative frame alignment that extends and supplements graph-theoretic models from the domain of fiction to the domain of nonfiction news articles. Preliminary tests of this method against a corpus of 24 articles related to private security firms operating in Iraq and the Blackwater shooting of 2007 show that prior graph-similarity methods can work but require a narrower entity set than commonly occurs in nonfiction texts. They also show that alignment procedures sensitive to abstracted event sequences can accurately highlight similar narratological moments across documents despite syntactic and lexical differences. An evaluation against LDA on both the event-sequence lists and the source sentences is provided for performance comparison. Next steps include merging the semantic and graph-analytic approaches and expanding the test corpus.
The ongoing COVID-19 pandemic has brought surveillance and privacy concerns to the forefront, given that contact tracing has been seen as a very effective tool to prevent the spread of infectious disease and that public authorities and government officials hope to use it to contain the spread of COVID-19. On the other hand, rejection of contact tracing tools has also been widely reported, partly due to privacy concerns. We conducted an online survey to identify participants' privacy concerns and risk perceptions during the ongoing COVID-19 pandemic. Our results contradict media claims that people are more willing to share their private information in a public health crisis. We identified significant differences depending on the information recipient, the type of device, and the intended purpose, thereby refining those claims rather than suggesting a fundamental shift. We note that participants' privacy preferences are largely shaped by their perceived autonomy and the perceived severity of consequences related to privacy risks. In contrast, even during an ongoing pandemic, health risk perceptions had limited influence on participants' privacy preferences: only the perceived newness of the risk weakly increased their comfort level. Finally, our results show that participants' computer expertise has a positive influence on their privacy preferences, while their knowledge of security makes them less comfortable with sharing.
Android and iOS mobile operating systems use permissions to enable phone owners to manage access to their device's resources. Both systems provide resource access dialogues at first use and per-resource controls. Android continues to offer permission manifests in the Google Play Store for older apps but is transitioning away from this. Neither manifests nor first-use dialogues enable people to easily compare apps based on resource requests and the corresponding privacy and security risks. Without the ability to compare resource requests when choosing an app, customers cannot select the apps that request fewer resources. Unnecessary and excessive permission requests, overuse of resources, information exfiltration, and risky apps are endemic. To address this issue, we built upon past work in warning science and risk communication to design multimedia indicators that communicate the aggregate privacy and security risk associated with an app. Specifically, we provided participants with a privacy rating using the familiar padlock icon and used audio notifications to either warn against or reinforce user choices. We empirically tested participants' app decisions with these padlock icons and audio notifications. The results showed that people given both visual cues and audio feedback are more likely to make app choices that are inversely correlated with the resources requested by the app. Those with neither indicator made decisions reflecting only app rating, while decisions made by those with either the audio or the visual indicator were sometimes inversely correlated with resource requests. This illustrates that simple, clear communication about an app's aggregate risk, as opposed to atomic resource requests, changes participants' app selections, potentially mitigating the prevailing state of information overuse and potential abuse. Additionally, neither the visual indicator nor the audio feedback affected the time participants required to make a decision.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context in which an article is cited and indicate whether it provides supporting or contrasting evidence. scite is used by students and researchers around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.