In this paper we address the problem of physical node capture attacks in wireless sensor networks and provide a control-theoretic framework to model physical node capture, cloned-node detection, and revocation of compromised nodes. By combining probabilistic analysis of logical key graphs with linear control theory, we derive a dynamical model that efficiently describes network behavior under attack. Using tools from LQR and LQG optimal control theory, we develop a network response strategy that guarantees secure network connectivity and stability under attack. Detailed simulations are presented to validate the methodology.
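The abstract above does not give the model's matrices, but the LQR machinery it invokes can be illustrated with a minimal sketch. The two-state model, the matrices, and the interpretation of the controls (revocation and clone-removal rates) below are illustrative assumptions, not the paper's actual model; the gain is computed by the standard discrete-time Riccati recursion.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Iterate the discrete-time Riccati recursion to convergence and
    return the optimal state-feedback gain K, so that u = -K x
    minimizes sum(x'Qx + u'Ru)."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Hypothetical 2-state network model:
# x = [fraction of compromised keys, fraction of cloned nodes]
A = np.array([[1.02, 0.10],    # key compromise grows and is fed by clones
              [0.00, 1.05]])   # clones multiply if unchecked
B = np.array([[0.5, 0.0],      # control 1: key revocation rate
              [0.0, 0.8]])     # control 2: clone removal rate
Q = np.eye(2)                  # penalize residual compromise
R = 0.1 * np.eye(2)            # penalize response effort
K = dlqr(A, B, Q, R)

# The closed loop A - B K is stable iff its spectral radius is < 1.
print(np.max(np.abs(np.linalg.eigvals(A - B @ K))))
```

With these (assumed) numbers, the uncontrolled dynamics are unstable (both eigenvalues of A exceed 1), while the LQR feedback drives the spectral radius of the closed loop below 1, which is the stability guarantee the abstract refers to.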
App Stores for the Brain (IEEE Technology and Society Magazine, June 2015)

A large number of Brain-Computer Interfaces (BCIs) are currently under development, or being proposed, for both medical and non-medical applications. These applications include advertising, market surveys, focus groups, and gaming. For example, in 2008, the Nielsen Company acquired NeuroFocus for the development of neural engineering technologies aimed at better understanding customer needs and preferences [1]. In May 2013, Samsung, in collaboration with the University of Texas, demonstrated how BCIs could be used to control mobile devices [2]. In the same month, the first neurogaming conference gathered more than 50 involved companies [3]. In September 2013, Neurowear presented Neurocam, a wearable EEG system equipped with a camera; the system is set to automatically start recording moments of interest based on information inferred from users' neural signals [4]. Several neural engineering companies, including Emotiv [5] and NeuroSky [6], currently offer low-cost, consumer-grade BCIs and software development kits. These companies have recently introduced the concept of BCI "app stores" [7], with the purpose of facilitating the expansion of BCI applications. Future BCIs will likely be simpler to use and will require less time and user effort, while enabling faster and more accurate translation of users' intended messages. These developments raise questions about privacy and security. At the 2012 USENIX Security Symposium, researchers introduced the first BCI-enabled malicious application, referred to as "brain spyware." The application was used to extract private information, such as credit card PINs, dates of birth, and locations of residence, from users' recorded EEG signals [7].
As BCI technology spreads further (towards becoming ubiquitous), it is easy to imagine more sophisticated "spying" applications being developed for nefarious purposes. Leveraging recent neuroscience results (e.g., [8]-[11]), it may be possible to extract private information about users' memories, prejudices, and religious and political beliefs, as well as about their possible neurophysiological disorders. The extracted information could be used to manipulate or coerce users, or otherwise harm them. The impact of "brain malware" could be severe, in terms of privacy and other important values. A question arises: is it in the public interest to allow anyone unrestricted access to the private information extractable from neural signals? And if not, how should we grant such access, and how can it be managed, regulated, or otherwise controlled? While U.S. federal law protects medical information [12] and generally guards against unfair or deceptive practices [13], few rules or standards currently limit access to BCI-generated data. Importantly, platforms are immunized for apps that third parties submit, such that BCI manufacturers are not necessarily incentivized, from a le...
Applications of robotic systems have grown explosively in recent years. In 2008, more than eight million robots were deployed worldwide in factories, battlefields, and medical services. The number and the applications of robotic systems are expected to continue growing, and many future robots will be controlled by distant operators through wired and wireless communication networks. The open and uncontrollable nature of the communication media between robots and operators renders these cyber-physical systems vulnerable to a variety of cyber-security threats, many of which cannot be prevented using traditional cryptographic methods. A question thus arises: what if teleoperated robots are attacked, compromised, or taken over? In this paper, we systematically analyze cyber-security attacks against Raven II®, an advanced teleoperated robotic surgery system. We classify possible threats, and focus on denial-of-service (DoS) attacks, which cannot be prevented using available cryptographic solutions. Through a series of experiments involving human subjects, we analyze the impact of these attacks on teleoperated procedures. We use Fitts' law to quantify the impact, measuring the increase in task difficulty under DoS attacks. We then consider possible steps to mitigate the identified DoS attacks, and evaluate the applicability of these solutions to teleoperated robotics. The broader goal of our paper is to raise awareness of, and increase understanding of, emerging cyber-security threats against teleoperated robotic systems.
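Fitts' law, which the abstract above uses to quantify attack impact, predicts movement time as a linear function of an index of difficulty (ID). A minimal sketch using the standard Shannon formulation follows; the distances, widths, and the fit parameters `a` and `b` are illustrative assumptions, not values from the paper.

```python
import math

def fitts_index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits:
    ID = log2(D / W + 1)."""
    return math.log2(distance / width + 1)

def movement_time(distance, width, a=0.0, b=0.1):
    """Fitts' law: MT = a + b * ID, with a (intercept, s) and
    b (slope, s/bit) fit empirically per task and device."""
    return a + b * fitts_index_of_difficulty(distance, width)

# Illustration: if a DoS attack's jitter and packet loss shrink the
# target width an operator can reliably hit, ID and hence MT rise.
baseline = movement_time(distance=160, width=40)  # ID = log2(5) ≈ 2.32 bits
attacked = movement_time(distance=160, width=10)  # ID = log2(17) ≈ 4.09 bits
print(baseline, attacked)
```

The same mechanism works in reverse for analysis: measuring how completion times for fixed targets change under attack yields the effective increase in task difficulty, which is how the abstract frames its quantification.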
Teleoperated robots are playing an increasingly important role in military actions and medical services. In the future, remotely operated surgical robots will likely be used in more scenarios, such as battlefields and emergency response. But the rapidly growing applications of teleoperated surgery raise a question: what if the computer systems for these robots are attacked, taken over, or even turned into weapons? Our work seeks to answer this question by systematically analyzing possible cyber-security attacks against Raven II®, an advanced teleoperated robotic surgery system. We identify a slew of possible cyber-security threats, and experimentally evaluate their scope and impact. We demonstrate the ability to maliciously control a wide range of the robot's functions, and even to completely ignore or override command inputs from the surgeon. We further find that it is possible to abuse the robot's existing emergency stop (E-stop) mechanism to execute efficient (single-packet) attacks. We then consider steps to mitigate these identified attacks, and experimentally evaluate the feasibility of applying existing security solutions against these threats. The broader goal of our paper, however, is to raise awareness and increase understanding of these emerging threats. We anticipate that the majority of attacks against telerobotic surgery will also be relevant to other teleoperated robotic and co-robotic systems.