Abstract. This paper considers the problem of deploying a mobile sensor network in an unknown environment. A mobile sensor network is composed of a distributed collection of nodes, each of which has sensing, computation, communication, and locomotion capabilities. Such networks are capable of self-deployment; i.e., starting from some compact initial configuration, the nodes in the network can spread out so that the area 'covered' by the network is maximized. In this paper, we present a potential-field-based approach to deployment. The fields are constructed so that each node is repelled by both obstacles and other nodes, thereby forcing the network to spread itself throughout the environment. The approach is both distributed and scalable.
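The repulsion idea in this abstract can be sketched in a few lines. The function name, gains, inverse-square force profile, and first-order update below are illustrative assumptions, not the paper's exact construction: each node sums repulsive forces from all other nodes and from obstacle points, then takes a small overdamped step.

```python
import numpy as np

def deployment_step(positions, obstacles, k_rep=1.0, k_obs=2.0, dt=0.05):
    """One step of potential-field deployment: every node is pushed away
    from every other node and from every obstacle point (inverse-square
    repulsion here, for illustration), so a compact team spreads out."""
    positions = np.asarray(positions, dtype=float)
    forces = np.zeros_like(positions)
    for i, p in enumerate(positions):
        # repulsion from the other nodes
        for j, q in enumerate(positions):
            if i == j:
                continue
            d = p - q
            r = np.linalg.norm(d)
            forces[i] += k_rep * d / (r**3 + 1e-9)
        # repulsion from obstacle points
        for o in obstacles:
            d = p - np.asarray(o, dtype=float)
            r = np.linalg.norm(d)
            forces[i] += k_obs * d / (r**3 + 1e-9)
    # first-order (overdamped) position update
    return positions + dt * forces
```

Starting two nodes 0.5 apart with no obstacles, one step pushes them farther apart, which is the qualitative behavior the abstract describes.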
Visual and inertial sensors, in combination, are able to provide accurate motion estimates and are well-suited for use in many robot navigation tasks. However, correct data fusion, and hence overall performance, depends on careful calibration of the rigid body transform between the sensors. Obtaining this calibration information is typically difficult and time-consuming, and normally requires additional equipment. In this paper we describe an algorithm, based on the unscented Kalman filter, for self-calibration of the transform between a camera and an inertial measurement unit (IMU). Our formulation rests on a differential geometric analysis of the observability of the camera-IMU system; this analysis shows that the sensor-to-sensor transform, the IMU gyroscope and accelerometer biases, the local gravity vector, and the metric scene structure can be recovered from camera and IMU measurements alone. While calibrating the transform we simultaneously localize the IMU and build a map of the surroundings, all without additional hardware or prior knowledge about the environment in which a robot is operating. We present results from simulation studies and from experiments with a monocular camera and a low-cost IMU, which demonstrate accurate estimation of both the calibration parameters and the local scene structure.
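At the core of any unscented-Kalman-filter estimator of this kind is the unscented transform, which propagates a Gaussian through a nonlinearity via deterministically chosen sigma points. The following is a generic, self-contained sketch of the standard scaled sigma-point scheme, not the paper's full camera-IMU filter or its state parameterization.

```python
import numpy as np

def unscented_transform(mu, P, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mu, P) through a nonlinear function f
    using the standard scaled sigma-point scheme underlying the UKF."""
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n
    # columns of the scaled matrix square root give the sigma-point offsets
    S = np.linalg.cholesky((n + lam) * P)
    sigmas = [mu] + [mu + S[:, i] for i in range(n)] + [mu - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    # push every sigma point through the nonlinearity, then re-estimate moments
    ys = np.array([f(s) for s in sigmas])
    mu_y = wm @ ys
    diff = ys - mu_y
    P_y = (wc[:, None] * diff).T @ diff
    return mu_y, P_y
```

A quick sanity check is that a linear map is handled exactly: doubling the state should double the mean and quadruple the covariance.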
Mark Weiser envisioned a world in which computing is so pervasive that everyday devices can sense their relationship to us and to each other. They could, thereby, respond so appropriately to our actions that the computing aspects would fade into the background. Underlying this vision is the assumption that sensing a broad set of physical phenomena, rather than just data input, will become a common aspect of small, embedded computers, and that these devices will communicate with each other (as well as with some more powerful infrastructure) to organize and coordinate their actions. Recall the story of Sal in Weiser's article; Sal looked out her window and saw "tracks" as evidence of her neighbors' morning strolls. What sort of system did this seemingly simple functionality imply? Certainly Weiser did not envision ubiquitous cameras placed throughout the neighborhood. Such a solution would be far too heavy for the application's relatively casual nature, as well as quite invasive with respect to personal privacy. Instead, Weiser posited the existence of far less intrusive instrumentation in neighborhood spaces: perhaps smart paving stones that could detect local activity and indicate the walker's direction based on exchanges between neighboring nodes. Having marched technology forward, we are now in a position to translate this aspect of Weiser's vision to reality and apply it to a wide range of important applications, both computing and social. Other articles in this issue address the user interface-, application-, software-, and device-level design challenges associated with realizing Weiser's vision. Here, we address the challenges and opportunities of instrumenting the physical world with pervasive networks of sensor-rich, embedded computation. Such systems fulfill two of Weiser's key objectives: ubiquity, by injecting computation into the physical world with high spatial density, and invisibility, by having the nodes and collectives of nodes operate autonomously.
Of particular importance to the technical community is making such pervasive computing itself pervasive. We need reusable building blocks that can help us move away from the specialized instrumentation of each particular environment and toward reusable techniques for sensing, computing, and manipulating the physical world. The physical world presents an incredibly rich set of input modalities, including acoustics, image, motion, vibration, heat, light, moisture, pressure, ultrasound, radio, magnetism, and many more exotic modes.

This article addresses the challenges and opportunities of instrumenting the physical world with pervasive networks of sensor-rich, embedded computation. The authors present a taxonomy of emerging systems and outline the enabling technological developments.
Abstract. We consider the problem of self-deployment of a mobile sensor network. We are interested in a deployment strategy that maximizes the area coverage of the network under the constraint that each node has at least K neighbors, where K is a user-specified parameter. We propose an algorithm based on artificial potential fields that is distributed, scalable, and does not require a prior map of the environment. Simulations establish that the resulting networks have the required degree with high probability, are well connected, and achieve good coverage. We present analytical results for the coverage achievable by uniform random and symmetrically tiled network configurations and use these to evaluate the performance of our algorithm.
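The tension in this abstract, coverage versus a minimum degree of K, can be illustrated with a toy virtual-force rule. The code below is a simplified illustration of the idea, not the paper's exact force law: repel from every node to maximize coverage, but add attraction toward the K nearest nodes whenever the node's degree (neighbors within communication range) would otherwise drop below K.

```python
import numpy as np

def constrained_force(node, others, K, comm_range=1.0):
    """Toy degree-constrained virtual force: inverse-square repulsion
    from all nodes for coverage, plus attraction toward the K nearest
    nodes when fewer than K neighbors are within comm_range."""
    node = np.asarray(node, dtype=float)
    others = [np.asarray(o, dtype=float) for o in others]
    by_distance = sorted(others, key=lambda o: np.linalg.norm(node - o))
    neighbors = [o for o in others if np.linalg.norm(node - o) <= comm_range]
    f = np.zeros_like(node)
    for o in others:
        d = node - o
        r = np.linalg.norm(d)
        f += d / (r**3 + 1e-9)      # repulsion spreads the network out
    if len(neighbors) < K:
        for o in by_distance[:K]:    # pull back toward the K nearest nodes
            f += (o - node)
    return f
```

With K = 1, a node that has drifted out of range is pulled back toward its nearest peer, while a node that already has a neighbor is still pushed away from it.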
We propose three sampling-based motion planning algorithms for generating informative mobile robot trajectories. The goal is to find a trajectory that maximizes an information quality metric (e.g. variance reduction, information gain, or mutual information) and also falls within a pre-specified budget constraint (e.g. fuel, energy, or time). Prior algorithms have employed combinatorial optimization techniques to solve these problems, but existing techniques are typically restricted to discrete domains and often scale poorly in the size of the problem. Our proposed rapidly exploring information gathering (RIG) algorithms combine ideas from sampling-based motion planning with branch and bound techniques to achieve efficient information gathering in continuous space with motion constraints. We provide analysis of the asymptotic optimality of our algorithms, and we present several conservative pruning strategies for modular, submodular, and time-varying information objectives. We demonstrate that our proposed techniques find optimal solutions more quickly than existing combinatorial solvers, and we provide a proof-of-concept field implementation on an autonomous surface vehicle performing a wireless signal strength monitoring task in a lake.
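The branch-and-bound idea the abstract mentions can be shown on a small discrete example. This is an illustration of budgeted informative planning with optimistic pruning only, not the RIG algorithms themselves; the graph, the modular (additive, each node counted once) objective, and the bound (sum of all unvisited rewards) are assumptions made for the sketch.

```python
def best_informative_path(start, rewards, edges, budget):
    """Exhaustive search for the highest-reward path within a travel
    budget, pruning any partial path whose value plus an optimistic
    bound on remaining reward cannot beat the incumbent solution."""
    best = {'value': float('-inf'), 'path': None}

    def bound(visited):
        # optimistic: assume every unvisited reward is still collectable
        return sum(r for v, r in rewards.items() if v not in visited)

    def search(v, visited, cost, value, path):
        if value > best['value']:
            best.update(value=value, path=list(path))
        if value + bound(visited) <= best['value']:
            return  # prune: even the optimistic bound cannot beat the incumbent
        for (a, b), c in edges.items():
            if a == v and cost + c <= budget:
                gain = 0 if b in visited else rewards.get(b, 0)
                search(b, visited | {b}, cost + c, value + gain, path + [b])

    search(start, {start}, 0.0, rewards.get(start, 0), [start])
    return best['path'], best['value']
```

On a three-node graph, the search correctly prefers a longer detour that collects more total information when the budget allows it.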
We present the design and implementation of a real-time, vision-based landing algorithm for an autonomous helicopter. The landing algorithm is integrated with algorithms for visual acquisition of the target (a helipad) and navigation to the target from an arbitrary initial position and orientation. We use vision for precise target detection and recognition and a combination of vision and GPS for navigation. The helicopter updates its landing target parameters based on vision and uses an onboard behavior-based controller to follow a path to the landing site. We present significant results from flight trials in the field, which demonstrate that our detection, recognition, and control algorithms are accurate, robust, and repeatable.
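Moment invariants are a standard tool for recognizing planar landing targets such as helipads, because they are unchanged by translation, scale, and in-plane rotation of the shape. The following minimal sketch computes the first Hu invariant of a binary image; the function name and interface are illustrative, and this is not the paper's detection pipeline.

```python
import numpy as np

def hu_phi1(img):
    """First Hu moment invariant (eta20 + eta02) of a binary image:
    a translation-, scale-, and rotation-invariant shape descriptor."""
    img = np.asarray(img, dtype=float)
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()                       # zeroth moment (shape area)
    xbar = (xs * img).sum() / m00         # centroid
    ybar = (ys * img).sum() / m00
    mu20 = ((xs - xbar) ** 2 * img).sum() # second central moments
    mu02 = ((ys - ybar) ** 2 * img).sum()
    # normalized central moments: eta_pq = mu_pq / m00^(1 + (p+q)/2)
    eta20, eta02 = mu20 / m00**2, mu02 / m00**2
    return eta20 + eta02
```

Translating a shape within the frame leaves the descriptor unchanged, which is what makes it usable for recognizing a target from varying viewpoints.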
Severe energy limitations and a paucity of computation pose a set of difficult design challenges for sensor networks. Recent progress in two seemingly disparate research areas, namely distributed robotics and low-power embedded systems, has led to the creation of mobile (or robotic) sensor networks. Autonomous node mobility brings with it its own challenges, but also alleviates some of the traditional problems associated with static sensor networks. We illustrate this by presenting the design of the Robomote, a robot platform that functions as a single mobile node in a mobile sensor network. We briefly describe two case studies in which the Robomote has been used for tabletop experiments with a mobile sensor network.