Identifying the locations of nodes in wireless sensor networks (WSNs) is critical to both network operations and most application-level tasks. Sensor nodes equipped with Global Positioning System (GPS) devices are aware of their locations with a precision of a few meters. However, installing GPS devices on a large number of sensor nodes is not only expensive but also affects the form factor of these nodes. Moreover, GPS-based localization is not applicable in indoor environments such as buildings. There exists an extensive body of research literature that aims at obtaining absolute as well as relative spatial locations of nodes in a WSN without requiring specialized hardware at large scale. The typical approach employs only a limited number of anchor nodes that are aware of their own locations, and then infers the locations of non-anchor nodes using graph-theoretic, geometric, statistical, optimization, and machine learning techniques. Thus, the literature represents a very rich ensemble of algorithmic techniques applicable to low-power, highly distributed nodes with resource-optimal computations. In this chapter we take a close look at the algorithmic aspects of various important localization techniques for WSNs.
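One common geometric instance of anchor-based localization is trilateration: a non-anchor node that can measure its distance to three anchors recovers its 2-D position by linearizing the circle equations. The sketch below is illustrative only (the anchor coordinates and function name are our own, not from any specific algorithm surveyed in the chapter):

```python
import math

def trilaterate(anchors, dists):
    """Estimate a 2-D position from three anchor positions and their
    range measurements. Subtracting the first circle equation
    (x - xi)^2 + (y - yi)^2 = di^2 from the other two cancels the
    quadratic terms, leaving a 2x2 linear system A [x, y]^T = b."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    # Coefficients of the two linear equations.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    # Solve the 2x2 system by Cramer's rule (anchors must not be collinear).
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# A node at (1, 2) measuring exact ranges to three anchors:
x, y = trilaterate([(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)],
                   [math.sqrt(5), math.sqrt(13), math.sqrt(5)])
```

With noisy range measurements, the same linearization extends to more than three anchors via least squares, which is the form most statistical and optimization-based schemes in the literature build on.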
This work develops a method for SLAM using semantics, based on FastSLAM 2.0. Our approach to semantic mapping consists of segmenting images obtained from two sensors (optical and radar) aboard a UAV. We then identify landmarks within the segmented image, followed by the construction of relational trees from the landmarks; these trees are then used at consecutive time-steps of the robot's motion for its localization as well as for updating the landmarks. The term semantics refers to region-landmarks, which are validated against a look-up table (LUT) of predefined surface-type information, a superset of the robot's actual environment. Finally, based on particle filters, the posterior density of the robot's state is estimated and a 2-D semantic map is constructed. The methodology has been tested in a setting wherein the robot's true environment and path are simulated. For the simulation we use satellite images of the robot's environment from optical and radar sensors. At different time-steps the robot's images are cropped from these images, incorporating errors in the robot's control information. Experiments carried out on the simulated environment have provided encouraging results.
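The particle-filter backbone of such an approach is a predict-weight-resample cycle over hypotheses of the robot's pose. The sketch below is a generic single-landmark, range-only illustration of that cycle, not the FastSLAM 2.0 formulation used in the paper (which additionally maintains per-particle landmark estimates); all names and noise levels are assumptions:

```python
import math
import random

def particle_filter_step(particles, control, measurement, landmark,
                         motion_noise=0.1, meas_noise=0.5):
    """One predict-weight-resample cycle tracking a robot's 2-D
    position from a range measurement to a known landmark."""
    # Predict: propagate each particle with the noisy control (dx, dy).
    dx, dy = control
    moved = [(x + dx + random.gauss(0, motion_noise),
              y + dy + random.gauss(0, motion_noise))
             for x, y in particles]
    # Weight: Gaussian likelihood of the observed range to the landmark.
    lx, ly = landmark
    weights = []
    for x, y in moved:
        err = measurement - math.hypot(lx - x, ly - y)
        weights.append(math.exp(-0.5 * (err / meas_noise) ** 2))
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportionally to the weights;
    # the resulting set approximates the posterior density of the state.
    return random.choices(moved, weights=weights, k=len(moved))
```

In a full SLAM loop, each resampled particle would also carry its own map hypothesis (here, the validated region-landmarks), updated from the same segmented observations.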