We propose a novel cloud-based precise positioning system that uses visual sensing data. Any mobile module with a vision sensor and wireless communication can act as a client and benefit from the system. When a client module takes a picture of an environment and uploads it to the server, it receives the shooting position with six degrees of freedom (6 DoF), with centimeter-order accuracy, within a couple of seconds. The server maintains a map of the environment and localizes the uploaded picture within that map. The contributions of this paper are threefold. First, we develop a new visual localization method using a 3D wireframe map. The method proceeds in three steps: (i) generation of an arbitrary perspective 2D image composed of line segments from the 3D wireframe map, (ii) gradient dilation of the line-segment image for effective image retrieval, and (iii) pixelwise-AND-based image-similarity evaluation using parallel computing. Second, we build 3D CAD models of an actual building from a 2D design drawing and manual measurements. Third, we experimentally evaluate our method using virtual sensing data.
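To make steps (ii) and (iii) concrete, the following is a minimal sketch of the idea in NumPy: a rendered line-segment image is dilated so that thin lines tolerate small pose errors, and similarity against a query edge image is scored as the pixelwise-AND overlap count. The function names, the square structuring element, and the dilation radius are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def dilate(img, radius=2):
    # Binary dilation with a square structuring element (hypothetical
    # choice; the paper's "gradient dilation" may differ in detail).
    h, w = img.shape
    out = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.zeros_like(img)
            dst_y = slice(max(dy, 0), h + min(dy, 0))
            dst_x = slice(max(dx, 0), w + min(dx, 0))
            src_y = slice(max(-dy, 0), h + min(-dy, 0))
            src_x = slice(max(-dx, 0), w + min(-dx, 0))
            shifted[dst_y, dst_x] = img[src_y, src_x]
            out |= shifted  # union of all shifted copies
    return out

def similarity(query_edges, rendered_lines, radius=2):
    # Pixelwise-AND overlap between the query edge image and the
    # dilated rendered line-segment image; higher means a better match.
    dilated = dilate(rendered_lines, radius)
    return int(np.sum(query_edges & dilated))
```

In practice this score would be evaluated for many candidate 6-DoF poses rendered from the wireframe map, which is why the paper computes the pixelwise AND with parallel hardware.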