This paper introduces a new challenge problem: designing robotic systems that can recover after being disassembled by high-energy events, and presents a first implemented solution to a simplified version of the problem. The system uses vision-based localization for self-reassembly. The control architecture for the various states of the robot, from fully assembled to the modes for sequential docking, is explained, and inter-module communication details for the robotic system are described.

Mark Yim, Babak Shirmohammadi, Jimmy Sastra, Michael Park, Michael Dugan, and Camillo J. Taylor. Reprinted from Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2007), October 2007. Available at ScholarlyCommons: http://repository.upenn.edu/meam_papers/147
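The abstract describes the docking control architecture only at a high level. As a rough illustration of what a sequential-docking controller can look like, the sketch below uses a simple state machine driven by a vision-derived bearing and range to the target module; the state names, thresholds, and drive commands are assumptions for illustration, not the architecture from the paper.

```python
# Hypothetical sequential-docking state machine (states, thresholds, and drive
# commands are illustrative; they do not reproduce the paper's architecture).
from enum import Enum, auto

class DockState(Enum):
    SEARCH = auto()    # scan until the target module's beacon is seen
    APPROACH = auto()  # drive toward the target using the vision-derived bearing
    ALIGN = auto()     # fine alignment of the docking faces at close range
    DOCKED = auto()    # latched; control returns to the assembled-robot behaviors

def step(state, bearing_deg, range_m, latched):
    """One control tick: returns (next_state, drive_command)."""
    if state is DockState.SEARCH:
        return (DockState.APPROACH, "drive") if bearing_deg is not None else (DockState.SEARCH, "scan")
    if state is DockState.APPROACH:
        if range_m is not None and range_m < 0.10:   # assumed close-range threshold
            return DockState.ALIGN, "align"
        return DockState.APPROACH, "drive"
    if state is DockState.ALIGN:
        return (DockState.DOCKED, "stop") if latched else (DockState.ALIGN, "align")
    return DockState.DOCKED, "stop"

# Example tick: beacon seen at +12 degrees, 0.8 m away, not yet latched.
print(step(DockState.APPROACH, 12.0, 0.8, False))   # -> (DockState.APPROACH, 'drive')
```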
This paper describes a novel approach to localizing networks of embedded cameras and sensors. In this scheme the cameras and the sensors are equipped with controllable light sources (either visible or infrared) which are used for signaling. Each camera node can then automatically determine the bearing to all of the nodes that are visible from its vantage point. By fusing these measurements with the measurements obtained from onboard accelerometers, the camera nodes are able to determine the relative positions and orientations of other nodes in the network. The method is dual to other network localization techniques in that it uses angular measurements derived from images rather than range measurements derived from time of flight or signal attenuation. The scheme can be implemented relatively easily with commonly available components and scales well since the localization calculations exploit the sparse structure of the system of measurements. Further, the method provides estimates of camera orientation which cannot be determined solely from range measurements. The localization technology can serve as a basic capability on which higher level applications can be built. The method could be used to automatically survey the locations of sensors of interest, to implement distributed surveillance systems or to analyze the structure of a scene based on the images obtained from multiple registered vantage points. It also provides a mechanism for integrating the imagery obtained from the cameras with the measurements obtained from distributed sensors.
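To make the measurement model concrete: each detected light source yields a bearing (a unit direction in the camera frame) rather than a range, and an onboard accelerometer supplies the gravity direction, which constrains the node's orientation. The sketch below is a minimal illustration of these two measurements under an assumed pinhole camera model with known intrinsics; the function names and numeric values are illustrative, not the paper's implementation.

```python
import numpy as np

def bearing_from_pixel(u, v, fx, fy, cx, cy):
    """Unit direction (in the camera frame) toward a detected light source,
    assuming a pinhole camera with known intrinsics (fx, fy, cx, cy)."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

def gravity_from_accel(accel):
    """Gravity direction in the node frame from a static accelerometer reading;
    this pins down two of the three orientation degrees of freedom."""
    g = np.asarray(accel, dtype=float)
    return g / np.linalg.norm(g)

# Example: a blinking node detected at pixel (412, 230) with assumed intrinsics.
b = bearing_from_pixel(412, 230, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
g = gravity_from_accel([0.1, -9.7, 0.4])
print("bearing:", b, "gravity direction:", g)
```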
In order to realize the goal of self assembling or self reconfiguring modular robots, the constituent modules in the system need to be able to gauge their position and orientation with respect to each other. This paper describes an approach to solving this localization problem by equipping each of the modules in the ensemble with a smart camera system. The paper describes one implementation of this scheme on a modular robotic system and discusses the results of a self assembly experiment.

In this paper we describe how self localization techniques originally developed for automatically localizing collections of distributed smart cameras [1] can be adapted to localize modular robotic components and, hence, to facilitate these types of self assembly operations. In the proposed scheme, each of the modular components is equipped with a smart camera system and a controllable light source. The modules use the lights to signal to each other and they determine their relative pose from the available image measurements.

Index Terms: Smart Cameras, Localization, Modular
This paper describes a novel decentralized target tracking scheme for distributed smart cameras. This approach is built on top of a distributed localization protocol which allows the smart camera nodes to automatically identify neighboring sensors with overlapping fields of regard and establish a communication graph which reflects how the nodes will interact to fuse measurements in the network. The new protocol distributes the detection and tracking problems evenly throughout the network accounting for sensor handoffs in a seamless manner. The approach also distributes knowledge about the state of tracked objects amongst the nodes in the network. This information can then be harvested through distributed queries which allow network participants to subscribe to different kinds of events that they may be interested in. The proposed scheme has been used to track targets in real time using a collection of custom designed smart camera nodes. Results from these experiments are presented.
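The abstract outlines a protocol rather than an implementation. As a minimal sketch of the general pattern it describes (per-node tracking, handoff between neighbors with overlapping fields of regard, and event subscription), the following class is illustrative; the message format, the neighbor list, and the visibility test are assumptions, not the paper's protocol.

```python
# Hypothetical per-node tracker with neighbor handoff and event subscriptions.
class SmartCameraNode:
    def __init__(self, node_id, neighbors, fov_check):
        self.node_id = node_id
        self.neighbors = neighbors   # nodes with overlapping fields of regard
        self.fov_check = fov_check   # callable: is a world point visible to this node?
        self.tracks = {}             # track_id -> latest state estimate
        self.subscribers = []        # callbacks registered by interested participants

    def subscribe(self, callback):
        """Let a network participant register interest in track events."""
        self.subscribers.append(callback)

    def update(self, track_id, position):
        """Fuse a new detection into a local track and publish the event."""
        self.tracks[track_id] = position
        for cb in self.subscribers:
            cb({"node": self.node_id, "track": track_id, "position": position})
        if not self.fov_check(position):   # target is leaving this node's field of regard
            self.handoff(track_id, position)

    def handoff(self, track_id, position):
        """Pass the track to whichever neighbor can still see the target."""
        for n in self.neighbors:
            if n.fov_check(position):
                n.tracks[track_id] = position
                break
        self.tracks.pop(track_id, None)
```

In this sketch, a distributed query amounts to subscribing a callback to the nodes of interest and collecting the events they publish.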
This paper presents the design of a robot that can traverse land, water, and quicksand-like mud. The robot is low cost and modular, allowing the replacement of a variety of arms suitable for many of the tasks associated with astrobiological exploration. An astrobiologist on a field study will spend most of the time walking around and exploring the site, looking for areas of interest that will be tested in situ or sampled for offsite testing. A robot replicating these tasks must be able to locomote in that terrain, sense the interesting features (or provide sensing for teleoperation), and perform a variety of manipulation tasks once an area of interest is reached. The configurations for this robot include tens of modules and can achieve astrobiological tasks such as amphibious locomotion, digging, core sampling, probing, liquid sampling, and exploration. This paper also presents results from the first experiments with this platform at Lake Tyrrell, a salt lake in Australia.