In the DARPA Robotics Challenge (DRC), participating human‐robot teams were required to integrate mobility, manipulation, perception, and operator interfaces to complete a simulated disaster mission. We describe our approach using the Atlas Unplugged humanoid robot developed by Boston Dynamics, focusing on our results and lessons learned from the DRC Finals. Our strategy included extensive operator practice, explicit monitoring for robot errors, additional sensing, and enabling the operator to control and monitor the robot at varying levels of abstraction. Our safety‐first strategy worked: we avoided falling, and remote operators could safely recover from difficult situations. We were the only team in the DRC Finals that attempted all tasks, scored points (14/16), did not require physical human intervention (a reset), and did not fall during the two missions over the two days of testing. We also had the most consistent pair of runs.
Person detection from vehicles has made rapid progress recently with the advent of multiple high‐quality datasets of urban and highway driving, yet no large‐scale benchmark is available for the same problem in off‐road or agricultural environments. Here we present the National Robotics Engineering Center (NREC) Agricultural Person‐Detection Dataset to spur research in these environments. It consists of labeled stereo video of people in orange and apple orchards taken from two perception platforms (a tractor and a pickup truck), along with vehicle position data from Real‐Time Kinematic (RTK) GPS. We define a benchmark on part of the dataset that combines a total of 76k labeled person images and 19k sampled person‐free images. The dataset highlights several key challenges of the domain, including varying environments, substantial occlusion by vegetation, people in motion and in nonstandard poses, and people seen from a variety of distances; metadata are included to allow targeted evaluation of each of these effects. Finally, we present baseline detection performance results for three leading approaches from urban pedestrian detection and our own convolutional neural network approach that benefits from the incorporation of additional image context. We show that the success of existing approaches on urban data does not transfer directly to this domain.