Data-driven approaches to grasping have advanced significantly in recent years, but they typically require large amounts of training data. To improve the efficiency of grasp data collection, this paper presents a novel grasp training system covering the whole pipeline from data collection to model inference. The system collects effective grasp samples with a corrective strategy guided by the antipodal grasp rule, and we design an affordance interpreter network to predict a pixelwise grasp affordance map. We define graspability, ungraspability, and background as the grasp affordances. The key advantage of our system is that the pixel-level affordance interpreter network, trained with only a small number of grasp samples under the antipodal rule, achieves strong performance on entirely unseen objects and backgrounds. Training samples are collected only in simulation. Extensive qualitative and quantitative experiments demonstrate the accuracy and robustness of the proposed approach. In real-world grasp experiments, we achieve a grasp success rate of 93% on a set of household items and 91% on a set of adversarial items using only about 6,300 simulated samples. We also achieve 87% accuracy in a cluttered scenario. Although the model is trained using only RGB images, it remains robust to changes in background texture and even reaches 94% accuracy on the set of adversarial objects, outperforming current state-of-the-art methods.
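To make the pixelwise affordance idea concrete, here is a minimal illustrative sketch (not the paper's code, and the function name and channel ordering are assumptions): given a per-pixel score map over the three affordance classes — graspable, ungraspable, background — the highest-scoring graspable pixel can be taken as a candidate grasp point.

```python
import numpy as np

def select_grasp_point(affordance_map: np.ndarray) -> tuple:
    """Pick the pixel with the highest 'graspable' score.

    affordance_map: (H, W, 3) array of per-pixel class scores,
    assumed channel order: 0 = graspable, 1 = ungraspable, 2 = background.
    """
    graspable = affordance_map[:, :, 0]
    flat_idx = int(np.argmax(graspable))                  # best pixel, flattened
    row, col = np.unravel_index(flat_idx, graspable.shape)
    return int(row), int(col)

# Toy map: everything background except one high-confidence graspable pixel.
demo = np.zeros((4, 5, 3))
demo[:, :, 2] = 1.0                 # background by default
demo[2, 3] = [0.9, 0.05, 0.05]      # graspable pixel at row 2, col 3
print(select_grasp_point(demo))     # -> (2, 3)
```

In a full system the selected pixel would still need to be mapped from image coordinates to a robot grasp pose, e.g. via camera calibration and depth.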
Drones, also known as unmanned aerial vehicles (UAVs), have drawn considerable attention in recent years. The quadcopter, one of the most popular drone types, has great potential in both industrial and academic fields: it can take off vertically and fly in any direction. Traditional drone research focuses mainly on mechanical structure and motion control, where the aircraft is either flown manually with a remote controller or follows a trajectory pre-programmed with specific algorithms. Consumer drones typically pair a mobile device with a remote controller to realize flight control and video transmission, so implementing different functions on the mobile device indirectly produces different drone behaviors. With the development of deep learning in computer vision, camera-equipped commercial drones can become far more intelligent and even achieve autonomous flight. In the past, running deep-learning-based algorithms on mobile devices was highly compute-intensive and time-consuming. This paper utilizes a novel real-time object detection method and deploys the deep learning model on a modern mobile device to realize autonomous object detection and object tracking for drones.
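The detect-and-track idea can be sketched as a simple control loop: the detector returns a bounding box for the target each frame, and the tracker converts the box's offset from the image center into steering commands. This is an illustrative sketch only (the function, gain value, and command convention are assumptions, not the paper's implementation):

```python
def track_command(bbox, frame_w, frame_h, gain=0.002):
    """Turn a detected bounding box into proportional steering commands.

    bbox: (x, y, w, h) in pixels, top-left origin.
    Returns (yaw_rate, climb_rate): positive yaw turns right,
    positive climb ascends, both sized to re-center the target.
    """
    x, y, w, h = bbox
    cx, cy = x + w / 2, y + h / 2          # box center
    err_x = cx - frame_w / 2               # + means target is right of center
    err_y = cy - frame_h / 2               # + means target is below center
    return gain * err_x, -gain * err_y     # yaw toward / climb toward target

# Target sits 100 px right of and 50 px above the center of a 1280x720 frame.
print(track_command((730, 300, 20, 20), 1280, 720))  # -> (0.2, 0.1)
```

A real controller would typically add damping (e.g. a PD or PID term) and a dead zone around the center to avoid oscillation, but the proportional form above captures the core of steering from detections.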