Unmanned Aerial Systems (UASs) have recently become a versatile platform for many civilian applications, including inspection, surveillance and mapping. Sense-and-Avoid systems are essential for the autonomous safe operation of these systems in non-segregated airspaces. Vision-based Sense-and-Avoid systems are preferred to other alternatives as their price, physical dimensions and weight are more suitable for small and medium-sized UASs, but obtaining real flight imagery of potential collision scenarios is difficult and dangerous, which complicates the development of vision-based detection and tracking algorithms. For this purpose, user-friendly software for synthetic imagery generation has been developed, allowing users to blend user-defined flight imagery of a simulated aircraft with real flight scenario images to produce realistic images with ground truth annotations. These are extremely useful for the development and benchmarking of vision-based detection and tracking algorithms at a much lower cost and risk. An image processing algorithm has also been developed for the automatic detection of occlusions caused by certain parts of the UAV which carries the camera. The detected occlusions can later be used by our software to simulate the occlusions due to the UAV that would appear in a real flight with the same camera setup. Additionally, this algorithm could be used to mask out pixels which do not contain relevant information about the scene, making the image search process more efficient. Finally, an application example of the imagery obtained with our software for the benchmarking of a state-of-the-art visual tracker is presented.
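The blending step described above can be sketched as a standard alpha composite of a rendered aircraft patch into a real background frame, with the detected self-occlusion mask suppressing aircraft pixels where the carrier UAV's own airframe would be in front. This is a minimal illustrative sketch, not the paper's actual software; all function and parameter names here are assumptions.

```python
import numpy as np

def blend_aircraft(background, aircraft_rgba, top_left, occlusion_mask=None):
    """Alpha-blend a rendered RGBA aircraft patch into a background frame.

    Returns the composited frame plus the ground-truth bounding box
    (x, y, w, h). `occlusion_mask` is an optional float array in [0, 1],
    the same size as the background, marking pixels where the carrier
    UAV occludes the scene (1 = fully occluded).
    Illustrative only -- names are not from the paper's software.
    """
    frame = background.astype(np.float32).copy()
    h, w = aircraft_rgba.shape[:2]
    x, y = top_left
    # Per-pixel opacity of the simulated aircraft.
    alpha = aircraft_rgba[..., 3:4].astype(np.float32) / 255.0
    if occlusion_mask is not None:
        # Zero out the aircraft where the carrier UAV's airframe is in front.
        alpha = alpha * (1.0 - occlusion_mask[y:y + h, x:x + w, None])
    roi = frame[y:y + h, x:x + w]
    frame[y:y + h, x:x + w] = alpha * aircraft_rgba[..., :3] + (1.0 - alpha) * roi
    return frame.astype(np.uint8), (x, y, w, h)
```

Because the aircraft's placement is chosen by the user, the bounding box comes for free, which is what makes the generated imagery directly usable as annotated ground truth for benchmarking detectors and trackers.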
A high proportion of hospital-acquired infections are nowadays transmitted during surgery despite existing asepsis preservation measures. These measures are quite drastic, prohibiting surgeons from interacting directly with non-sterile equipment; indirect control is presently achieved through an assistant or a nurse. Gesture-based Human-Computer Interfaces constitute a promising approach for giving surgeons direct control over such equipment. This paper introduces a novel hand descriptor based on measurements extracted from the convex and concave extrema of the hand contour. Using a 9750-picture database created especially for this purpose, it is compared with three state-of-the-art description methods, namely Hu moments, and both SIFT and HOG features. The effects of large hand rotations are also studied for each rotation axis independently. The results show HOG features to be the best at recognizing hands from our database, closely followed by the proposed descriptor. A performance comparison on rotated hands shows our descriptor to be the most robust to rotations, outperforming the other descriptors by a wide margin.
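The idea of a descriptor built from contour extrema can be sketched as follows: take the hand contour, extract its convex hull vertices (which, on a hand silhouette, tend to land on fingertips), and turn centroid-to-vertex distances into a normalized feature vector. This is a toy sketch under stated assumptions, not the paper's descriptor, which also uses concave extrema (finger valleys); all names are illustrative.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def _cross(o: Point, a: Point, b: Point) -> float:
    # z-component of (a - o) x (b - o); positive for a counter-clockwise turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points: List[Point]) -> List[Point]:
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower: List[Point] = []
    for p in pts:
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper: List[Point] = []
    for p in reversed(pts):
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def extrema_descriptor(contour: List[Point]) -> List[float]:
    """Toy descriptor: centroid-to-hull-vertex distances, sorted descending
    and normalized by the largest. Scale-invariant by construction; sorting
    removes sensitivity to in-plane rotation. Illustrative only."""
    cx = sum(p[0] for p in contour) / len(contour)
    cy = sum(p[1] for p in contour) / len(contour)
    dists = sorted(
        (((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5 for p in convex_hull(contour)),
        reverse=True,
    )
    if not dists or dists[0] == 0:
        return dists
    return [d / dists[0] for d in dists]
```

Normalizing and sorting the distances is one simple way to make such a descriptor invariant to scale and in-plane rotation, which is consistent with the rotation robustness reported for the proposed method.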