Digital image processing serves as a versatile tool for measurement and positioning tasks in robotics. This paper describes the development of a camera-based positioning system for quadrocopters that automates their landing process. A quadrocopter equipped with classical radio-control components is upgraded with suitable hardware, namely a Raspberry Pi 3B+ and a wide-angle camera. Black-box system identifications are then performed to obtain the relevant plants of the attitude control executed by the flight controller. On this basis, a PID controller for altitude, including a back-calculation anti-windup, and two PD controllers for the horizontal plane are designed using a pole placement method. The controller gains are validated by simulating each closed loop. Since the camera functions as the position sensor, an image processing algorithm is implemented to detect a distinctive landing symbol in real time and convert its image position into suitable feedback errors (pixel-to-physical distance conversion). Ultimately, the developed system achieves robust detection of, and successful landing on, the landing spot with a position control loop operating at 20 Hz.
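The back-calculation anti-windup mentioned for the altitude controller can be sketched as follows; the gains, limits, and sample time are illustrative placeholders, not the paper's identified values.

```python
# Sketch of a PID controller with back-calculation anti-windup.
# All numeric values below are hypothetical, chosen only for illustration.

def make_pid(kp, ki, kd, kb, u_min, u_max, dt):
    """Return a stateful PID step function with back-calculation anti-windup."""
    state = {"integral": 0.0, "prev_err": 0.0}

    def step(err):
        derivative = (err - state["prev_err"]) / dt
        u_raw = kp * err + ki * state["integral"] + kd * derivative
        u_sat = min(max(u_raw, u_min), u_max)
        # Back-calculation: feed the saturation excess (u_sat - u_raw) back
        # into the integrator so it unwinds instead of accumulating.
        state["integral"] += (err + kb * (u_sat - u_raw)) * dt
        state["prev_err"] = err
        return u_sat

    return step

pid = make_pid(kp=1.2, ki=0.4, kd=0.05, kb=1.0, u_min=-1.0, u_max=1.0, dt=0.05)
u = pid(0.5)  # altitude error of 0.5 m; output saturates at the upper limit
```

At a 20 Hz control rate, `dt` would be 0.05 s as shown; the anti-windup gain `kb` determines how quickly the integrator unwinds while the actuator is saturated.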
Recent development of deep convolutional neural networks (DCNNs) has been devoted to creating slim models for devices with limited resources, such as embedded, mobile, or microcomputer hardware. A slim model is achieved by minimizing computational complexity, which in theory also shortens processing time. Our focus is therefore to build an architecture with a minimal number of floating-point operations (FLOPs). In this work, we propose a small and slim architecture that is compared against state-of-the-art models. The architecture is implemented in two models, CustomNet and CustomNet2. Each model uses three convolutional blocks, which reduce computational complexity while maintaining accuracy competitive with state-of-the-art DCNN models. The models are trained on ImageNet, CIFAR-10, CIFAR-100, and other datasets, and the results are compared in terms of accuracy, complexity, model size, processing time, and number of trainable parameters. We find that one of our models, CustomNet2, outperforms MobileNet, MobileNet-v2, DenseNet, and NASNetMobile in accuracy, trainable parameters, and complexity. In future work, this architecture can be adapted to region-based DCNNs for multiple-object detection.
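The FLOP count that drives this kind of complexity comparison can be estimated per layer; a minimal sketch for a standard 2-D convolution is shown below, with layer shapes that are illustrative rather than those of CustomNet or CustomNet2.

```python
# Rough FLOP count (multiply-adds counted as 2 ops) for one standard
# k x k 2-D convolution. Shapes below are hypothetical examples.

def conv2d_flops(h_out, w_out, c_in, c_out, k):
    """FLOPs to produce an h_out x w_out x c_out map from c_in input channels."""
    return 2 * h_out * w_out * c_out * (k * k * c_in)

# Example: a 3x3 convolution on a 32x32 map, 64 channels in and out
flops = conv2d_flops(32, 32, 64, 64, 3)
```

Summing this quantity over all layers gives the architecture-level FLOP totals used when ranking models by complexity.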
This paper describes a human search system that uses an unmanned aerial vehicle (UAV). Robots that search for people are expected to become an auxiliary tool for saving lives during disasters. In particular, because UAVs can collect information from the air, there has been much research into human search using camera-equipped UAVs. However, cameras struggle to detect people who are hidden in shadows. To address this problem, we mounted an array microphone on a UAV to detect human voices and thereby find people that cameras cannot. In addition, we propose a search method that combines voice-based and camera-based human detection to compensate for their respective shortcomings. The detection rate and accuracy of the proposed method are assessed experimentally.
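One generic way to combine two such detectors is probabilistic OR-fusion over search cells; the sketch below assumes each sensor reports a per-cell detection probability, and the noisy-OR rule here is a common fusion baseline, not necessarily the paper's specific method.

```python
# Minimal sketch of fusing camera and microphone-array detections.
# Assumes each sensor outputs a detection probability per search cell;
# the noisy-OR rule is a generic illustration, not the paper's algorithm.

def fuse_detections(p_camera, p_voice):
    """Noisy-OR fusion: a cell scores high if either sensor is confident."""
    return [1.0 - (1.0 - pc) * (1.0 - pv) for pc, pv in zip(p_camera, p_voice)]

p_cam = [0.9, 0.1, 0.0]  # hypothetical: person visible in cell 0
p_mic = [0.2, 0.8, 0.0]  # hypothetical: voice localized to cell 1 (in shadow)
fused = fuse_detections(p_cam, p_mic)
```

The fusion keeps cell 1 detectable even though the camera alone misses it, which is exactly the complementarity the abstract describes.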