Conventionally, aerial manipulators used for inspection rely on the drone rotors to counteract shifts in the center of gravity (CoG), which significantly degrades flight performance. This paper presents the development of a self-balancing, lightweight, cable-driven aerial manipulator for construction inspection. The design is based on a 3D-printed, three-degrees-of-freedom (DoF) planar cable manipulator mounted on an extended platform below it that acts as a counter-balance mechanism. The actuators drive the manipulator links through a cable system, allowing them to be mounted at the system base and thereby reducing the moving mass of the manipulator during operation. The counter-balance mechanism compensates for shifts in the system's CoG by actively sliding a counterweight, primarily the battery that powers the setup. The mechanism can be attached beneath an off-the-shelf quadrotor to mitigate the CoG shifts that arise during manipulator operation, when a payload or inspection tool is attached to the end effector to perform a given task. For construction integrity inspection, the aerial manipulator must remain stable while pushing or sliding on both flat and curved surfaces as the non-destructive tests are carried out. To validate the effectiveness of the proposed design, an experimental setup was used to compare the compensated and uncompensated tilt angles of the aerial manipulator. Significant tilt-angle reductions, averaging a 94.69% improvement, were observed across different manipulator motions and operation scenarios, owing to the active compensation of the CoG shift and the lightweight design of the system, without sacrificing the functionality of the manipulator for the task.
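The counter-balance idea above reduces to a static moment balance: if the manipulator (with payload) shifts the CoG by some horizontal distance from the base origin, the counterweight must slide so that the two moments cancel. The following is a minimal sketch of that balance only; the function name, masses, and one-dimensional simplification are illustrative assumptions, not the paper's controller.

```python
def counterweight_offset(m_manip, x_manip, m_cw):
    """Slide distance for the counterweight so the combined CoG stays at x = 0.

    Moment balance about the base origin (1-D simplification):
        m_cw * x_cw + m_manip * x_manip = 0
    All distances in metres, masses in kilograms.
    """
    if m_cw <= 0:
        raise ValueError("counterweight mass must be positive")
    return -(m_manip * x_manip) / m_cw


# Illustrative numbers (not from the paper): a 0.4 kg manipulator-plus-tool
# CoG shifted 0.15 m forward, balanced by a 0.8 kg battery.
x_cw = counterweight_offset(0.4, 0.15, 0.8)  # battery slides 0.075 m backward
```

A heavier counterweight needs less travel, which is one reason the battery, already the heaviest component on a small platform, is a natural choice for the sliding mass.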
Computing optical flow is a fundamental problem in computer vision. However, deep learning-based optical flow techniques do not perform well for non-rigid movements such as those found in faces, primarily due to a lack of training data representing fine facial motion. We hypothesize that learning optical flow on face motion data will improve the quality of predicted flow on faces. This work aims to: (1) explore self-supervised techniques to generate optical flow ground truth for face images; (2) compute baseline results on the effects of using face data to train Convolutional Neural Networks (CNNs) for predicting optical flow; and (3) use the learned optical flow in micro-expression recognition to demonstrate its effectiveness. We generate optical flow ground truth using facial keypoints in the BP4D-Spontaneous dataset. This optical flow is used to train the FlowNetS architecture, whose performance is tested on the Extended Cohn-Kanade dataset and a portion of the generated dataset. The performance of FlowNetS trained on face images surpassed that of other optical flow CNN architectures. Our optical flow features are further compared with other methods using the STSTNet micro-expression classifier, and the results indicate that the optical flow obtained in this work has promising applications in facial expression analysis.
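Turning sparse facial keypoint displacements into a dense flow ground truth requires spreading the per-keypoint motion vectors over the whole image. The abstract does not specify the interpolation scheme, so the sketch below uses simple inverse-distance weighting as a stand-in; the function name and parameters are hypothetical illustrations, not the paper's pipeline.

```python
import numpy as np

def dense_flow_from_keypoints(kp_src, kp_dst, height, width, eps=1e-6):
    """Spread sparse keypoint displacements into a dense (H, W, 2) flow field.

    kp_src, kp_dst: (N, 2) float arrays of (x, y) keypoint locations in two
    frames. Each pixel's flow is the inverse-distance-weighted average of the
    per-keypoint motion vectors (a simple stand-in for the paper's method).
    """
    disp = kp_dst - kp_src                               # (N, 2) motion vectors
    ys, xs = np.mgrid[0:height, 0:width]
    grid = np.stack([xs, ys], axis=-1).reshape(-1, 1, 2).astype(float)
    d2 = np.sum((grid - kp_src[None, :, :]) ** 2, axis=-1)  # (H*W, N) sq. dists
    w = 1.0 / (d2 + eps)                                 # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)                    # normalize per pixel
    flow = w @ disp                                      # (H*W, 2) weighted mean
    return flow.reshape(height, width, 2)
```

A dense field of this shape matches what a FlowNetS-style network predicts, so such pseudo ground truth can supervise the network with a standard endpoint-error loss.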