As a non-contact inspection approach, vision technology commonly handles positioning, measurement, and defect identification in industrial automation. However, traditional vision systems are expensive and often designed for only a single category of product. Furthermore, quantitative measurement tasks in industry usually require a tightly controlled imaging environment and dedicated hardware, which limits generalization. Hence, it is imperative to establish a robust approach that breaks the barriers of multi-type product inspection while reducing both system complexity and cost. This paper proposes an adaptive approach that inspects pin positions across multiple connector types. A joint strategy of deep neural networks and pattern matching, based on prior-knowledge registration, is constructed to achieve rapid positioning of sub-elements arranged in the target. Then, a hierarchical extraction method is designed to analyze features with varied appearances and improve the interference resistance of the vision system. A 3D version of the registration algorithm is embedded into the framework to determine abnormal positions in spatial data without a reference. The proposed algorithm successfully inspects 33 types of connectors and demonstrates strong measurement robustness and adaptability to target pose, imaging conditions, and feature diversity.
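The core of the pin-position check can be illustrated with a minimal sketch: match each detected pin center to its nearest nominal (registered) position and flag deviations beyond a tolerance. The function name, the tolerance value, and the nearest-neighbor matching are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def check_pin_positions(detected, nominal, tol=0.15):
    """Flag pins whose detected centers deviate from their nearest
    nominal position by more than `tol` (same units as coordinates)."""
    detected = np.asarray(detected, dtype=float)
    nominal = np.asarray(nominal, dtype=float)
    # Pairwise distances between every detected pin and every nominal position
    d = np.linalg.norm(detected[:, None, :] - nominal[None, :, :], axis=2)
    idx = d.argmin(axis=1)                      # nearest nominal pin per detection
    dev = d[np.arange(len(detected)), idx]      # deviation from that position
    return idx, dev, dev > tol                  # match index, deviation, abnormal flag

# 2x2 nominal pin grid; the third detected pin is displaced (e.g., bent)
nominal = [[0, 0], [1, 0], [0, 1], [1, 1]]
detected = [[0.02, -0.01], [1.0, 0.0], [0.3, 1.0], [1.01, 0.99]]
idx, dev, abnormal = check_pin_positions(detected, nominal)
```

In practice such a check would follow the registration step, so that detected and nominal coordinates share a common frame.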
Vision-based pose estimation is a basic task in many industrial fields such as bin-picking, autonomous assembly, and augmented reality. One of the most commonly used pose estimation pipelines first detects 2D pose keypoints in the input image and then calculates the 6D pose using a pose solver. Recently, deep learning has been widely applied to pose keypoint detection, achieving excellent accuracy and adaptability. However, it relies heavily on abundant, high-quality samples and supervision, which are particularly costly to obtain in industrial settings. Herein, a virtual-to-real knowledge transfer method for pose keypoint detection, based on domain adaptation and computer-aided design (CAD) models, is proposed to reduce the data cost of deep learning. To address the disorder of knowledge flow, a viewpoint-driven feature alignment strategy is proposed that simultaneously eliminates interdomain differences and preserves intradomain differences. The shape invariance of rigid objects is then introduced as a constraint to address the large assumption space in regressive domain adaptation. Multidimensional experimental results demonstrate the superiority of the method. Without real annotations, the normalized pixel error of keypoint detection is 0.033, and the proportion of keypoints with a normalized pixel error below 0.05 reaches 92.77%.
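The two reported metrics can be sketched as follows: the normalized pixel error divides the Euclidean keypoint error by a normalization length, and the second metric is the fraction of keypoints whose normalized error falls below a threshold. The choice of normalization length (here a bounding-box scale) is an assumption; the paper's exact definition may differ.

```python
import numpy as np

def keypoint_metrics(pred, gt, norm_len, thresh=0.05):
    """Mean normalized pixel error and fraction of keypoints
    whose normalized error is below `thresh`."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    # Euclidean error per keypoint, scaled by the normalization length
    err = np.linalg.norm(pred - gt, axis=-1) / norm_len
    return err.mean(), (err < thresh).mean()

# Predicted vs. ground-truth keypoints, normalized by a 100 px object scale
pred = [[10.0, 10.0], [52.0, 48.0], [100.0, 3.0]]
gt   = [[11.0, 10.0], [50.0, 50.0], [100.0, 0.0]]
mean_err, frac_ok = keypoint_metrics(pred, gt, norm_len=100.0)
```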
Foreign object debris (FOD) significantly impacts quality control during product assembly because it often causes product failure. Vision-based methods, being nondestructive and efficient, have become an important approach to FOD detection. However, FOD detection faces two key challenges: (1) inexhaustible types (almost any object can become FOD) and (2) unpredictable locations (FOD can appear almost anywhere on the surface of a product). Therefore, this paper proposes an FOD visual detection method based on a doubt–confirmation strategy aided by assembly models. First, a coarse-to-fine method is designed for feature extraction and registration to align the test image with the reference image. Then, to solve the unpredictable-location problem, different types of suspected FOD are extracted from the test image by a combination of supervised and unsupervised methods. Finally, to solve the inexhaustible-type problem, an image comparison method based on a Histogram of Line Direction Angle is proposed, and re-recognition rules for suspected FOD are established to complete the final discrimination. Experiments are conducted on a product with a complex shape, and the results demonstrate the effectiveness and efficiency of the approach.
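The idea of a histogram of line direction angles can be sketched as below: accumulate line-segment directions into angle bins (length-weighted here, which is an assumption) and compare two such histograms. Histogram intersection is used as the similarity measure purely for illustration; the paper's actual descriptor and comparison rule are not specified in the abstract.

```python
import numpy as np

def line_direction_histogram(segments, bins=18):
    """Length-weighted histogram of line direction angles in [0, 180)
    degrees, normalized to sum to 1. `segments` holds (x1, y1, x2, y2)."""
    seg = np.asarray(segments, float)
    dx, dy = seg[:, 2] - seg[:, 0], seg[:, 3] - seg[:, 1]
    ang = np.degrees(np.arctan2(dy, dx)) % 180.0   # lines are undirected
    length = np.hypot(dx, dy)
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=length)
    return hist / hist.sum()

def histogram_similarity(h1, h2):
    """Histogram intersection: 1.0 for identical normalized histograms."""
    return float(np.minimum(h1, h2).sum())

# Comparing a horizontal/vertical segment pair against itself gives 1.0
a = [(0, 0, 10, 0), (0, 0, 0, 10)]
sim = histogram_similarity(line_direction_histogram(a), line_direction_histogram(a))
```

In an FOD pipeline, a low similarity between the test-image and reference-image histograms at a suspected region would support the "confirm" decision.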