Pose estimation is crucial for automating assembly tasks, yet achieving sufficient accuracy remains challenging and part-specific. This paper presents a novel, streamlined approach to pose estimation that facilitates the automation of assembly tasks. Our method employs deep learning on a limited number of annotated images to identify a set of keypoints on the parts of interest. To compensate for network shortcomings and enhance accuracy, we incorporate a Bayesian updating stage that leverages detailed knowledge of the assembly part design. This step refines the network output, significantly improving pose estimation accuracy. Specifically, we use a subset of higher-quality network-generated keypoint positions as measurements, while the remaining network outputs serve only as priors. The part geometry aids in constructing likelihood functions, which in turn yield enhanced posterior distributions of keypoint pixel positions. We then use the maximum a posteriori (MAP) estimates of the keypoint locations to obtain a final pose, allowing the nominal assembly trajectory to be updated. We evaluated our method on a 14-point snap-fit dash trim assembly for a Ford Mustang dashboard, demonstrating promising results. Our approach requires neither tailoring to new applications nor extensive machine learning expertise or large amounts of training data, making it a scalable and adaptable solution for the production floor.
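The core Bayesian step described above — treating a network-predicted keypoint position as a Gaussian prior and a geometry-derived position as a Gaussian likelihood — can be sketched as a standard conjugate update, in which case the MAP estimate coincides with the posterior mean. The function name, the specific variances, and the one-dimensional setting below are illustrative assumptions, not the paper's implementation:

```python
def bayesian_keypoint_update(prior_mean, prior_var, measurement, meas_var):
    """Gaussian conjugate update of a keypoint pixel coordinate.

    The network output serves as the prior; a geometry-derived position
    (e.g. implied by part design and the higher-quality keypoints)
    supplies the likelihood.  With Gaussian prior and likelihood, the
    MAP estimate equals the posterior mean.
    """
    post_var = 1.0 / (1.0 / prior_var + 1.0 / meas_var)
    post_mean = post_var * (prior_mean / prior_var + measurement / meas_var)
    return post_mean, post_var

# Hypothetical numbers: the network predicts a keypoint at x = 120 px
# with variance 25 px^2; the geometry-based likelihood is centered at
# x = 124 px with variance 4 px^2.  The posterior mean lands between
# the two, pulled toward the lower-variance (geometry) estimate.
mean, var = bayesian_keypoint_update(120.0, 25.0, 124.0, 4.0)
```

Because the posterior precision is the sum of the prior and measurement precisions, the fused variance is always smaller than either input variance, which is the sense in which the update "enhances" the keypoint distribution.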
Devices such as microelectronic packages require hermetic seals, as the reliability of such devices is extremely important. One possible failure mode is leakage resulting from
Abstract—A novel edge extraction method that employs an active defocusing technique is presented. The method is based on the principle that a Laplacian-of-Gaussian (LOG) operation can be approximated by a Difference-of-Gaussian (DOG) operation. While such an operation is usually performed in digital image processing, it can also be conducted very effectively through a combination of optical techniques and digital processing. In this edge extraction method, a focused image of an object in a scene is first acquired. The scene is then slightly defocused by changing the focal length of the camera. A real-time subtraction operation subtracts the defocused image from the previously acquired one, producing a residual image that emphasizes abrupt intensity variations. An objective evaluation, called an edge index, is performed on the resulting image. The amount of defocusing is carefully adjusted according to this measurement so that a desired edge image is generated. Boundaries of objects can then be obtained by further enhancement of the edge image. Since this edge detection method is an optical process aided by digital processing, it is fast and relatively inexpensive.