Aiming at the problems of low efficiency and low accuracy in manual detection of the winding angle and wire spacing during automatic winding of high-voltage primary coils for transmission and distribution transformers, a detection scheme based on machine vision is proposed. First, the coil image is acquired by an industrial camera, the detection region is segmented, and the ROI (region of interest) image is preprocessed. For winding angle detection, a slicing method for image graying is proposed to reduce the interference caused by uneven illumination. The gray image is converted to a binary image and the wire skeleton is extracted; feature straight lines are identified in the skeleton using the Hough transform, and the winding angle is then calculated. For wire spacing detection, an intersection-of-perpendicular-lines method is proposed, which extracts edge coordinates from contour images and performs endpoint pixel expansion and shape classification. The intersections of the perpendicular lines are used to determine the centroid coordinates of each wire outline, the pixel distance between adjacent centroids is calculated, and the wire spacing is obtained by applying the pixel calibration. Comparison experiments show that the scheme achieves a high detection accuracy (0.01 mm) and that the error of the integrated detection results is no higher than 10%, enabling real-time detection of the coil winding status so that the winding process can be corrected according to the visual detection results, improving the quality of the finished coils.
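Once the feature line and the wire-contour centroids are available, the angle and spacing calculations described above reduce to simple geometry. A minimal Python sketch (function names and the mm-per-pixel calibration value are illustrative assumptions, not taken from the paper):

```python
import math

def winding_angle_deg(x1, y1, x2, y2):
    # Angle of a detected feature line (e.g. a line returned by a Hough
    # transform on the wire skeleton) relative to the horizontal axis.
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

def wire_spacing_mm(centroids, mm_per_pixel):
    # Pixel distance between adjacent wire-contour centroids, converted
    # to millimetres via the pixel calibration factor (hypothetical value).
    spacings = []
    for (xa, ya), (xb, yb) in zip(centroids, centroids[1:]):
        d_px = math.hypot(xb - xa, yb - ya)
        spacings.append(d_px * mm_per_pixel)
    return spacings
```

For example, with an assumed calibration of 0.05 mm per pixel, adjacent centroids 30 px apart correspond to a wire spacing of 1.5 mm.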
In the production of winding coils for power transformers, the tilt angle of the winding must be detected, as it is one of the important parameters affecting the physical performance of the transformer. The current detection method is manual measurement with a contact angle ruler, which is not only time-consuming but also introduces large errors. To solve this problem, this paper adopts a contactless measurement method based on machine vision. First, a camera captures the winding image, which is then given a 0° correction and preprocessed, with binarization performed by the OTSU method. An image self-segmentation and splicing method is proposed to obtain a single-wire image, from which the skeleton is extracted. Second, three angle detection methods are compared: the improved interval rotation projection method, the quadratic iterative least squares method, and the Hough transform method; their accuracy and running speed are evaluated experimentally. The results show that the Hough transform method has the fastest running speed, completing detection in an average of only 0.1 s, while the interval rotation projection method has the highest accuracy, with a maximum error below 0.15°. Finally, visualization detection software is designed and implemented, which can replace manual detection while maintaining high accuracy and running speed.
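The Hough transform step common to both abstracts can be illustrated with a small pure-Python voting sketch over skeleton pixels. This is an illustrative re-implementation under simplified assumptions (coarse 1° angle bins, a dictionary accumulator), not the papers' code:

```python
import math

def hough_dominant_angle(points, angle_step_deg=1.0, rho_step=1.0):
    # Minimal Hough-transform sketch: each foreground (x, y) skeleton pixel
    # votes in (theta, rho) space; the strongest accumulator cell gives the
    # dominant line. Returns the line's angle from the x-axis in degrees,
    # in the range [-90, 90).
    n_theta = int(180 / angle_step_deg)
    votes = {}
    for x, y in points:
        for ti in range(n_theta):
            theta = math.radians(ti * angle_step_deg)
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (ti, round(rho / rho_step))
            votes[key] = votes.get(key, 0) + 1
    (best_ti, _), _ = max(votes.items(), key=lambda kv: kv[1])
    # theta parameterizes the line's normal; the line itself lies at theta - 90.
    return best_ti * angle_step_deg - 90.0
```

A horizontal row of skeleton pixels yields 0°, and pixels along the diagonal y = x yield 45°; in a real pipeline the angle resolution is set by `angle_step_deg`, trading accuracy against the running speed the second abstract measures.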