In the steel industry, slabs are manufactured with different amounts of alloying elements according to production purposes or final products. Because slabs have similar shapes, product identification is required to prevent inadequate production processes. In many steel mills, paint marking systems are widely used to inscribe slab identification numbers (SINs). As smart factory technology has received increasing attention in recent years, automatic recognition of SINs has become more important for factory automation. The recognition of SINs is a challenging problem due to the complex backgrounds of factory scenes and the low quality of the characters in SINs. To address these difficulties, this paper proposes a deep learning algorithm for recognizing SINs in factory scenes. Most existing recognition algorithms conduct text detection and classification using separate modules, so errors from each step accumulate. The proposed algorithm employs a fully convolutional network (FCN) with deconvolution layers to integrate the recognition processes and improve both processing time and accuracy. The main contribution of this work is a novel type of ground-truth data (GTD) for training an FCN to recognize SINs in factory scenes. The relation between an input image and the corresponding GTD is learned directly in an image-to-image training manner, and the FCN generates a prediction map that contains categorical information for the individual pixels of the input image. Experiments were thoroughly conducted on industrial data collected from a steelworks to demonstrate the effectiveness of the proposed algorithm.
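A minimal sketch (not the paper's implementation) of the final decoding step such a pixel-wise approach implies: the FCN outputs a per-pixel class score map, and taking the argmax over the class channel yields the categorical label map. The class-index convention here (0 for background, higher indices for characters) is an assumption for illustration.

```python
import numpy as np

def decode_prediction_map(score_map: np.ndarray) -> np.ndarray:
    """Convert an (H, W, C) per-pixel class score map into an (H, W)
    categorical label map by taking the argmax over the class channel.
    Class 0 is assumed to be background; other classes could stand for
    SIN characters."""
    return np.argmax(score_map, axis=-1)

# Toy example: a 2x2 "image" with 3 classes.
scores = np.array([
    [[0.9, 0.05, 0.05], [0.1, 0.8, 0.1]],
    [[0.2, 0.3, 0.5],   [0.6, 0.2, 0.2]],
])
labels = decode_prediction_map(scores)
print(labels)  # [[0 1] [2 0]]
```

In a full pipeline, connected regions of identical non-background labels would then be grouped and ordered to read out the SIN string.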
Cone-beam computed tomography (CBCT) produces high-resolution images of hard tissue even at small voxel sizes, but the process involves radiation exposure and poor soft-tissue imaging. We therefore synthesized CBCT images from magnetic resonance imaging (MRI) using deep learning and assessed their clinical accuracy. We collected data from patients who underwent both CBCT and MRI at our institution (Seoul). The MRI data were registered with the CBCT data, and both were prepared as 512 slices of axial, sagittal, and coronal sections. A deep learning-based synthesis model was trained, and its output was evaluated by comparing the original and synthetic CBCT (syCBCT). According to expert evaluation, syCBCT images performed better on artifact and noise criteria but had poorer resolution than the original CBCT images. In syCBCT, hard tissue showed better clarity, with significantly different MAE and SSIM. These results could serve as a basis for replacing CBCT with non-radiation imaging, which would be helpful for patients planning to undergo both MRI and CBCT.
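The evaluation metrics named above can be illustrated with a short sketch. This is a simplified, global (non-windowed) SSIM rather than the sliding-window variant commonly used in practice, and the `data_range` default of 255 is an assumption about 8-bit image intensities:

```python
import numpy as np

def mae(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute error between two images of the same shape."""
    return float(np.mean(np.abs(a - b)))

def ssim_global(a: np.ndarray, b: np.ndarray, data_range: float = 255.0) -> float:
    """Simplified global SSIM computed over whole images (no local windows)."""
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM formula
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                 ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))

# Sanity check: an image compared with itself has MAE 0 and SSIM 1.
img = np.array([[10.0, 20.0], [30.0, 40.0]])
print(mae(img, img))          # 0.0
print(ssim_global(img, img))  # 1.0
```

Comparing syCBCT slices against the registered original CBCT slices with such metrics quantifies how faithfully hard-tissue intensities are reproduced.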
This study proposes an automated brittle fracture rate (BFR) estimator using deep learning. As the demand for line-pipes increases in various industries, the need for BFR estimation through the drop-weight tear test (DWTT) increases in order to evaluate the properties of steel. Conventional BFR or ductile fracture rate (DFR) estimation methods require an expensive 3D scanner. Alternatively, a rule-based approach can be used with a single charge-coupled device (CCD) camera; however, it is sensitive to hyper-parameter settings. To solve these problems, we propose an approach based on deep learning, which has recently been successful in the fields of computer vision and image processing. The method proposed in this study is the first to use a deep learning approach for BFR estimation. The proposed method consists of a VGG-based U-Net (VU-Net), inspired by U-Net and the fully convolutional network (FCN). VU-Net includes a deep encoder and a decoder. The encoder is adopted from VGG19 and initialized with a model pre-trained on ImageNet. The structure of the decoder mirrors that of the encoder, and the decoder uses the encoder's feature maps through concatenation operations to compensate for the reduced spatial information. To analyze the proposed VU-Net, we experimented with different network depths and various transfer learning approaches. Using the accuracy metric employed in real industrial applications, we compared the proposed VU-Net with U-Net and FCN to evaluate its performance. The experiments showed that VU-Net achieved an accuracy of approximately 94.9%, outperforming the other two, which achieved accuracies of about 91.8% and 93.7%, respectively.
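Once a segmentation network such as VU-Net has labeled the fracture surface, the BFR itself reduces to a pixel ratio. The sketch below assumes a hypothetical label convention (0 = background, 1 = ductile, 2 = brittle), which is not specified in the abstract:

```python
import numpy as np

def brittle_fracture_rate(mask: np.ndarray) -> float:
    """Compute BFR (%) from a per-pixel label mask.
    Assumed labels: 0 = background, 1 = ductile fracture, 2 = brittle fracture."""
    fracture_pixels = int(np.isin(mask, (1, 2)).sum())  # all fracture-surface pixels
    brittle_pixels = int((mask == 2).sum())
    if fracture_pixels == 0:
        return 0.0
    return 100.0 * brittle_pixels / fracture_pixels

# Toy mask: 1 background pixel, 1 ductile pixel, 2 brittle pixels.
mask = np.array([[0, 1], [2, 2]])
bfr = brittle_fracture_rate(mask)
print(round(bfr, 2))  # 66.67
```

By the same convention, DFR would be the complementary ratio of ductile pixels to fracture-surface pixels.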
In the steel industry, product number recognition is necessary for factory automation. Before final production, the billet identification number (BIN) must be checked to prevent mixing billets of different materials. There are two types of BINs, namely, paint-type and sticker-type. A BIN comprises seven to nine alphanumeric characters, excluding the letters I and O, and may be rotated in various directions. Therefore, for proper recognition and accident prevention, an end-to-end BIN recognition system based on deep learning is proposed. Specifically, interpretation and sticker extraction modules are developed, and a fully convolutional network (FCN) with deconvolution layers is used and optimized. To increase the BIN recognition accuracy, the FCN was tested with various structures and initialized from a pre-trained model. The BIN is identified by the trained FCN model and the interpretation module. If the BIN is sticker-type, it is inferred after the sticker region is extracted by the sticker extraction module. The accuracy of the proposed system was approximately 99.59% over an eight-day period.
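The stated BIN format (seven to nine alphanumeric characters, excluding the letters I and O) lends itself to a simple validity check that an interpretation module could apply to recognized strings. This is an illustrative sketch of that format constraint, not the paper's module:

```python
import re

# Character class: digits plus uppercase letters with I and O removed,
# repeated 7 to 9 times, per the stated BIN format.
BIN_PATTERN = re.compile(r"[0-9A-HJ-NP-Z]{7,9}")

def is_valid_bin(text: str) -> bool:
    """Return True if `text` matches the stated BIN format."""
    return bool(BIN_PATTERN.fullmatch(text))

print(is_valid_bin("AB12345"))   # True  (7 characters, no I or O)
print(is_valid_bin("ABI2345"))   # False (contains excluded letter I)
print(is_valid_bin("A1"))        # False (too short)
```

Excluding I and O avoids confusion with the digits 1 and 0, so rejecting them at validation time also catches a common class of recognition errors.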