In the pork fat content detection task, traditional physical and chemical methods are destructive, technically demanding, and cannot measure fat content without slaughtering the animal. To solve these problems, we propose a novel, convenient, and economical method for detecting fat content from pig B-ultrasound images based on hybrid attention and multiscale fusion learning, which extracts and fuses shallow detail information and deep semantic information at multiple scales. First, a deep learning network is constructed to learn the salient features of fat images through a hybrid attention mechanism. Then, information describing pork fat is extracted at multiple scales, and the detailed information expressed in the shallow layers is fused with the semantic information expressed in the deep layers. Finally, a deep convolutional network predicts the fat content, which is compared against the ground-truth label. The experimental results show that the coefficient of determination exceeds 0.95 on the 130 groups of pork B-ultrasound image data, which is 2.90, 6.10, and 5.13 percentage points higher than VGGNet, ResNet, and DenseNet, respectively. This indicates that the model can effectively identify pig B-ultrasound images and predict fat content with high accuracy.
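The abstract does not give the network's exact layers, but the two ingredients it names — a hybrid (channel + spatial) attention mechanism and fusion of shallow and deep feature maps — can be sketched minimally. The function names, the sigmoid gating, and the nearest-neighbor upsampling below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    # x: (C, H, W). Re-weight each channel by a gate computed
    # from its global average response (squeeze-and-excite style).
    gate = _sigmoid(x.mean(axis=(1, 2)))          # shape (C,)
    return x * gate[:, None, None]

def spatial_attention(x):
    # Re-weight each spatial location by a gate computed
    # from the cross-channel mean at that location.
    gate = _sigmoid(x.mean(axis=0))               # shape (H, W)
    return x * gate[None, :, :]

def hybrid_attention(x):
    # "Hybrid" here means channel attention followed by spatial attention.
    return spatial_attention(channel_attention(x))

def multiscale_fuse(shallow, deep):
    # shallow: (C1, H, W) detail features; deep: (C2, H/2, W/2) semantic
    # features. Upsample the deep map to the shallow resolution by
    # nearest-neighbor repetition, then concatenate along channels.
    up = deep.repeat(2, axis=1).repeat(2, axis=2)
    return np.concatenate([shallow, up], axis=0)
```

The fused tensor would then feed the convolutional regression head that outputs the fat-content prediction.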
Introduction: Aiming at the problems of low accuracy in estimating the rotation angle of circular image data over a wide range (0°–360°) and the difficulty of blind detection without a reference image, a method based on an ensemble transfer regression network, fused HOG features, and Rotate Loss is adopted.

Methods: The proposed Rotate Loss addresses the angle prediction error, especially the huge error that arises near 0°. Fused HOG features are used to extract directional information. Feature learning is then conducted by the ensemble transfer regression model, which combines a feature extractor with ensemble regressors to estimate an exact rotation angle. Based on miniImageNet and Minist, we built the circular random rotation dataset Circular-ImageNet and the random rotation dataset Rot-Minist, respectively.

Results: For the proposed evaluation index MSE_Rotate, the best single regressor reached 28.79 on the training set of Circular-ImageNet and 2686.09 on the validation set. On the test set, MSE_Rotate, MSE, MAE, and RMSE were 1702.4325, 0.0263, 0.0881, and 0.1621, respectively; under the ensemble transfer regression network, these errors decreased by a further 15%. The mean error rate on Rot-Minist was as low as 0.59%, a substantially lower error over the full angular range than other recent networks. Based on the ensemble transfer regression model, we also implemented blind image righting, i.e., correcting a rotated image without a reference.
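The "huge error near 0°" that Rotate Loss targets is the wraparound problem: a prediction of 359° for a true angle of 1° is only 2° wrong, but a naive squared error treats it as 358° wrong. The abstract does not give the exact loss formula, so the following is only a minimal sketch of that circular-distance idea:

```python
import numpy as np

def rotate_loss(pred_deg, true_deg):
    """Mean squared *circular* angle error in degrees.

    The gap between 359 deg and 1 deg is 2 deg, not 358 deg:
    take the difference modulo 360 and fold it into [0, 180].
    """
    diff = np.abs(np.asarray(pred_deg, dtype=float)
                  - np.asarray(true_deg, dtype=float)) % 360.0
    diff = np.minimum(diff, 360.0 - diff)
    return float(np.mean(diff ** 2))
```

With this metric a near-0° prediction is penalized by its true angular distance, which is what allows a regressor to be trained stably over the full 0°–360° range.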