We propose a lightweight neural-network-based method to detect estrus behavior in ewes, addressing the difficulty of detecting ewe estrus in a timely and accurate manner on large-scale meat sheep farms. The methodology has three main steps: constructing the dataset, improving the network structure, and detecting ewe estrus behavior with the lightweight network. First, the dataset was constructed by capturing images from videos showing estrus mounting behavior, and data augmentation was applied to improve the generalization ability of the model. Second, the original Darknet-53 backbone in the YOLO v3 network was replaced with EfficientNet-B0 for feature extraction, making the model lighter and easier to deploy and thus shortening detection time. To further improve the accuracy of detecting ewe estrus behavior, we added a SENet attention module to the feature layers. Finally, comparative experiments demonstrated that the proposed method achieved higher detection accuracy and FPS, as well as a smaller model size, than YOLO v3: precision was 99.44%, recall 95.54%, F1 value 97%, AP 99.78%, FPS 48.39 f/s, and model size 40.6 MB. This study thus provides an accurate, efficient, and lightweight detection method for ewe estrus behavior in large-scale mutton sheep breeding.
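The squeeze-and-excitation (SENet) gating added to the feature layers can be sketched as follows. This is a minimal NumPy illustration of the general SE mechanism, not the paper's implementation; the function name, weight shapes, and reduction ratio are assumptions.

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Apply a squeeze-and-excitation gate to a (C, H, W) feature map.

    w1: (C//r, C) channel-reduction weights, w2: (C, C//r) expansion
    weights, where r is the (assumed) reduction ratio.
    """
    # Squeeze: global average pool over the spatial dims -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excite: bottleneck MLP, ReLU then sigmoid, giving a per-channel gate in (0, 1)
    s = np.maximum(w1 @ z, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))
    # Rescale each channel of the feature map by its learned gate
    return feature_map * gate[:, None, None]
```

Because the sigmoid gate lies in (0, 1), the module can only attenuate channels, letting the network emphasize informative ones relative to the rest.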
Estrus detection in ewes on large-scale meat sheep farms faces two main problems: manual detection is labor-intensive, and contact-sensor detection causes stress reactions in ewes. To solve these problems, we propose a multi-detection-layer neural-network-based method for recognizing ewe estrus mounting behavior. The approach has four main parts. First, to address the mismatch between our constructed ewe estrus dataset and the YOLO v3 anchor box sizes, we obtained new anchor box sizes by clustering the dataset with the K-means++ algorithm. Second, to address the low recognition precision caused by the small image size of distant ewes in the dataset, we added a 104 × 104 target detection layer, bringing the total number of detection layers to four, strengthening the model's ability to learn shallow information and to detect small targets. Third, we added residual units to the model's residual structure so that deep feature information is not easily lost and can be further fused with shallow feature information, speeding up training. Finally, we preserved the aspect ratio of images in the model's data-loading module to reduce distortion of image information and increase precision. Experimental results show that the proposed model achieves 98.56% recognition precision, 98.04% recall, an F1 value of 98%, 99.78% mAP, an FPS of 41 f/s, and a model size of 276 MB, meeting the requirements for accurate, real-time recognition of ewe estrus behavior in large-scale meat sheep farming.
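The anchor-box clustering step described above can be sketched as follows. This is a minimal, stdlib-only illustration of K-means++ seeding followed by Lloyd iterations using the 1 − IoU distance commonly used for YOLO anchor clustering; the function names, iteration cap, and seeding details are assumptions, not the paper's code.

```python
import random

def iou(box, anchor):
    # Boxes are (w, h) pairs; IoU with top-left corners aligned,
    # the standard shape-only metric for anchor clustering.
    inter = min(box[0], anchor[0]) * min(box[1], anchor[1])
    union = box[0] * box[1] + anchor[0] * anchor[1] - inter
    return inter / union

def kmeans_pp_anchors(boxes, k, seed=0, iters=50):
    rng = random.Random(seed)
    # K-means++ seeding: first center uniform at random, later centers
    # chosen with probability proportional to the squared distance (1 - IoU)^2.
    centers = [rng.choice(boxes)]
    while len(centers) < k:
        dists = [min((1 - iou(b, c)) ** 2 for c in centers) for b in boxes]
        r = rng.uniform(0, sum(dists))
        acc = 0.0
        for b, d in zip(boxes, dists):
            acc += d
            if acc >= r:
                centers.append(b)
                break
        else:
            centers.append(boxes[-1])
    # Lloyd iterations: assign each box to its highest-IoU center,
    # then recompute each center as the mean (w, h) of its cluster.
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:
            i = max(range(k), key=lambda j: iou(b, centers[j]))
            clusters[i].append(b)
        new_centers = [
            (sum(b[0] for b in cl) / len(cl), sum(b[1] for b in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
        if new_centers == centers:
            break
        centers = new_centers
    return sorted(centers)
```

Running this over the (w, h) pairs of the labeled boxes yields anchors matched to the dataset's actual object shapes rather than the COCO-derived defaults.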
To solve the problems of high labor intensity, low efficiency, and frequent errors in the manual identification of cone yarn types, this study took five kinds of cone yarn as research objects and proposed an identification method based on an improved Faster R-CNN model. In total, 2750 images of cone yarn samples were collected in real textile-industry environments, and data augmentation was performed after the targets were annotated. The ResNet50 model, with its strong representation ability, replaced the VGG16 backbone in the original Faster R-CNN model to extract features from the cone yarn dataset. Training was performed with stochastic gradient descent to obtain an optimal weight file for predicting cone yarn categories. Using the same training samples and environment settings, we compared the proposed method with two mainstream object detection algorithms, YOLOv3 + DarkNet-53 and Faster R-CNN + VGG16. The results showed that Faster R-CNN + ResNet50 achieved the highest mean average precision for the five types of cone yarn, at 99.95%: 2.24% higher than that of YOLOv3 + DarkNet-53 and 1.19% higher than that of Faster R-CNN + VGG16. For cone yarn with defects, occlusion, or wear, the Faster R-CNN + ResNet50 algorithm identified the yarn correctly without misdetection, with an average precision above 99.91%.
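The mean average precision used to compare the three detectors averages a per-class average precision (AP) over the five yarn classes. A minimal sketch of one common uninterpolated AP computation is below; the paper's exact evaluation protocol (e.g., VOC-style interpolation) is not stated, so this is illustrative only and the function names are assumptions.

```python
def average_precision(scores, labels):
    """Area under the precision-recall curve for one class.

    scores: detection confidences; labels: 1 if the detection matched
    a ground-truth box, else 0. Uninterpolated rectangle rule.
    """
    total_pos = sum(labels)
    if total_pos == 0:
        return 0.0
    # Sweep detections from most to least confident, accumulating
    # precision * (increase in recall) at each true positive.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    ap = 0.0
    prev_recall = 0.0
    for i in order:
        if labels[i]:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / total_pos
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap

def mean_average_precision(per_class_aps):
    """mAP: the mean of per-class APs, e.g. over the five yarn classes."""
    return sum(per_class_aps) / len(per_class_aps)
```

A detector that ranks every true match above every false positive for a class scores AP = 1.0 for that class; errors near the top of the ranking cost the most.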