Purpose Current research on the aging of the bony orbit is usually performed manually, which is inefficient and error-prone. This paper presents automatic segmentation of the bony orbit based on deep learning, together with automatic calculation of parameters of the segmented orbital contour (bony orbital area and height). Methods Craniofacial CT scans of 595 Chinese subjects were three-dimensionally reconstructed and the craniofacial images were exported. Orbital contour images were obtained automatically with a UNet++ segmentation network, and the bony orbital area and height were then calculated automatically by connected component analysis. Results The automatic segmentation method achieved an Intersection over Union of 95.41% on craniofacial CT images. With aging, the bony orbital area of males increased while that of females decreased, and the area in males was larger than in females (P < 0.05). In males, the distance from equal points 10 and 40–90 to the supraorbital rim was significantly larger (P < 0.05), and, except for equal point 90, the distance from the equal points to the inferior orbital rim was also significantly larger (P < 0.05). In females, the distance from equal points 50–70 to the inferior orbital rim was significantly lower (P < 0.05). Conclusion The method proposed here can automatically and accurately analyze large-scale bony orbital CT image datasets. UNet++ achieves high-precision segmentation of bony orbital contours. The bony orbital area of Chinese subjects changes with aging, and the change in bony orbital height differs between males and females, which may be caused by differences in the position and degree of orbital bone resorption between males and females during aging.
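The measurement step described above — deriving area and height from a segmented binary mask via connected component analysis — can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the function name, the use of `scipy.ndimage.label`, and the pixel-spacing parameter are assumptions for the sake of the example.

```python
import numpy as np
from scipy import ndimage

def orbit_area_and_height(mask, pixel_spacing=1.0):
    """Measure the largest connected component of a binary orbit mask.

    Returns (area, height): area in squared units of pixel_spacing,
    height as the vertical extent of the component.
    """
    labeled, n = ndimage.label(mask)
    if n == 0:
        return 0.0, 0.0
    # Keep only the largest connected component (labels start at 1).
    sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
    largest = np.argmax(sizes) + 1
    component = labeled == largest
    area = component.sum() * pixel_spacing ** 2
    # Height = number of rows spanned by the component, scaled to mm.
    rows = np.where(component.any(axis=1))[0]
    height = (rows.max() - rows.min() + 1) * pixel_spacing
    return float(area), float(height)

# Toy example: a 4x3 rectangular "orbit" in a 10x10 mask,
# with a hypothetical pixel spacing of 0.5 mm.
mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 3:6] = True
area, height = orbit_area_and_height(mask, pixel_spacing=0.5)
print(area, height)  # 3.0 2.0
```

Selecting the largest component discards small spurious islands the segmentation network may produce, so the measurements reflect the orbital contour itself.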
Human gesture recognition is a popular topic in computer vision research, since it provides the technological basis for advancing human–computer interaction, virtual environments, smart surveillance, motion tracking, and other domains. Extraction of the human skeleton is a typical gesture recognition approach in existing technologies based on two-dimensional human pose detection. However, it cannot be overlooked that objects in the surrounding environment also carry information about human gestures. To semantically recognize the posture of the human body, the logic system presented in this research integrates the components recognized in the visual environment with the human skeletal pose. In principle, this can improve the precision of posture recognition and semantically represent people's actions. The paper thus suggests an approach for recognizing human gestures and increasing the amount of information obtained from image analysis, in order to enhance interaction between humans and computers.
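The idea of combining detected scene objects with skeletal pose can be illustrated with a simple proximity rule. This is a hypothetical sketch, not the paper's logic system: the keypoint names, bounding-box format, and the wrist-near-object rule are all illustrative assumptions.

```python
def point_in_box(point, box, margin=20):
    """True if a (x, y) point lies inside a (x1, y1, x2, y2) box,
    expanded by a pixel margin to tolerate detection noise."""
    x, y = point
    x1, y1, x2, y2 = box
    return x1 - margin <= x <= x2 + margin and y1 - margin <= y <= y2 + margin

def infer_action(skeleton, objects):
    """Return semantic labels like 'holding cup' whenever a wrist
    keypoint lies near a detected object's bounding box."""
    actions = []
    for side in ("left_wrist", "right_wrist"):
        wrist = skeleton.get(side)
        if wrist is None:
            continue
        for name, box in objects:
            if point_in_box(wrist, box):
                actions.append(f"holding {name}")
    return actions

# Toy inputs: skeleton keypoints from a pose detector and object
# detections as (label, bounding box) pairs -- both assumed formats.
skeleton = {"left_wrist": (105, 210), "right_wrist": (300, 400)}
objects = [("cup", (90, 190, 130, 230)), ("phone", (500, 500, 540, 540))]
print(infer_action(skeleton, objects))  # ['holding cup']
```

Even this crude rule shows how environmental context disambiguates a pose: the same arm position reads differently depending on which object the hand is near.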