Purpose: To evaluate retinal damage resulting from craniomaxillofacial trauma and to explain its pathogenic mechanism using finite element (FE) simulation.

Methods: Computed tomography (CT) images of an adult man were used to construct an FE skull model. An FE skin model was built to cover the outer surface of the skull model. A previously validated FE right-eye model was mirrored to create an FE left-eye model, and both eye models were assembled onto the skull model. An orbital fat model was developed to fill the space between the eye models and the skull model. Simulations of a ball-shaped object striking the frontal bone, temporal bone, brow, and cheekbones were performed, and the resulting absorption of impact energy, intraocular pressure (IOP), and strains on the macula and ora serrata were analyzed to evaluate retinal injuries.

Results: Strain was concentrated in the macular regions of both eyes (0.18 on average) when the frontal bone was struck. When the temporal bone was struck, the peak macular strain in the struck-side eye was more than 100% higher than that in the fellow eye, whereas the difference between the two eyes was less than 10% when the brow and cheekbones were struck. Correlation analysis showed that the retinal strain time histories were highly correlated with the IOP time histories (r > 0.8, P < 0.001 in all simulation cases).

Conclusions: The risk of retinal damage in craniomaxillofacial trauma varies with the struck region, and the damage is closely related to the IOP variation caused by indirect blunt eye trauma.

Translational Relevance: This FE eye model allows us to evaluate and understand indirect ocular injury mechanisms in craniomaxillofacial trauma, supporting better clinical diagnosis and treatment.
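The correlation analysis described above pairs each retinal strain time history with the corresponding IOP time history and reports a Pearson coefficient. A minimal sketch of that computation, using synthetic placeholder signals rather than actual simulation output (the pressure-pulse shape, time window, and scaling below are all assumptions for illustration):

```python
import numpy as np

# Synthetic 10 ms impact window (assumed), sampled at 200 points.
t = np.linspace(0.0, 0.01, 200)

# Placeholder IOP time history: a baseline plus a short pressure pulse.
iop = 2.0 + 1.5 * np.exp(-((t - 0.004) / 0.001) ** 2)

# Placeholder retinal strain that largely tracks IOP, with small oscillation.
strain = 0.05 * iop + 0.001 * np.sin(2e3 * np.pi * t)

# Pearson correlation between the two time histories.
r = np.corrcoef(iop, strain)[0, 1]
print(f"r = {r:.3f}")
```

Because the synthetic strain is essentially a scaled copy of the IOP pulse, r here is close to 1; the abstract's finding (r > 0.8 in every case) corresponds to applying this computation to each simulated impact scenario.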
Introduction: Eye-tracking technology provides a reliable and cost-effective approach to characterizing mental representation through specific gaze patterns. Mental rotation tasks, which involve the mental representation and transformation of visual information, have been widely used to examine visuospatial ability. In these tasks, participants visually perceive three-dimensional (3D) objects and mentally rotate them until they can identify whether the paired objects are identical or mirrored. In most studies, the 3D objects are presented as two-dimensional (2D) images on a computer screen. Visual neuroscience, however, increasingly investigates visual behavior in response to naturalistic stimuli rather than image stimuli. Virtual reality (VR) is an emerging technology for providing naturalistic stimuli, allowing behavioral features to be investigated in an immersive environment similar to the real world. However, mental rotation tasks using 3D objects in immersive VR have rarely been reported.

Methods: We designed a VR mental rotation task using 3D stimuli presented in a head-mounted display (HMD). An eye tracker incorporated into the HMD recorded eye movements synchronously during the task. The stimuli were virtual paired objects oriented at specific angular disparities (0°, 60°, 120°, and 180°). Thirty-three participants were recruited and asked to determine whether the paired 3D objects were identical or mirrored.

Results: Behavioral results showed that response times were longer when comparing mirrored objects than identical objects. Eye-movement results showed that the percent fixation time, the number of within-object fixations, and the number of saccades were significantly lower for mirrored objects than for identical objects, providing further explanation for the behavioral results.

Discussion: We examined behavioral and eye-movement characteristics during a VR mental rotation task using 3D stimuli. Significant differences were observed in response times and eye-movement metrics between identical and mirrored objects. The eye-movement data provided further explanation for the behavioral results in the VR mental rotation task.
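The eye-movement metrics named above (percent fixation time and saccade count) can be derived from a classified event stream. The sketch below uses a toy event list with invented durations; the study's actual definitions (e.g., fixation time relative to trial duration, and the area-of-interest test needed to count *within-object* fixations) may differ and are not reproduced here:

```python
# Toy stream of classified gaze events: (event type, duration in ms).
# Labels and durations are illustrative, not the study's data format.
events = [
    ("fixation", 220), ("saccade", 40), ("fixation", 180),
    ("saccade", 35), ("fixation", 260), ("saccade", 30),
]

total_ms = sum(d for _, d in events)
fixation_ms = sum(d for kind, d in events if kind == "fixation")

# Percent fixation time: share of the trial spent fixating.
percent_fixation = 100.0 * fixation_ms / total_ms

# Saccade count: number of saccade events in the trial.
n_saccades = sum(1 for kind, _ in events if kind == "saccade")

print(f"percent fixation time: {percent_fixation:.1f}%")
print(f"saccades: {n_saccades}")
```

Counting within-object fixations would additionally require checking each fixation's gaze position against the bounding volume of the 3D object, which is omitted here.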