Arboreal animals face numerous challenges when negotiating complex three-dimensional terrain. Directed aerial descent or gliding flight allows rapid traversal of arboreal environments but presents control challenges. Some animals, such as birds and gliding squirrels, have specialized structures to modulate aerodynamic forces while airborne. However, many arboreal animals lack these specializations yet still control posture and orientation in mid-air. One of the largest inertial segments in lizards is the tail. Inertial reorientation can be used to attain postures appropriate for controlled aerial descent. Here we discuss the role of tail inertia in a range of mid-air reorientation behaviors, using experimental data from geckos in combination with mathematical and robotic models. Geckos can self-right in mid-air by tail rotation alone. Equilibrium glide behavior of geckos in a vertical wind tunnel shows that they can steer toward a visual stimulus by using rapid, circular tail rotations to control pitch and yaw. Multiple coordinated tail responses appear to be required for the most effective terminal-velocity gliding. A mathematical model allows us to explore the relationship between morphology and the capacity for inertial reorientation by conducting sensitivity analyses and testing control approaches. Robotic models further define the limits of performance and generate new control hypotheses. Such comparative analysis allows predictions about performance across the diversity of lizard morphologies and relative limb proportions, and provides insight into the evolution of aerial behaviors.
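The zero-angular-momentum reorientation described above can be illustrated with a minimal two-segment toy model (body plus tail): when total angular momentum is zero, swinging the tail through an angle causes the body to counter-rotate in proportion to the ratio of the segments' moments of inertia. The inertia values below are illustrative assumptions, not measured gecko parameters.

```python
def body_rotation(tail_rotation_deg: float, i_tail: float, i_body: float) -> float:
    """Counter-rotation of the body (degrees) for a given tail swing,
    assuming zero net angular momentum and a rigid two-segment model."""
    return -(i_tail / i_body) * tail_rotation_deg

# Illustrative: a tail with half the body's moment of inertia, swung one
# full revolution, reorients the body by half a revolution the other way.
print(body_rotation(360.0, i_tail=0.5, i_body=1.0))  # → -180.0
```

This is the essence of why a large tail is such an effective inertial appendage: the achievable body rotation scales directly with the tail's share of the animal's total moment of inertia.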
Imaging techniques are widely used for medical diagnostics. In some cases, a lack of medical practitioners who can manually analyze the images can lead to a bottleneck. Consequently, we developed a custom-made convolutional neural network (RiFNet, for Rib Fracture Network) that can detect rib fractures in postmortem computed tomography (PMCT). In a retrospective cohort study, we retrieved PMCT data from 195 postmortem cases with rib fractures, acquired between July 2017 and April 2018, from our database. The computed tomography data were prepared using a plugin in the commercial imaging software Syngo.via, whereby the rib cage was unfolded onto a single in-plane image reformation. From the 195 cases, a total of 585 images were extracted and divided into two groups labeled "with" and "without" fractures. These two groups were subsequently divided into training, validation, and test datasets to assess the performance of RiFNet. In addition, we explored the applicability of transfer learning to our dataset by choosing two independent noncommercial off-the-shelf convolutional neural network architectures (ResNet50 V2 and Inception V3) and comparing their performance with that of RiFNet. With the pre-trained networks, we achieved an F1 score of 0.64 with Inception V3 and 0.61 with ResNet50 V2, whereas RiFNet obtained an average F1 score of 0.91 ± 0.04. RiFNet is efficient at detecting rib fractures on PMCT. Transfer learning techniques are not necessarily well adapted for classification tasks in PMCT.
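The F1 score used to compare the networks is the harmonic mean of precision and recall on the binary "with"/"without" fracture labels. A minimal sketch of the metric, using hypothetical confusion-matrix counts (not the study's actual results):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall for a binary classifier."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical test-set counts: 90 true positives, 10 false positives,
# 8 false negatives.
print(round(f1_score(tp=90, fp=10, fn=8), 3))  # → 0.909
```

Because F1 ignores true negatives, it is a common choice when the classes are imbalanced, as fracture-positive images typically are in screening tasks.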
Object recognition tests are widely used in neuroscience to assess memory function in rodents. Despite the experimental simplicity of the task, interpreting which behavioural features count as object exploration can be complicated. Thus, object exploration is often analysed by manual scoring, which is time-consuming and variable across researchers. Current software based on tracking points often lacks precision in capturing complex ethological behaviour, and switching or losing tracking points can bias outcome measures. To overcome these limitations, we developed "EXPLORE", a simple, ready-to-use, open-source pipeline. EXPLORE consists of a convolutional neural network, trained in a supervised manner, that extracts features from images and classifies the behaviour of rodents near a presented object. EXPLORE achieves human-level accuracy in identifying and scoring exploration behaviour and outperforms commercial software with higher precision, greater versatility, and lower time investment, particularly in complex situations. By labeling the respective training dataset, users decide for themselves which types of animal interactions with objects are included or excluded, ensuring a precise analysis of exploration behaviour. A set of graphical user interfaces (GUIs) provides beginning-to-end analysis of object recognition tests, enabling fast and reproducible data analysis without requiring expertise in programming or deep learning.
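A frame-classification pipeline of this kind ultimately reduces to a post-processing step: per-frame binary predictions ("exploring" or "not exploring") are summed and converted into an exploration time. The sketch below illustrates that aggregation step only; it is not EXPLORE's actual API, and the labels and frame rate are invented for illustration.

```python
def exploration_seconds(frame_labels: list[int], fps: float) -> float:
    """Total exploration time: count frames classified as exploring (1)
    and convert the count to seconds using the video frame rate."""
    return sum(frame_labels) / fps

# Toy per-frame classifier output for an 8-frame clip at 2 frames/second:
labels = [0, 1, 1, 1, 0, 0, 1, 0]
print(exploration_seconds(labels, fps=2.0))  # → 2.0
```

Because the classifier operates on whole image frames rather than tracked body points, a lost or switched tracking point cannot silently corrupt this sum, which is the bias the abstract highlights in point-tracking software.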