The evolution of communicative signals involves a major hurdle: signals need to effectively stimulate the sensory systems of their targets. Therefore, sensory specializations of target animals are important sources of selection on signal structure. Here we report the discovery of an animal signal that uses a previously unknown communicative modality, infrared radiation or "radiant heat," which capitalizes on the infrared sensory capabilities of the signal's target. California ground squirrels (Spermophilus beecheyi) add an infrared component to their snake-directed tail-flagging signals when confronting infrared-sensitive rattlesnakes (Crotalus oreganus), but tail flag without augmenting infrared emission when confronting infrared-insensitive gopher snakes (Pituophis melanoleucus). Experimental playbacks with a biorobotic squirrel model reveal this signal's communicative function. When the infrared component was added to the tail-flagging display of the robotic models, rattlesnakes exhibited a greater shift from predatory to defensive behavior than during control trials in which tail flagging included no infrared component. These findings provide exceptionally strong support for the hypothesis that the sensory systems of signal targets should, in general, channel the evolution of signal structure. Furthermore, the discovery of previously undescribed signaling modalities such as infrared radiation should encourage us to overcome our own human-centered sensory biases and more fully examine the form and diversity of signals in the repertoires of many animal species.

Keywords: animal communication | signal evolution | multimodal communication
Congenital heart disease (CHD) is the most common birth defect. Fetal survey ultrasound is recommended worldwide and includes five views of the heart that together could detect 90% of complex CHD; in practice, however, sensitivity is as low as 30%. We hypothesized that poor detection results from challenges in acquiring and interpreting diagnostic-quality cardiac views, and that deep learning could improve complex CHD detection. Using 107,823 images from 1,326 retrospective echocardiograms and surveys of 18-24 week fetuses, we trained an ensemble of neural networks to (i) identify recommended cardiac views and (ii) distinguish between normal hearts and complex CHD. Finally, (iii) we used segmentation models to calculate standard fetal cardiothoracic measurements. In a test set of 4,108 fetal surveys (0.9% CHD prevalence; >4.4 million images, about 400 times the size of the training dataset), the model achieved an AUC of 0.99, 95% sensitivity (95% CI, 84-99), 96% specificity (95% CI, 95-97), and 100% NPV in distinguishing normal from abnormal hearts. Sensitivity was comparable, task for task, to clinicians' and remained robust on external and lower-quality images. The model's decisions were based on clinically relevant features. Cardiac measurements correlated with reported measures for normal and abnormal hearts. Applied to guidelines-recommended imaging, ensemble learning models could significantly improve detection of fetal CHD and expand telehealth options for prenatal care at a time when the COVID-19 pandemic has further limited patient access to trained providers. This is the first use of deep learning to approximately double standard clinical performance on a critical and global diagnostic challenge.
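For readers less familiar with screening statistics, the reported sensitivity, specificity, and NPV all derive from a single confusion matrix. The sketch below shows that relationship; the counts are illustrative placeholders chosen to roughly match the abstract's reported rates, not the study's actual data.

```python
# Hypothetical sketch: how screening metrics relate to confusion-matrix counts.
# tp/fn/tn/fp values below are illustrative, not the study's data.

def screening_metrics(tp, fn, tn, fp):
    """Return sensitivity, specificity, and NPV as fractions."""
    sensitivity = tp / (tp + fn)   # abnormal hearts correctly flagged
    specificity = tn / (tn + fp)   # normal hearts correctly cleared
    npv = tn / (tn + fn)           # confidence that a "normal" call is right
    return sensitivity, specificity, npv

# Illustrative counts at ~0.9% prevalence in a set of ~4,108 surveys
sens, spec, npv = screening_metrics(tp=35, fn=2, tn=3908, fp=163)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} NPV={npv:.4f}")
# → sensitivity=0.95 specificity=0.96 NPV=0.9995
```

Note that at such low disease prevalence, NPV stays near 100% even with imperfect sensitivity, which is why it is reported alongside the other two metrics.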
Deep learning (DL) requires labeled data. Labeling medical images requires medical expertise, which is often a bottleneck. It is therefore useful to prioritize labeling those images that are most likely to improve a model's performance, a practice known as instance selection. Here we introduce ENRICH, a method that selects images for labeling based on how much novelty each image adds to the growing training set. In our implementation, we use cosine similarity between autoencoder embeddings to measure that novelty. We show that ENRICH achieves nearly maximal performance on classification and segmentation tasks using only a fraction of available images, and outperforms the default practice of selecting images at random. We also present evidence that instance selection may perform categorically better on medical vs. non-medical imaging tasks. In conclusion, ENRICH is a simple, computationally efficient method for prioritizing images for expert labeling for DL.
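The novelty criterion described above can be sketched as a greedy selection loop: repeatedly pick the unlabeled image whose embedding is least similar to anything already selected. This is a minimal illustration assuming images are already embedded as fixed-length vectors (e.g. by an autoencoder); the function names are illustrative, not ENRICH's actual API.

```python
# Minimal sketch of novelty-based instance selection in the spirit of ENRICH.
# Assumes each image has been embedded into a fixed-length vector.
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_by_novelty(embeddings, k):
    """Greedily pick k items, each maximally dissimilar to those already picked."""
    chosen = [0]  # seed the selection with the first image
    while len(chosen) < k:
        best, best_score = None, None
        for i in range(len(embeddings)):
            if i in chosen:
                continue
            # novelty = 1 - max similarity to anything already selected
            novelty = 1 - max(cosine_sim(embeddings[i], embeddings[j]) for j in chosen)
            if best_score is None or novelty > best_score:
                best, best_score = i, novelty
        chosen.append(best)
    return chosen
```

For example, given two near-duplicate embeddings and one distinct one, selecting two images picks one duplicate and the distinct image, skipping the redundant copy, which is the behavior that lets a fraction of the dataset approach full-dataset performance.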
Objective: Deep learning (DL) has been applied in proofs of concept across biomedical imaging, including across modalities and medical specialties. Labeled data are critical to training and testing DL models, but human expert labelers are limited. In addition, DL traditionally requires copious training data, which is computationally expensive to process and iterate over. Consequently, it is useful to prioritize using those images that are most likely to improve a model's performance, a practice known as instance selection. The challenge is determining how best to prioritize. It is natural to prefer straightforward, robust, quantitative metrics as the basis for instance selection. However, in current practice, such metrics are not tailored to, and almost never used for, image datasets.
Materials and Methods: To address this problem, we introduce ENRICH (Eliminate Noise and Redundancy for Imaging Challenges), a customizable method that prioritizes images based on how much diversity each image adds to the training set.
Results: First, we show that medical datasets are special in that, in general, each image adds less diversity than in nonmedical datasets. Next, we demonstrate that ENRICH achieves nearly maximal performance on classification and segmentation tasks on several medical image datasets using only a fraction of the available images and without up-front data labeling. ENRICH outperforms random image selection, the negative control. Finally, we show that ENRICH can also be used to identify errors and outliers in imaging datasets.
Conclusions: ENRICH is a simple, computationally efficient method for prioritizing images for expert labeling and use in DL.
While prenatal congenital heart disease (CHD) screening has improved, sensitivity remains as low as 30 percent. Standard fetal biometrics, namely cardiac axis (CA), cardiothoracic ratio (CTR), right ventricular (RV) fractional area change (FAC), left ventricular (LV) FAC, right-to-left atrial (RA:LA) area ratio, and RV:LV area ratio, are available from screening imaging and can each aid in CHD screening, but can be cumbersome to measure. Combinations of biometrics may offer further utility but are challenging to integrate at the point of care. We tested whether using these biometrics in combination has utility in CHD screening (normal vs. abnormal). Further, we tested whether automatically predicted biometrics could function similarly to manually labeled biometrics for this purpose. We included 105 fetal echocardiograms (20 normal, 85 abnormal across 12 different CHD lesions). We manually calculated the six biometrics above, performed dimensionality reduction using principal component analysis, and then clustered the resulting data by K-means. A previously developed deep learning model (Arnaout et al., Nature 2021) was also used to automatically predict biometrics for normal, tetralogy of Fallot, and hypoplastic left heart syndrome hearts, which were plotted on the above cluster map. The optimal number of clusters was four, with RV:LV ratio and CTR as the most important features distinguishing clusters. Cluster 1 comprised predominantly normal hearts, while clusters 2-4 were largely abnormal hearts (Figure 1). Sensitivity and specificity for predicting abnormal hearts (i.e., CHD) were 86% and 75%, respectively. Model-predicted biometrics landed in the same clusters as the manually labeled lesions (Figure 1). To our knowledge, this is the first use of clustering to visualize multiple fetal cardiac biometrics at once and reveal diagnostic utility. Once tested in screening ultrasounds on a larger scale, clustering of automated biometrics may be clinically useful at the screening point of care to augment scalable population-based screening.
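The analysis pipeline described above (standardize the six biometrics, reduce to principal components, then K-means with k=4) can be sketched as follows. This is a minimal NumPy-only illustration of the technique, not the study's code; in practice a library such as scikit-learn would typically be used, and the data below are random placeholders with the study's shape (105 exams, 6 biometrics), not actual measurements.

```python
# Hedged sketch of the pipeline: standardize six fetal biometrics, project
# onto principal components via SVD, then cluster with K-means (k=4, per the
# abstract). Data are random placeholders, not study measurements.
import numpy as np

rng = np.random.default_rng(0)
# columns stand in for: CA, CTR, RV FAC, LV FAC, RA:LA ratio, RV:LV ratio
biometrics = rng.normal(size=(105, 6))

# Standardize each biometric, then PCA via SVD of the centered/scaled matrix
X = (biometrics - biometrics.mean(axis=0)) / biometrics.std(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
pcs = X @ vt[:2].T  # 2D map used for visualization and clustering

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign to nearest centroid, then recompute."""
    r = np.random.default_rng(seed)
    centroids = points[r.choice(len(points), size=k, replace=False)].copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = points[labels == j].mean(axis=0)
    return labels

labels = kmeans(pcs, k=4)  # one of four cluster labels per echocardiogram
```

Each exam then carries a cluster label, and "abnormal" prediction amounts to checking whether the exam falls outside the predominantly normal cluster.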