Since the publication of the last US national burden of skin disease report in 2006, there have been substantial changes in the practice of dermatology and the US health care system. These include the development of new treatment modalities, marked increases in the cost of medications, increasingly complex payer rules and regulations, and the aging of the US population. Recognizing the need for up-to-date data to inform researchers, policy makers, public stakeholders, and health care providers about the impact of skin disease on patients and US society, the American Academy of Dermatology produced a new national burden of skin disease report. Using 2013 claims data from private and governmental insurance providers, this report analyzed the prevalence, cost, and mortality attributable to 24 skin disease categories in the US population. In this first of 3 articles, the presented data demonstrate that nearly 85 million Americans were seen by a physician for at least 1 skin disease in 2013. This led to an estimated direct health care cost of $75 billion and an indirect lost opportunity cost of $11 billion. Further, mortality was noted in half of the 24 skin disease categories.
Humans often produce vocalizations for infants that differ from vocalizations for adults. Is this property common across societies? The forms of infant-directed vocalizations may be shaped by their function in parent-infant communication. If so, infant-directed song and speech should be differentiable from adult-directed song and speech on the basis of their acoustic features, and this property should be relatively invariant across cultures. To test this hypothesis, we built a corpus of 1,614 recordings of infant- and adult-directed singing and speech produced by 411 people living in 21 urban, rural, and small-scale societies. We studied the corpus in a massive online experiment and in a series of acoustic analyses. Naïve listeners (N = 13,218) reliably identified infant-directed vocalizations as infant-directed, and adult-directed speech (but not songs) as adult-directed, at rates far higher than chance. Ratings of infant-directed song were the most accurate and the most consistent across all societies; infant-directed speech was accurately identified on average, but inconsistently across societies. To determine the mechanisms underlying these results, we extracted many acoustic features from each recording and identified those that most reliably characterize infant-directed song and speech across cultures, via preregistered exploratory-confirmatory analyses and machine classification. The features distinguishing infant- and adult-directed song and speech concerned pitch, rhythmic, phonetic, and timbral attributes; a hypothesis-free classifier with cross-validation across societies reliably identified all vocalization types, with highest accuracy for infant-directed song. Last, we isolated 12 acoustic features that were predictive of perceived infant-directedness; of these, two pitch attributes (median F0 and its variability) were by far the most explanatory.
These findings demonstrate cross-cultural regularities in infant-directed vocalizations that are suggestive of universality; moreover, infant-directed song appears to be more cross-culturally stereotyped than infant-directed speech, informing hypotheses of the functions and evolution of both.
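The classifier evaluation described above hinges on cross-validation *across societies*: the model must identify vocalization types in a society it never saw during training. A minimal sketch of that scheme is below, using synthetic feature vectors and a simple nearest-centroid classifier as illustrative stand-ins; the feature values, society names, and classifier are assumptions for illustration, not the study's actual pipeline.

```python
# Sketch of leave-one-society-out cross-validation, the evaluation scheme
# described in the abstract. The toy "acoustic features" (loosely, median F0
# and F0 variability), the society labels, and the nearest-centroid
# classifier are all illustrative stand-ins, not the study's data or model.
import random
from collections import defaultdict

random.seed(0)

TYPES = ["ID-song", "ID-speech", "AD-song", "AD-speech"]

def make_recording(vtype):
    # Draw two features so each vocalization type clusters around its own mean.
    base = {"ID-song": (300, 60), "ID-speech": (260, 80),
            "AD-song": (220, 40), "AD-speech": (180, 30)}[vtype]
    return [base[0] + random.gauss(0, 5), base[1] + random.gauss(0, 5)]

# Corpus: recordings tagged by society and vocalization type.
societies = [f"society_{i}" for i in range(6)]
corpus = [(s, t, make_recording(t))
          for s in societies for t in TYPES for _ in range(10)]

def centroids(train):
    # Mean feature vector per vocalization type.
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for _, t, x in train:
        sums[t][0] += x[0]; sums[t][1] += x[1]; sums[t][2] += 1
    return {t: (v[0] / v[2], v[1] / v[2]) for t, v in sums.items()}

def classify(x, cents):
    # Assign the type whose centroid is nearest in feature space.
    return min(cents, key=lambda t: (x[0] - cents[t][0]) ** 2
                                  + (x[1] - cents[t][1]) ** 2)

# Leave one society out: train on the rest, test on the held-out society.
correct = total = 0
for held_out in societies:
    train = [r for r in corpus if r[0] != held_out]
    test = [r for r in corpus if r[0] == held_out]
    cents = centroids(train)
    for _, t, x in test:
        correct += (classify(x, cents) == t)
        total += 1

accuracy = correct / total
print(f"cross-society accuracy: {accuracy:.2f}")
```

Because every test recording comes from a society absent from training, above-chance accuracy can only reflect regularities that generalize across societies, which is the point of the design.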
Background Chest x-ray is a relatively accessible, inexpensive, fast imaging modality that might be valuable in the prognostication of patients with COVID-19. We aimed to develop and evaluate an artificial intelligence system using chest x-rays and clinical data to predict disease severity and progression in patients with COVID-19. Methods We did a retrospective study in multiple hospitals in the University of Pennsylvania Health System in Philadelphia, PA, USA, and Brown University affiliated hospitals in Providence, RI, USA. Patients who presented to a hospital in the University of Pennsylvania Health System via the emergency department, with a diagnosis of COVID-19 confirmed by RT-PCR and with an available chest x-ray from their initial presentation or admission, were retrospectively identified and randomly divided into training, validation, and test sets (7:1:2). Using the chest x-rays as input to an EfficientNet deep neural network and clinical data, models were trained to predict the binary outcome of disease severity (ie, critical or non-critical). The deep-learning features extracted from the model and clinical data were used to build time-to-event models to predict the risk of disease progression. The models were externally tested on patients who presented to an independent multicentre institution, Brown University affiliated hospitals, and compared with severity scores provided by radiologists. Findings 1834 patients who presented via the University of Pennsylvania Health System between March 9 and July 20, 2020, were identified and assigned to the model training (n=1285), validation (n=183), or testing (n=366) sets. 475 patients who presented via the Brown University affiliated hospitals between March 1 and July 18, 2020, were identified for external testing of the models. 
When chest x-rays were added to clinical data for severity prediction, area under the receiver operating characteristic curve (ROC-AUC) increased from 0·821 (95% CI 0·796–0·828) to 0·846 (0·815–0·852; p<0·0001) on internal testing and 0·731 (0·712–0·738) to 0·792 (0·780–0·803; p<0·0001) on external testing. When deep-learning features were added to clinical data for progression prediction, the concordance index (C-index) increased from 0·769 (0·755–0·786) to 0·805 (0·800–0·820; p<0·0001) on internal testing and 0·707 (0·695–0·729) to 0·752 (0·739–0·764; p<0·0001) on external testing. The combined image and clinical data model had significantly better prognostic performance than combined severity scores and clinical data on internal testing (C-index 0·805 vs 0·781; p=0·0002) and external testing (C-index 0·752 vs 0·715; p<0·0001). Interpretation In patients with COVID-19, artificial intelligence based on chest x-rays had better prognostic performance than clinical data or radiologist-derived severity scores. Using artificial intelligence, chest x-rays can augment clinical data i...
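The progression results above are reported as a concordance index (C-index), which measures how often a model's predicted risks correctly order patients by their observed time to an event, accounting for censoring. A minimal sketch of Harrell's C-index follows; the patient times, event indicators, and risk scores are made-up toy values, not data from the study.

```python
# Minimal sketch of Harrell's concordance index (C-index), the time-to-event
# metric reported in the abstract. All inputs below are illustrative.
def c_index(times, events, risks):
    """Fraction of comparable patient pairs whose predicted risks are
    correctly ordered: the patient who progresses earlier should carry the
    higher risk score.

    times:  observed follow-up times
    events: 1 if progression was observed, 0 if the patient was censored
    risks:  model-predicted risk scores (higher = worse prognosis)
    """
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if patient i had an observed event
            # and it occurred before patient j's observed time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5  # ties get half credit
    return concordant / comparable

# Toy cohort: risk ordering matches event ordering except for one patient.
times  = [2, 4, 6, 8, 10]
events = [1, 1, 0, 1, 0]
risks  = [0.9, 0.5, 0.7, 0.6, 0.2]
print(f"C-index: {c_index(times, events, risks):.3f}")  # → C-index: 0.750
```

A C-index of 0·5 corresponds to random ordering and 1·0 to perfect ordering, so the reported gains from 0·707 to 0·752 on external testing reflect a genuine improvement in risk ranking.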