Purpose: There has been increased interest in using telepractice for involving more diverse children in research and clinical services, as well as when in-person assessment is challenging, such as during COVID-19. Little is known, however, about the feasibility, reliability, and validity of language samples when conducted via telepractice. Method: Child language samples from parent–child play were recorded either in person in the laboratory or via video chat at home, using parents' preferred commercially available software on their own device. Samples were transcribed and analyzed using Systematic Analysis of Language Transcripts (SALT) software. Analyses compared measures between subjects for 46 dyads who completed video chat language samples versus 16 who completed in-person samples; within-subjects analyses were conducted for a subset of 13 dyads who completed both types. Groups did not differ significantly on child age, sex, or socioeconomic status. Results: The number of usable samples and the percentage of utterances with an intelligible audio signal did not differ significantly for in-person versus video chat language samples. Child speech and language characteristics (including mean length of utterance, type–token ratio, number of different words, grammatical errors/omissions, and child speech intelligibility) did not differ significantly between in-person and video chat methods. This was the case for both between-group analyses and within-child comparisons. Furthermore, transcription reliability (conducted on a subset of samples) was high and did not differ between in-person and video chat methods. Conclusions: This study demonstrates that child language samples collected via video chat are largely comparable to in-person samples in terms of key speech and language measures. Best practices for maximizing data quality when using video chat language samples are provided.
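The transcript measures named above can be illustrated with a minimal sketch. Note that this is a word-based approximation for illustration only: SALT computes mean length of utterance in morphemes and applies detailed transcription conventions, and the function name and sample utterances here are hypothetical.

```python
def language_sample_measures(utterances):
    """Word-based approximations of MLU, NDW, and TTR.

    `utterances` is a list of child utterances as plain strings.
    SALT itself uses morpheme-level coding; this sketch counts words.
    """
    tokens = []
    lengths = []
    for utt in utterances:
        words = utt.lower().split()
        lengths.append(len(words))
        tokens.extend(words)
    mlu = sum(lengths) / len(lengths)  # mean length of utterance (in words)
    ndw = len(set(tokens))             # number of different words (types)
    ttr = ndw / len(tokens)            # type-token ratio
    return {"MLU": mlu, "NDW": ndw, "TTR": ttr}

# Hypothetical three-utterance sample:
sample = ["doggy go", "big doggy run fast", "go"]
print(language_sample_measures(sample))
```

With 7 tokens across 3 utterances and 5 distinct word types, this sample yields MLU ≈ 2.33, NDW = 5, and TTR ≈ 0.71.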
Despite increasing emphasis on emergent brain-behavior patterns supporting language, cognitive, and socioemotional development in toddlerhood, methodological challenges impede their characterization. Toddlers are notoriously difficult to engage in brain research, leaving a developmental window in which neural processes are understudied. Further, electroencephalography (EEG) and event-related potential paradigms at this age typically employ structured, experimental tasks that rarely reflect formative naturalistic interactions with caregivers. Here, we introduce and provide proof of concept for a new "Social EEG" paradigm, in which parent–toddler dyads interact naturally during EEG recording. Parents and toddlers sit at a table together and engage in different activities, such as book sharing or watching a movie. EEG is time-locked to the video recording of their interaction. Offline, behavioral data are microcoded with mutually exclusive engagement state codes. From 216 sessions to date with 2- and 3-year-old toddlers and their parents, 72% of dyads successfully completed the full Social EEG paradigm, suggesting that it is possible to collect dual EEG from parents and toddlers during naturalistic interactions. In addition to providing naturalistic information about child neural development within the caregiving context, this paradigm holds promise for examination of emerging constructs such as brain-to-brain synchrony in parents and children.
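The core alignment step of this paradigm, mapping mutually exclusive, microcoded engagement states from the video onto time-locked EEG samples, can be sketched as follows. This is an illustrative assumption about the data layout (state label plus onset/offset in seconds), not the authors' actual pipeline; the function and variable names are hypothetical.

```python
import numpy as np

def label_eeg_by_engagement(n_samples, srate_hz, coded_intervals):
    """Assign each EEG sample an engagement-state label.

    `coded_intervals` is a list of (state, onset_s, offset_s) tuples
    microcoded from the session video; intervals are assumed mutually
    exclusive. Samples outside any interval are labeled 'uncoded'.
    """
    labels = np.array(["uncoded"] * n_samples, dtype=object)
    t = np.arange(n_samples) / srate_hz  # timestamp of each sample (s)
    for state, onset, offset in coded_intervals:
        labels[(t >= onset) & (t < offset)] = state
    return labels

# Hypothetical example: 10 s of EEG at 500 Hz with two coded states
labels = label_eeg_by_engagement(
    5000, 500.0,
    [("book_sharing", 0.0, 4.0), ("movie_watching", 4.0, 9.0)],
)
```

Once labeled, samples can be grouped by engagement state for state-specific spectral or synchrony analyses.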
Background: Much information describing a patient's cancer treatment remains in unstructured text in electronic health records (EHRs) and is not recorded in discrete data fields. Complete, accurate data are essential for quality-of-care improvement and for research studies on de-identified patient records, yet accessing this high-value content often requires extensive manual curation review. Methods: AstraZeneca, CancerLinQ, ConcertAI, and Tempus developed a natural language processing (NLP)-assisted process to improve clinical cohort selection for targeted curation efforts. Hybrid machine-learning model development included text classification, named entity recognition, relation extraction, and false-positive removal. A subset of nearly 60,000 lung cancer cases was included from the CancerLinQ database, which comprises multiple source EHR systems. NLP models extracted EGFR status, stage, histology, radiation therapy, surgical resection, and oral medications. Based on the results, cases were selected for additional manual curation, where curators confirmed findings of the NLP-processed data. Results: NLP methods improved cohort identification. Successfully returned cases using the NLP method ranged from 75.2% to 96.5%, compared with more general case selection criteria based on limited structured data. For all cohorts combined, 84.2% of the cases sent out for NLP curation were returned with curated content (Table). Each cohort contained a range of NLP-derived elements for curators to further review. In comparison, more general case selection criteria yielded a total of 3,878 cases returned out of 41,186 lung cancer cases sent for curation, a success rate of only 9.6%. Conclusions: NLP-driven case selection of six distinct, complex lung cancer cohorts resulted in an order-of-magnitude improvement in eligibility over candidate selection using structured EHR data alone. This study demonstrates that NLP-assisted approaches can significantly improve efficiency in curating unstructured health data. [Table: see text]
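The flagging step in such a pipeline, routing only notes that appear to mention a target element (here, EGFR status) to manual curation, can be sketched with a simplified rule-based stand-in. The actual system described above used hybrid machine-learning models (text classification, NER, relation extraction, false-positive removal); the regex, function name, and sample notes below are illustrative assumptions only.

```python
import re

# Toy pattern: "EGFR" followed within 40 characters by a status term.
EGFR_PATTERN = re.compile(
    r"\bEGFR\b.{0,40}?\b(positive|negative|mutant|wild[- ]type)\b",
    re.IGNORECASE | re.DOTALL,
)

def select_candidates(notes):
    """Return indices of notes that appear to state an EGFR status,
    for routing to manual curation; non-matching notes are skipped."""
    return [i for i, text in enumerate(notes) if EGFR_PATTERN.search(text)]

# Hypothetical de-identified note snippets:
notes = [
    "Pathology: adenocarcinoma. EGFR mutation testing: positive (exon 19).",
    "Patient counseled on smoking cessation; no molecular testing documented.",
    "EGFR wild-type; consider immunotherapy.",
]
print(select_candidates(notes))  # → [0, 2]
```

Pre-filtering in this way is what drives the reported gain: curators review a candidate pool enriched for the target elements rather than the full case population.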