Background: When applying genomic medicine to a rare disease patient, the primary goal is to identify one or more genomic variants that may explain the patient's phenotypes. Typically, this is done through annotation, filtering, and prioritization of variants for manual curation. However, variant prioritization in rare disease patients remains a challenging task due to the high degree of variability in phenotype presentation and in the molecular source of disease. Methods that can identify and/or prioritize variants to be clinically reported in the presence of such variability are therefore of critical importance.

Methods: We tested the application of classification algorithms that ingest variant annotations along with phenotype information to predict whether a variant will ultimately be clinically reported and returned to a patient. To test the classifiers, we performed a retrospective study on variants that were clinically reported to 237 patients in the Undiagnosed Diseases Network.

Results: We treated the classifiers as variant prioritization systems and compared them to four variant prioritization algorithms and two single-measure controls. The trained classifiers outperformed all other tested methods, with the best classifiers ranking 72% of all reported variants and 94% of reported pathogenic variants in the top 20.

Conclusions: We demonstrated how freely available binary classification algorithms can be used to prioritize variants even in the presence of real-world variability. These classifiers outperformed all other tested methods, suggesting that they may be well suited to real rare disease patient datasets.
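The core idea in the Methods (a binary classifier ingests per-variant features and its predicted probability is used as a ranking score) can be sketched as follows. This is a minimal, self-contained illustration, not the study's actual pipeline: the feature names, toy training labels, and variant identifiers are all hypothetical, and a tiny hand-rolled logistic regression stands in for the classifiers evaluated in the paper.

```python
import math

# Hypothetical per-variant features: (deleteriousness score, allele frequency,
# phenotype-match score). Label is 1 if the variant was clinically reported,
# 0 otherwise. All values below are illustrative, not from the study.
training = [
    ((0.9, 0.0001, 0.8), 1),
    ((0.7, 0.0005, 0.9), 1),
    ((0.2, 0.1500, 0.1), 0),
    ((0.4, 0.0200, 0.3), 0),
    ((0.8, 0.0010, 0.2), 0),
    ((0.6, 0.0002, 0.7), 1),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(data, lr=0.5, epochs=2000):
    """Fit a tiny logistic-regression classifier by stochastic gradient descent."""
    n_features = len(data[0][0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def score(w, b, x):
    """Predicted probability that a variant would be clinically reported."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

w, b = train_logistic(training)

# Treat the classifier as a prioritization system: rank unseen candidate
# variants (hypothetical identifiers) by predicted probability, descending.
candidates = {
    "chr1:g.100A>T": (0.85, 0.0003, 0.9),
    "chr2:g.200C>G": (0.30, 0.0800, 0.2),
    "chr3:g.300G>A": (0.65, 0.0010, 0.6),
}
ranked = sorted(candidates, key=lambda v: score(w, b, candidates[v]), reverse=True)
```

In practice the study's classifiers consumed many more annotations plus patient phenotype information, but the ranking step is the same: sort candidates by the model's predicted probability and curate from the top of the list.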