Context.—Syphilis serology screening in laboratory practice is evolving. Traditionally, the syphilis screening algorithm begins with a nontreponemal immunoassay, which is manually performed by a laboratory technologist. In contrast, the reverse algorithm begins with a treponemal immunoassay, which can be automated. The Centers for Disease Control and Prevention has recognized both approaches, but little is known about the current state of laboratory practice, which could impact test utilization and interpretation. Objective.—To assess the current state of laboratory practice for syphilis serologic screening. Design.—In August 2015, a voluntary questionnaire was sent to the 2360 laboratories that subscribe to the College of American Pathologists syphilis serology proficiency survey. Results.—Of the laboratories surveyed, 98% (2316 of 2360) returned the questionnaire, and about 83% (1911 of 2316) responded to at least some questions. Twenty-eight percent (378 of 1364) reported revision of their syphilis screening algorithm within the past 2 years, and 9% (170 of 1905) of laboratories anticipated changing their screening algorithm in the coming year. Sixty-three percent (1205 of 1911) reported using the traditional algorithm, 16% (304 of 1911) reported using the reverse algorithm, and 2.5% (47 of 1911) reported using both algorithms, whereas 9% (169 of 1911) reported not performing a reflex confirmation test. Of those performing the reverse algorithm, 74% (282 of 380) implemented a new testing platform when introducing the new algorithm. Conclusion.—The majority of laboratories still perform the traditional algorithm, but a significant minority have implemented the reverse-screening algorithm. 
Although the nontreponemal immunologic response typically wanes after cure and becomes undetectable, treponemal immunoassays typically remain positive for life, and it is important for laboratorians and clinicians to consider these assay differences when implementing, using, and interpreting serologic syphilis screening algorithms.
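The traditional and reverse algorithms described above differ only in which assay leads and what reflexes follow. A minimal sketch of that decision flow is below; the function names and result strings are illustrative, and the reflex rules (reactive screen reflexes to the other assay class; discordant reverse-algorithm results reflex to a second, different treponemal test) follow general CDC guidance, not any single laboratory's protocol.

```python
# Illustrative decision-flow sketch of the two syphilis screening algorithms.
# All names and result strings are hypothetical; real interpretation requires
# clinical correlation and local laboratory policy.

def traditional_screen(nontreponemal_reactive, treponemal_reactive=None):
    """Traditional algorithm: manual nontreponemal test first."""
    if not nontreponemal_reactive:
        return "negative screen"
    # A reactive nontreponemal result reflexes to a treponemal assay.
    if treponemal_reactive:
        return "syphilis (current or past); clinical correlation required"
    return "possible biologic false-positive nontreponemal result"

def reverse_screen(treponemal_reactive, nontreponemal_reactive=None,
                   second_treponemal_reactive=None):
    """Reverse algorithm: automatable treponemal immunoassay first."""
    if not treponemal_reactive:
        return "negative screen"
    # A reactive treponemal result reflexes to a nontreponemal test.
    if nontreponemal_reactive:
        return "syphilis (current or past); clinical correlation required"
    # Discordant results reflex to a second, different treponemal assay.
    if second_treponemal_reactive:
        return "probable past or early syphilis; clinical correlation required"
    return "initial treponemal result likely false positive"
```

Because treponemal antibodies typically persist for life, the reverse algorithm's first-line test cannot by itself distinguish treated past infection from active disease, which is why the reflex steps above are essential.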
Ordering and testing practices for the screening and diagnosis of monoclonal gammopathy vary widely across laboratories. Improving utilization management and report content, as well as recognizing and developing laboratory-directed testing guidelines, may enhance the clinical value of testing.
Objective This study was conducted to determine the spectrum of laboratory practices in ANA test target, performance, and result reporting. Methods A questionnaire on ANA testing was distributed by the Diagnostic Immunology and Flow Cytometry Committee (DIFCC) of the College of American Pathologists (CAP) to laboratories participating in the 2016 CAP ANA proficiency survey. Results Of 5847 survey kits distributed, 1206 laboratories (21%) responded. ANA screening method varied: 55% indirect immunofluorescence assay (IFA), 21% enzyme-linked immunosorbent assay (ELISA), 12% multi-bead immunoassay, and 18% “other” methods (percentages sum to more than 100% because some laboratories reported more than one method). The ordering test name indicated the method used in only 32% of laboratories, and only 39% stated the method used on the report. Of 644 laboratories, 80% used HEp-2 cell substrate, 18% HEp-2000 (a HEp-2 cell line engineered to overexpress the SSA antigen, Ro60), and 2% “other.” Slides were prepared manually (67%) or on an automated platform (33%) and examined by direct microscopy (84%) or as images captured by an automated platform (16%). Only 50% reported a positive result at the customary 1:40 dilution. Titre was reported to endpoint routinely by 43%, only upon request by 23%, and never by 35%. Eight percent of laboratories did not report dual patterns; of those reporting multiple patterns, 23% did not report a titre with each pattern. Conclusion ANA methodology, practice, test naming, and reporting vary significantly between laboratories. Lack of uniformity in testing and reporting practice and lack of transparency in communicating the testing method may misdirect clinicians in their management of patients.
The Q-Tracks program has established multiple benchmarks in most disciplines of the laboratory and has demonstrated significant performance improvement in benchmarks and individual laboratories over time.
Context.—Many production systems employ standardized statistical monitors that measure defect rates and cycle times as indices of performance quality. Clinical laboratory testing, a system that produces test results, is amenable to such monitoring. Objective.—To demonstrate patterns in clinical laboratory testing defect rates and cycle times using 7 College of American Pathologists Q-Tracks program monitors. Design.—Subscribers measured monthly rates of outpatient order-entry errors, identification band defects, and specimen rejections; median troponin order-to-report cycle times and rates of STAT test receipt-to-report turnaround time outliers; and rates of critical values reporting event defects and corrected reports. From these submissions, Q-Tracks program staff produced quarterly and annual reports that charted each subscriber's performance relative to other participating laboratories, as well as aggregate and subgroup performance over time, dividing participants into best performers, median performers, and performers with the most room to improve. Each monitor's pattern of change presents percentile distributions of subscribers' performance in relation to monitoring duration and number of participating subscribers. Changes over time in defect frequencies and cycle duration quantify the effect of monitor participation on performance. Results.—All 7 monitors, which ran variously for 6, 6, 7, 11, 12, 13, and 13 years, registered significant decreases in defect rates. The most striking decreases occurred among performers who initially had the most room to improve and among subscribers who participated the longest. Participation effects ranged from 0.85% to 5.1% improvement per quarter of participation. Conclusions.—Using statistical quality measures, collecting data monthly, and receiving reports quarterly and yearly, subscribers to a comparative monitoring program documented significant decreases in defect rates and shortening of a cycle time over 6 to 13 years in all 7 ongoing clinical laboratory quality monitors.
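The comparative reporting described above rests on two simple computations: each subscriber's defect rate per period, and that subscriber's position within the percentile distribution of its peers. The sketch below is not the Q-Tracks implementation; all numbers and function names are invented for illustration.

```python
# Hypothetical sketch of comparative defect-rate monitoring.
# Data and names are illustrative, not from the Q-Tracks program.

def defect_rate(defects, opportunities):
    """Defects per 100 opportunities (e.g., rejected specimens per 100 received)."""
    return 100.0 * defects / opportunities

def percentile_rank(value, peer_values):
    """Percentage of peers with a defect rate at or above `value` (lower rates are better)."""
    at_or_above = sum(1 for v in peer_values if v >= value)
    return 100.0 * at_or_above / len(peer_values)

# Five peer laboratories: (defects, opportunities) for one month.
peer_rates = [defect_rate(d, n) for d, n in
              [(12, 4000), (30, 5000), (8, 3500), (45, 6000), (20, 4200)]]

my_rate = defect_rate(10, 4100)
print(round(my_rate, 3))                    # 0.244 defects per 100 specimens
print(percentile_rank(my_rate, peer_rates)) # 80.0: better than most peers
```

Tracking these two quantities quarter over quarter is what lets a program separate best performers, median performers, and those with the most room to improve, and measure each group's change with continued participation.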