Context.-Immunohistochemistry (IHC) is important for cytology but poses special challenges because preanalytic conditions may differ from the conditions of IHC-positive controls.

Objectives.-To broadly survey cytology laboratories to quantify preanalytic platforms for cytology IHC and identify problems with particular platforms or antigens. To discover how validation guidelines for HER2 testing have affected cytology.

Design.-

Conclusions.-The platforms for cytology IHC and positive controls differ for most laboratories, yet conditions are uncommonly adjusted for cytology specimens. Except for the unsuitability of air-dried smears for HER2 testing, the survey did not reveal evidence of systematic problems with any antibody or platform.

(Arch Pathol Lab Med. 2014;138:1167-1172; doi: 10.5858/arpa.2013-0259-CP)

The goal of cytology is to use the smallest possible biopsy for diagnosis, thereby reducing risk and discomfort for patients, facilitating population-based cancer detection programs, allowing faster diagnosis than can be achieved with larger biopsies, and saving money for the health care system. Even with large surgical biopsies, immunohistochemistry (IHC) is often needed for diagnosis and for the determination of prognostic markers. The cytology literature documents the suitability of cytology specimens for IHC1 and the importance of IHC in allowing patients to realize the benefits of cytology.
Cytologic-histologic correlations can be performed retrospectively, during initial case review, or both. At minimum, all available slides should be reviewed for a Papanicolaou test interpreted as high-grade squamous intraepithelial lesion with negative biopsies. The preferred monitor for correlations is the positive predictive value of a Papanicolaou test. Laboratories should design cytologic-histologic correlation programs to explore existing or perceived quality deficiencies.
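The positive predictive value monitor described above is a simple ratio of biopsy-confirmed abnormal Pap tests to all abnormal Pap tests with follow-up biopsy. A minimal sketch, using hypothetical counts (not data from any cited study):

```python
# Hypothetical cytologic-histologic correlation tally; the numbers are
# illustrative only, not from the article.
abnormal_pap_with_biopsy = 50   # abnormal Pap results that had follow-up biopsy
biopsy_confirmed = 40           # of those, biopsies confirming the lesion

# PPV = confirmed positives / all abnormal Pap tests with histologic follow-up
ppv = biopsy_confirmed / abnormal_pap_with_biopsy
print(f"Pap test PPV: {ppv:.0%}")
```

A laboratory could trend this ratio over time as its correlation-program quality monitor, as the guideline suggests.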
Objective.-To characterize Pap test result interpretive bias when the HPV status is known at the microscopic evaluation.

Design.-Forty HPV-positive liquid-based Pap test results initially interpreted as negative for squamous intraepithelial lesion or malignancy were selected from a quality assurance program, separated into 2 groups of 20 slides each, and circulated to 22 members of the College of American Pathologists Cytopathology Committee. Each member reviewed each case and indicated whether the result was negative for squamous intraepithelial lesion or malignancy or was an epithelial cell abnormality (ECA). The participants assessed the severity of ECAs using the Bethesda System. The participants were not informed of the HPV status in the initial review round. Each group of 20 slides was then distributed to the opposite group (to avoid slide recall), and the participants were informed that all slides were from patients who were high-risk HPV positive. Differences in the responses between groups were analyzed by χ2 test and Cochran-Mantel-Haenszel test at the .05 significance level.

Results.-Without knowledge of the HPV status, slides were more often categorized as negative for squamous intraepithelial lesion or malignancy and less likely identified as an ECA (P < .001). There was an increase across all categories of ECAs in the biased responses compared with the unbiased responses (P = .002).

Conclusions.-Knowledge of positive HPV status biases morphologic Pap test result interpretation. If the HPV status is positive, observers are more likely to report a Pap test result as abnormal across all categories of ECAs.
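The blinded-versus-disclosed comparison above reduces, in its simplest form, to a 2x2 chi-squared test on NILM versus ECA call counts per review round. A self-contained sketch with hypothetical counts (the study's actual tallies are not reproduced here):

```python
def chi_squared_2x2(table):
    """Pearson chi-squared statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    # Expected counts under independence: row_total * col_total / n
    expected = [[(a + b) * (a + c) / n, (a + b) * (b + d) / n],
                [(c + d) * (a + c) / n, (c + d) * (b + d) / n]]
    observed = [[a, b], [c, d]]
    return sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(2) for j in range(2))

# Hypothetical counts: [NILM calls, ECA calls] per review round
blinded = [300, 140]      # HPV status not disclosed
disclosed = [200, 240]    # HPV-positive status disclosed
stat = chi_squared_2x2([blinded, disclosed])
CRITICAL_05_DF1 = 3.841   # chi-squared critical value, df = 1, alpha = .05
print(f"chi2 = {stat:.2f}; significant at .05: {stat > CRITICAL_05_DF1}")
```

The study additionally used a Cochran-Mantel-Haenszel test to stratify across ECA severity categories; that extension is not shown here.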
In this interlaboratory comparison educational program, accurate identification of ACC has been shown to be problematic, with ACC representing an important cause of false-negative responses. The most common diagnostic pitfall is distinguishing this entity from pleomorphic and monomorphic adenoma in the benign category and from lymphoma and adenocarcinoma in the malignant category.
Context.-A common laboratory practice is to repeat critical values before reporting the test results to the clinical care provider. This may be an unnecessary step that delays the reporting of critical test results without adding value to the accuracy of the test result.

Objectives.-To determine the proportion of repeated chemistry and hematology critical values that differ significantly from the original value as defined by the participating laboratory, to determine the threshold differences defined by the laboratory as clinically significant, and to determine the additional time required to analyze the repeat test.

Design.-Participants prospectively reviewed critical test results for 4 laboratory tests: glucose, potassium, white blood cell count, and platelet count. Participants reported the following information: initial and repeated test results; times the initial and repeat results were first known to laboratory staff; critical result notification time; whether the repeat result was still a critical result; whether the repeat result was significantly different from the initial result, as judged by the laboratory professional or policy; the significant difference threshold, as defined by the laboratory; and the make and model of the instruments used for primary and repeat testing.

Results.-Routine repeat analysis of critical values is a common practice. Most laboratories did not formally define a significant difference between repeat results. Repeated results were rarely considered significantly different. In 10% of laboratories, median repeat times were at least 17 to 21 minutes.
Twenty percent of laboratories reported at least 1 incident in the last calendar year of delayed result reporting that clinicians indicated had adversely affected patient care.

Conclusion.-Routine repeat analysis of automated chemistry and hematology critical values is unlikely to be clinically useful and may adversely affect patient care.

This Q-Probes study was designed to determine the proportion of repeated critical results, the proportion of
- Dissemination of the 2014 evidence-based guideline on validation practices had a positive impact on laboratory performance; nearly 80% of respondents have adopted some or all of its recommendations.