Background
Only prototypes 5 years ago, high-speed, automated whole slide imaging (WSI) systems (also called digital slide systems, virtual microscopes, or wide-field imagers) are becoming increasingly capable and robust. Modern devices can capture a slide in 5 minutes at spatial sampling periods of less than 0.5 micron/pixel. The capacity to rapidly digitize large numbers of slides should eventually have a profound, positive impact on pathology. It is important, however, that pathologists validate these systems during development, not only to identify their limitations but also to guide their evolution.

Methods
Three pathologists fully signed out 25 cases representing 31 parts. The laboratory information system was used to simulate real-world sign-out conditions, including entering a full diagnostic field and comment (when appropriate) and ordering special stains and recuts. For each case, discrepancies between diagnoses were documented by committee, and a "consensus" report was formed and then compared with the microscope-based sign-out report from the clinical archive.

Results
In 17 of 25 cases there were no discrepancies between the individual study pathologist reports. In the remaining 8 cases there were 12 discrepancies, including 3 in which image quality could be at least partially implicated. When the WSI consensus diagnoses were compared with the original sign-out diagnoses, no significant discrepancies were found. The full text of the pathologist reports, the WSI consensus diagnoses, and the original sign-out diagnoses is available as an attachment to this publication.

Conclusion
The results indicate that the image information contained in current whole slide images is sufficient for pathologists to make reliable diagnostic decisions and compose complex diagnostic reports. This is a very positive result; however, it does not mean that WSI is as good as a microscope. Virtually every slide had focal areas in which image quality (focus and dynamic range) was less than perfect. In some cases there was evidence of over-compression and of regions made "soft" by imperfect focus. We expect that image quality and scanning speed will continue to improve, but that further validation studies will be needed to guide the development of this promising technology.
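To put the stated sampling period in perspective, a back-of-the-envelope calculation shows the data volume a single scan produces. The 0.5 micron/pixel figure comes from the abstract above; the 15 mm x 15 mm tissue area and 24-bit RGB depth are illustrative assumptions, not values from the study:

```python
# Rough data-volume estimate for one whole slide image.
# Sampling period (0.5 micron/pixel) is from the text; the tissue area
# (15 mm x 15 mm) and 3 bytes/pixel (RGB) are illustrative assumptions.

def wsi_pixels(width_mm: float, height_mm: float, sampling_um: float) -> int:
    """Number of pixels needed to cover the tissue area."""
    px_w = int(width_mm * 1000 / sampling_um)
    px_h = int(height_mm * 1000 / sampling_um)
    return px_w * px_h

pixels = wsi_pixels(15, 15, 0.5)      # 30,000 x 30,000 pixels
uncompressed_gb = pixels * 3 / 1e9    # 3 bytes per RGB pixel
print(f"{pixels:,} pixels, ~{uncompressed_gb:.1f} GB uncompressed")
```

Gigabyte-scale uncompressed sizes are why compression is unavoidable in WSI, and why the over-compression artifacts noted in the conclusion matter.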
Summary
The process of digital imaging in microscopy is a series of operations, each contributing to the quality of the final image displayed on the computer monitor. The operations include sample preparation and staining by histology, optical image formation by the microscope, digital image sampling by the camera sensor, postprocessing and compression, transmission across the network, and display on the monitor. There is an extensive literature on digital imaging, and each step of the process is fairly well understood. However, the complete process is very hard to standardize, or even to understand fully. The important concepts for pathology imaging standards are:
(1) systems should be able to share image files;
(2) the standards should allow the transmission of information on baseline colours and recommended display parameters;
(3) the images should be useful to the pathologist, not necessarily better or worse than direct examination of a slide under the microscope;
(4) a mechanism to evaluate image quality objectively should be present;
(5) a mechanism to adjust and correct the minor errors of tissue processing should be developed;
(6) a public organization should support pathologists in the development of standards.
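The idea that every stage of the chain contributes to final quality can be sketched as a product of per-stage quality factors. The six stage names follow the summary above; the numeric factors are purely invented for illustration and carry no empirical meaning:

```python
# Sketch: end-to-end image quality as a product of per-stage factors.
# Stage names follow the text; the factors are invented for illustration.

STAGES = {
    "histology (preparation and staining)": 0.95,
    "microscope optics":                    0.97,
    "camera sensor sampling":               0.96,
    "postprocessing and compression":       0.93,
    "network transmission":                 0.99,
    "monitor display":                      0.94,
}

quality = 1.0
for stage, factor in STAGES.items():
    quality *= factor

# Even modest per-stage losses compound across the full chain.
print(f"end-to-end quality factor: {quality:.2f}")
```

The point of the toy model is that no single stage dominates: small degradations at each step compound, which is why the complete process is harder to standardize than any individual step.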
Context
We define the scope and needs of the new discipline of computational pathology, a discipline critical to the future of both the practice of pathology and, more broadly, medical practice in general.

Objective
To define the scope and needs of computational pathology.

Data Sources
A meeting was convened in Boston, Massachusetts, in July 2014, prior to the annual Association of Pathology Chairs meeting; it was attended by a variety of pathologists, including individuals highly invested in pathology informatics as well as chairs of pathology departments.

Conclusions
The meeting made recommendations to promote computational pathology, including: clearly defining the field and articulating its value propositions; asserting that the value propositions for health care systems must include means to incorporate robust computational approaches that implement data-driven methods to aid in guiding individual and population health care; leveraging computational pathology as a center for data interpretation in modern health care systems; stating that realizing the value proposition will require working with institutional administrations, other departments, and pathology colleagues; declaring that a robust pipeline should be fostered to train and develop future computational pathologists from both pathology and non-pathology backgrounds; and deciding that computational pathology should serve as a hub for data-related research in health care systems. The dissemination of these recommendations to pathology and bioinformatics departments should help facilitate the development of computational pathology.
We evaluated a comprehensive deidentification engine at the University of Pittsburgh Medical Center (UPMC), Pittsburgh, PA, that uses a complex set of rules, dictionaries, pattern-matching algorithms, and the Unified Medical Language System to identify and replace identifying text in clinical reports while preserving medical information for sharing in research. In our initial data set of 967 surgical pathology reports, the software did not suppress outside (103), UPMC (47), and non-UPMC (56) accession numbers; dates (7); names (9) or initials (25) of case pathologists; or hospital or laboratory names (46). In 150 reports, some clinical information was suppressed inadvertently (overmarking). The engine retained eponymic patient names, eg, Barrett and Gleason. In the second evaluation (1,000 reports), the software did not suppress outside (90) or UPMC (6) accession numbers or names (4) or initials (2) of case pathologists. In the third evaluation, the software removed names of patients, hospitals (297/300), pathologists (297/300), transcriptionists, residents and physicians, dates of procedures, and accession numbers (298/300). By the end of the evaluation, the system was reliably and specifically removing safe-harbor identifiers and producing highly readable deidentified text without removing important clinical information. Collaboration between pathology domain experts and system developers and continuous quality assurance are needed to optimize ongoing deidentification processes.
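As a rough illustration of the rule- and pattern-matching component of such an engine, the sketch below replaces a few identifier patterns with safe-harbor tags. The regular expressions, tags, and sample report are invented for the example and are not the evaluated software's actual rules; a production system would also need dictionaries, UMLS lookups, and safeguards against both undermarking and the overmarking of eponyms such as Barrett or Gleason:

```python
import re

# Minimal sketch of rule-based deidentification by pattern matching.
# Patterns and tags are illustrative inventions, not the evaluated
# engine's rule set.

PATTERNS = [
    # Accession numbers like S24-12345 (letter prefix, year, sequence).
    (re.compile(r"\b[A-Z]{1,3}\d{2}-\d{3,6}\b"), "[ACCESSION]"),
    # Numeric dates like 3/14/2024.
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    # Physician names following a title.
    (re.compile(r"\bDr\.\s+[A-Z][a-z]+\b"), "[PHYSICIAN]"),
]

def deidentify(text: str) -> str:
    """Apply each pattern in turn, replacing matches with a tag."""
    for pattern, tag in PATTERNS:
        text = pattern.sub(tag, text)
    return text

report = "Specimen S24-12345 received 3/14/2024; reviewed by Dr. Smith."
print(deidentify(report))
# Specimen [ACCESSION] received [DATE]; reviewed by [PHYSICIAN].
```

The failure modes reported above (missed outside accession numbers, retained pathologist initials) are exactly what one expects when a fixed pattern set meets formats it was not written for, which is why the study stresses continuous quality assurance.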
A prototype content-based image retrieval system has been built employing a client/server architecture to access supercomputing power from the physician's desktop. The system retrieves images and their associated annotations from a networked microscopic pathology image database based on content similarity to user-supplied query images. Similarity is evaluated over four image feature types: color histogram, image texture, Fourier coefficients, and wavelet coefficients, using the vector dot product as a distance metric. Current retrieval accuracy varies across pathological categories depending on the number of available training samples and the effectiveness of the feature set. The distance measure of the search algorithm was validated by agglomerative cluster analysis in light of medical domain knowledge. Results show a correlation between pathological significance and the image-document distance value generated by the computer algorithm, and this correlation agrees with observed visual similarity. This validation method has an advantage over traditional statistical evaluation methods when the sample size is small and domain knowledge is important. A multidimensional scaling analysis shows the low-dimensional nature of the embedded space for the current test set.
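The retrieval step can be sketched briefly for one of the four feature types. The joint color histogram and the dot-product similarity follow the abstract's description; the random arrays, bin count, and L2 normalization are assumptions made for the example, not details of the prototype:

```python
import numpy as np

# Sketch of CBIR ranking: compare a query image to database images by
# the dot product of their normalized color histograms. Color histograms
# are one of the four feature types named in the text; the random arrays
# below are stand-ins for real pathology images.

def color_histogram(img: np.ndarray, bins: int = 8) -> np.ndarray:
    """Joint histogram over the three color channels, L2-normalized."""
    hist, _ = np.histogramdd(
        img.reshape(-1, 3),
        bins=(bins, bins, bins),
        range=[(0, 256)] * 3,
    )
    vec = hist.ravel()
    return vec / np.linalg.norm(vec)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Vector dot product of the two feature vectors (higher = closer)."""
    return float(np.dot(color_histogram(a), color_histogram(b)))

rng = np.random.default_rng(0)
query = rng.integers(0, 256, (64, 64, 3))
db = [rng.integers(0, 256, (64, 64, 3)) for _ in range(3)]

# Rank database images by similarity to the query, most similar first.
ranked = sorted(range(len(db)),
                key=lambda i: similarity(query, db[i]),
                reverse=True)
print(ranked)
```

Because the histograms are unit-normalized, the dot product is the cosine of the angle between feature vectors, so an image always has similarity 1.0 with itself; the full system would concatenate or combine the other three feature types before ranking.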