Background: Many scanners of glass slides are now on the market. The quality of the digital images they produce may differ, and pathologists who examine virtual slides on a monitor can only evaluate it subjectively. An objective comparison of the quality of digital slides captured by various devices, however, requires assessment algorithms that can be executed automatically.

Methods: In this work such an algorithm is proposed and implemented. It is dedicated to comparing the quality of virtual slides that show the same glass slide captured by two or more scanners. In the first step, the method looks for the largest corresponding areas in the slides; this task is realized by defining tissue boundaries and providing the relative scale factor. Then a certain number of smaller areas showing the same fragments of both slides is selected. The chosen fragments are analyzed using the Gray Level Co-occurrence Matrix (GLCM). For the GLCM matrices, some of the Haralick features, such as contrast and entropy, are calculated. Based on results for sample images, the features appropriate for quality assessment are chosen. Aggregating the values from all selected fragments allows the quality of images captured by the tested devices to be compared.

Results: The described method was tested on two sets of ten virtual slides, acquired by scanning the same set of ten glass slides with two different devices. The first set was scanned and digitized using the robotic microscope Axioscope2 (Zeiss) equipped with an AxioCam HRc CCD camera. The second set was scanned with DeskScan (Zeiss) with standard equipment. Before the captured virtual slides were analyzed, the images were stitched and converted using software that draws on advances in aerial and satellite imaging. The results of the experiment show that the calculated quality factors are higher for virtual slides acquired using the first device (Axioscope2 with AxioCam).

Conclusions: The test results are consistent with the opinion of the pathologists who assessed the quality of virtual slides captured by these devices. This shows that the method has potential for automatic evaluation of virtual slide quality.
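The GLCM-and-Haralick-features step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the gray-level count, the pixel offset, and the choice of contrast and entropy as the two features are assumptions made for the sketch.

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Normalized Gray Level Co-occurrence Matrix for one pixel offset."""
    # Quantize the grayscale image into `levels` bins.
    q = np.minimum((image.astype(float) / (image.max() + 1.0) * levels).astype(int),
                   levels - 1)
    m = np.zeros((levels, levels), dtype=float)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1.0
    return m / m.sum()  # joint probability distribution of gray-level pairs

def contrast(m):
    # Haralick contrast: weights co-occurrences by squared gray-level difference.
    i, j = np.indices(m.shape)
    return float(np.sum(m * (i - j) ** 2))

def entropy(m):
    # Haralick entropy: randomness of the gray-level pair distribution.
    p = m[m > 0]
    return float(-np.sum(p * np.log2(p)))
```

A sharper, better-focused capture of a fragment generally yields a higher GLCM contrast than a blurred capture of the same area, which is why such features can rank scanner output quality; aggregating (for example, averaging) a feature across all selected fragments gives a per-device score to compare.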
Background: Whole slide images (WSIs) used in medical education can provide new insights into how students view histological slides. We created a software infrastructure that tracks viewed WSI areas, used it during a practical exam in oral pathology, and analyzed the collected data to discover students' viewing behavior.

Methods: A view path tracking solution, which requires no specialized equipment, was implemented on a virtual microscopy software platform (WebMicroscope, Fimmic Ltd, Helsinki, Finland). Our method dynamically tracks view paths across the whole WSI area and all zoom levels, while collecting viewing behavior data centrally from many simultaneous WSI users. We used this approach during the exam to track how all students (N = 88) viewed WSIs (50 per student) when answering exam questions (with no time limit). About 74,000 records with information about subsequently displayed WSI areas were saved in the central database. The gathered data were processed and analyzed in multiple ways. Generated images and animations showed view fields and paths marked on WSI thumbnails, either for a single student or for multiple students answering the same question. A set of statistics was designed and implemented to automatically discover certain viewing patterns, especially across multiple students and WSIs. The calculated metrics included the average magnification level at which a WSI was displayed, the dispersion of view fields, the total viewing time, the total number of view fields, and a measure of how much a student focused on the diagnostic areas of a slide.

Results: The generated visualizations allowed us to visually discover some characteristic viewing patterns for selected questions and students. The calculated measures confirmed certain observations and enabled generalization of some findings across many students or WSIs. In most questions selected for the analysis, students answering incorrectly tended to view the slides longer and go through more view fields, which were also more dispersed, compared with students who answered the questions correctly.

Conclusions: The designed and implemented view path tracking proved to be a useful method for uncovering how students view WSIs during an exam in oral pathology. The proposed analysis methods, which include visualizations and automatically calculated statistics, were successfully used to discover viewing patterns.

Virtual slides: The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/13000_2014_208

Electronic supplementary material: The online version of this article (doi:10.1186/s13000-014-0208-6) contains supplementary material, which is available to authorized users.
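The per-student, per-slide metrics named above (average magnification, view-field dispersion, total viewing time, number of view fields) could be aggregated from tracking records roughly as follows. The record layout is hypothetical, not the WebMicroscope platform's actual schema, and the dispersion measure here (mean distance of view-field centers from their centroid) is one possible choice among several.

```python
from dataclasses import dataclass
from statistics import fmean

@dataclass
class ViewRecord:
    # Hypothetical record layout; the study's real database schema is not shown here.
    x: float           # view-field center, slide coordinates
    y: float
    magnification: float
    duration_s: float  # how long this view field stayed on screen

def viewing_metrics(records):
    """Per-student, per-slide aggregates of the kind described in the abstract."""
    total_time = sum(r.duration_s for r in records)
    avg_mag = fmean(r.magnification for r in records)
    # Dispersion: mean distance of view-field centers from their centroid.
    cx, cy = fmean(r.x for r in records), fmean(r.y for r in records)
    dispersion = fmean(((r.x - cx) ** 2 + (r.y - cy) ** 2) ** 0.5 for r in records)
    return {"fields": len(records), "total_time_s": total_time,
            "avg_magnification": avg_mag, "dispersion": dispersion}
```

With such aggregates in hand, comparing the groups of correct and incorrect answers reduces to comparing distributions of these numbers.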
The way whole slide images (WSIs) are viewed can be tracked and analyzed. In particular, it can be useful to learn how medical students view WSIs during exams and how their viewing behavior correlates with the correctness of the answers they give. We used a software-based view path tracking method that enabled gathering data about the viewing behavior of multiple simultaneous WSI users. This approach was implemented and applied during two practical exams in oral pathology, in 2012 (88 students) and 2013 (91 students), which were based on questions with attached WSIs. The gathered data were visualized and analyzed in multiple ways. As part of an extended analysis, we tried machine learning approaches to predict the correctness of students' answers based on how they viewed WSIs. We compared the results of the analyses for 2012 and 2013, performed for a single question, for student groups, and for a set of questions. The overall patterns were generally consistent across both years. Moreover, the viewing behavior data showed some potential for predicting answer correctness, and some outcomes of the machine learning approaches pointed in the right direction. However, the general prediction results were not satisfactory in terms of precision and recall. Our work confirmed that the view path tracking method is useful for discovering the viewing behavior of students analyzing WSIs. It provided multiple useful insights in this area, and the general results of our analyses were consistent across the two exams. On the other hand, predicting answer correctness proved to be a difficult task; students' answers often seem to be unpredictable.
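Since the abstract evaluates prediction quality by precision and recall, the evaluation can be made concrete with a small sketch. The threshold-on-viewing-time predictor below is a naive illustrative baseline invented for this sketch, not one of the study's actual machine learning models, and the threshold value is arbitrary.

```python
def precision_recall(y_true, y_pred):
    """Precision and recall for binary labels (here True = answered incorrectly)."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))          # correctly flagged
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))    # false alarms
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))    # missed cases
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def predict_incorrect(viewing_times_s, threshold_s=120.0):
    # Naive baseline motivated by the earlier finding that incorrect answers
    # tended to come with longer viewing times; threshold is an assumption.
    return [t > threshold_s for t in viewing_times_s]
```

Even when such a baseline has high recall, its precision typically suffers, which mirrors the abstract's observation that viewing behavior alone did not predict correctness satisfactorily.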
Background: Making an automatic diagnosis based on virtual slides and whole slide imaging, or even determining whether a case belongs to a single class representing a specific disease, is a big challenge. In this work we focus on the WHO Classification of Tumours of the Central Nervous System. We design a method that automatically distinguishes virtual slides containing a histopathologic pattern characteristic of glioblastoma, pseudopalisading necrosis, and discriminates cases of neurinoma (schwannoma), which contain similar structures: palisades (Verocay bodies). Methods: Our method is based on computer vision approaches such as structural analysis and shape descriptors. We start with image segmentation of a virtual slide, find specific patterns, and use a set of features that describe pseudopalisading necrosis and distinguish it from palisades. The type of structures found in a slide determines its classification. Results: The described method was tested on a set of 49 virtual slides captured with a robotic microscope. The results show that 82% of glioblastoma cases and 90% of neurinoma cases were correctly identified by the proposed algorithm. Conclusion: Our method is a promising approach to automatic detection of nervous system tumors using virtual slides.
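One classic shape descriptor of the kind the methods section mentions is circularity, which separates compact regions from elongated, band-like ones. The sketch below is an illustration of the descriptor itself, not the paper's actual feature set; which descriptor values correspond to pseudopalisading necrosis versus Verocay bodies is a domain question the sketch does not answer.

```python
import math
import numpy as np

def circularity(mask):
    """4*pi*area / perimeter**2: close to 1 for compact blobs, small for elongated ones."""
    mask = mask.astype(bool)
    area = int(mask.sum())
    p = np.pad(mask, 1)
    # Interior pixels have all four 4-neighbours inside the region; the rest
    # of the foreground pixels approximate the region's perimeter.
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    perimeter = int((mask & ~interior).sum())
    return 4.0 * math.pi * area / perimeter ** 2
```

Applied to segmented regions, thresholds on such descriptors can drive the structure-type decision that the abstract says determines a slide's classification.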