Artificial Intelligence (AI) has shown promising performance as a support tool in clinical pathology workflows. In addition to the well-known interobserver variability between dermatopathologists, melanomas present a significant challenge in their histological interpretation. This study aims to analyze all previously published studies on whole-slide images of melanocytic tumors that rely on deep learning techniques for automatic image analysis. Embase, Pubmed, Web of Science, and Virtual Health Library were searched for relevant studies for the systematic review, in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist. Articles from 2015 to July 2022 were included, with emphasis on the artificial intelligence methods used. Twenty-eight studies that fulfilled the inclusion criteria were grouped into four groups based on their clinical objectives: pathologists versus deep learning models (n = 10), diagnostic prediction (n = 7), prognosis (n = 5), and histological features (n = 6). These were then analyzed to draw conclusions on the general parameters and conditions of AI in pathology, as well as the factors necessary for better performance in real scenarios.
Abstract: The presence of surveillance systems in our lives has drastically increased in recent years. Camera networks can be seen in almost every crowded public and private place, generating huge amounts of data with valuable information. The automatic analysis of these data plays an important role in extracting relevant information from the scene. In particular, person re-identification is a prominent topic that has attracted great interest, especially in the fields of security and marketing. However, several factors, such as changes in illumination conditions, variations in person pose, occlusions, and the presence of outliers, make this topic really challenging. Fortunately, the recent introduction of new technologies such as depth cameras opens new paradigms in the image processing field and brings new possibilities. This Thesis proposes a complete new framework to tackle the problem of person re-identification using commercial RGB-depth cameras. This work includes the analysis and evaluation of new approaches for the segmentation, tracking, description, and matching modules. To evaluate our contributions, a public dataset for person re-identification using RGB-depth cameras has been created. RGB-depth cameras provide accurate 3D point clouds with color information. Based on the analysis of the depth information, a novel algorithm for person segmentation is proposed and evaluated. This method accurately segments any person in the scene and naturally copes with occlusions and connected people. The segmentation mask of a person generates a 3D person cloud, which can be easily tracked over time based on proximity. The accumulation of all the person point clouds over time generates a set of high-dimensional color features, named raw features, that provide useful information about the person's appearance. In this Thesis, we propose a family of methods to extract relevant information from the raw features in different ways.
The first approach compacts the raw features into a single color vector, named Bodyprint, which provides a good generalisation of the person's appearance over time. Second, we introduce the concept of 3D Bodyprint, an extension of the Bodyprint descriptor that includes the angular distribution of the color features. Third, we characterise the person's appearance as a bag of color features that are independently generated over time. This descriptor is named Bag of Appearances because of its similarity to the concept of Bag of Words. Finally, we use different probabilistic latent variable models to reduce the feature vectors from a statistical perspective. The evaluation of the methods demonstrates that our proposals outperform the state of the art.
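The abstract above does not give implementation details for the proximity-based tracking or the Bodyprint compaction. As a minimal illustrative sketch (assuming NumPy arrays of 3D points with per-point RGB colors in [0, 1], and treating the y-axis as vertical; function names and parameters here are hypothetical, not from the thesis), the two ideas might look like:

```python
import numpy as np

def track_by_proximity(prev_centroids, clouds, max_dist=0.5):
    """Greedy proximity tracking: assign each segmented person cloud to
    the nearest previous track centroid, or open a new track when no
    centroid lies within max_dist. Returns {track_id: centroid}."""
    tracks, used = {}, set()
    next_id = max(prev_centroids, default=-1) + 1
    for cloud in clouds:
        c = cloud.mean(axis=0)                    # centroid of this cloud
        best_id, best_d = None, max_dist
        for tid, prev_c in prev_centroids.items():
            d = np.linalg.norm(c - prev_c)
            if tid not in used and d < best_d:
                best_id, best_d = tid, d
        if best_id is None:                       # no nearby track: new person
            best_id, next_id = next_id, next_id + 1
        used.add(best_id)
        tracks[best_id] = c
    return tracks

def bodyprint(points, colors, n_bands=10):
    """Compact an accumulated colored point cloud into a single vector:
    mean RGB per horizontal height band, flattened to (n_bands * 3,)."""
    heights = points[:, 1]                        # assume y is 'up'
    lo, hi = heights.min(), heights.max()
    bands = ((heights - lo) / (hi - lo + 1e-9) * n_bands).astype(int)
    bands = np.clip(bands, 0, n_bands - 1)
    desc = np.zeros((n_bands, 3))
    for b in range(n_bands):
        mask = bands == b
        if mask.any():
            desc[b] = colors[mask].mean(axis=0)   # mean color of this band
    return desc.ravel()
```

At matching time, two such vectors would be compared with, e.g., a Euclidean or cosine distance; the thesis descriptors (3D Bodyprint, Bag of Appearances) add angular and temporal structure on top of this basic idea.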
Background: Digital pathology has significantly impacted the cancer diagnosis field, with Content-Based Medical Image Retrieval (CBMIR) emerging as a powerful tool for analyzing histopathological Whole Slide Images (WSIs). CBMIR allows users to search a database for content similar to a query, providing pathologists with access to collections of cases with comparable features. This can improve the reliability of diagnostic references and help in making more accurate and timely diagnoses. Objective: In 2020, the Global Cancer Observatory (GCO) reported that breast cancer is the most prevalent cancer type in both men and women, accounting for 11.7% of all cases, while prostate cancer is the second most common cancer type in men, comprising 14.1% of cases. The aim of the proposed Unsupervised CBMIR (UCBMIR) is to replicate the traditional cancer diagnosis workflow and provide a dependable method for supporting pathologists when drawing diagnostic conclusions from WSIs. By reducing the workload of pathologists, this approach could potentially enhance the accuracy and efficiency of cancer diagnosis. Method and results: The study presents an innovative approach to address the lack of labeled histopathological images in CBMIR. A customized unsupervised Convolutional Autoencoder (CAE) was developed to extract 200 features per image, which were then used by the search engine component. The proposed UCBMIR was evaluated using two numerical techniques widely used in CBMIR, as well as visual evaluation, and was compared with a classifier to determine whether retrieved images belong to the same cancer type as the query. Validation was conducted on three distinct datasets, with an external evaluation to demonstrate effectiveness. The UCBMIR outperformed previous studies, achieving top-5 recall of 99% and 80% on BreaKHis and SICAPv2, respectively, using the first evaluation technique.
Using the second evaluation technique, UCBMIR achieved precision rates of 91% and 70% on BreaKHis and SICAPv2, respectively. Moreover, when trained on SICAPv2 and tested on an external image from Arvaniti, UCBMIR identified various patterns in patches and achieved a top-5 accuracy of 81% with the second evaluation technique.
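The retrieval step over the CAE feature vectors and the top-k scores reported above can be sketched as follows. This is an illustrative NumPy-only sketch under common CBMIR conventions; the abstract does not specify the paper's actual distance measure or evaluation protocol, and these function names are hypothetical:

```python
import numpy as np

def retrieve_top_k(query_feat, gallery_feats, k=5):
    """Rank gallery images by Euclidean distance to the query feature
    (e.g. a 200-dim code from a CAE encoder); return top-k indices."""
    dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
    return np.argsort(dists)[:k]

def recall_at_k(retrieved_labels, query_label, k=5):
    """1 if any of the top-k retrieved labels matches the query label,
    else 0; averaged over queries this yields a top-k recall."""
    return int(query_label in retrieved_labels[:k])

def precision_at_k(retrieved_labels, query_label, k=5):
    """Fraction of the top-k retrieved labels matching the query label."""
    top = retrieved_labels[:k]
    return sum(lab == query_label for lab in top) / len(top)
```

In this setup, labels are only needed at evaluation time (to check whether a retrieved image shares the query's cancer type), which is consistent with the unsupervised framing: the feature extractor itself never sees them.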