Background. The image-based identification of distinct tissues within dermatological wounds enhances patient care, since it requires no intrusive evaluations. This manuscript presents an approach, named QTDU, that combines deep learning models with superpixel-driven segmentation methods for assessing the quality of tissues in dermatological ulcers. Method. QTDU consists of a three-stage pipeline covering ulcer segmentation, tissue labeling, and wounded-area quantification. We set up our approach using a real, annotated set of dermatological ulcers for training several deep learning models to identify ulcered superpixels. Results. Empirical evaluations on 179,572 superpixels divided into four classes showed QTDU accurately spots wounded tissues (AUC = 0.986, sensitivity = 0.97, and specificity = 0.974) and outperforms machine-learning approaches by up to 8.2% in F1-Score through fine-tuning of a ResNet-based model. Last, but not least, experimental evaluations also showed QTDU correctly quantified wounded tissue areas within a 0.089 Mean Absolute Error ratio. Conclusions. Results indicate QTDU is effective for both the tissue segmentation and the wounded-area quantification tasks. Compared to existing machine-learning approaches, the combination of superpixels and deep learning models outperformed the competitors at strong significance levels.

[…] can be automatically evaluated by Computer-Aided Diagnosis (CAD) tools, or even used for searching massive databases through content-only queries, as in Content-Based Image Retrieval (CBIR) applications.
In both CAD and CBIR cases, the detection of abnormalities requires the extraction of patterns from images, while a decision-making strategy is necessary for juxtaposing new images to those in the database [4,5]. Since dermatological lesions are routinely diagnosed by biopsies and surrounding skin aspects, ulcers can be computationally characterized by particular types of tissues (and their areas) within the wounded region [6,7]. For instance, Mukherjee et al. [8] proposed a five-color classification model and applied a color-based low-level extractor further labeled by a Support-Vector Machine (SVM) strategy at an 87.61% hit ratio. This idea of concatenating feature extraction and classification is found at the core of most wound-segmentation strategies, as in the study of Kavitha et al. [9], which evaluated leg ulcerations by extracting patterns based on local spectral histograms, labeled by a Multi-Layer Perceptron (MLP) classifier with 87.05% accuracy. Analogously, Pereyra et al. [10] discussed the use of color descriptors and an Instance-based Learning (IbL) classifier with a 61.7% hit ratio, whereas Veredas et al. [11] suggested the use of texture descriptors and an MLP classifier with 84.84% accuracy. Blanco et al. [4] and Chino et al. [12] followed a slightly different premise for finding proper similarity measures and comparison criteria for dermatological wounds. Their approaches are based on a divide-and-conquer stra...
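The "feature extraction followed by classification" pattern shared by the studies above can be sketched in a few lines. The patch statistics and nearest-centroid rule below are hypothetical stand-ins for the cited color/texture descriptors and SVM/MLP learners; this is a minimal illustration, not any paper's actual pipeline.

```python
import numpy as np

def patch_features(patch):
    """Toy low-level color descriptor: per-channel mean and std of an RGB patch
    (a stand-in for the descriptors used in the cited works)."""
    return np.concatenate([patch.mean(axis=(0, 1)), patch.std(axis=(0, 1))])

def nearest_centroid_fit(X, y):
    """Learn one centroid per class in feature space."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(X, classes, centroids):
    """Label each sample with the class of its closest centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# Synthetic patches mimicking two tissue colors: granulation (reddish)
# and fibrin (yellowish), with Gaussian pixel noise.
rng = np.random.default_rng(0)
red = rng.normal([200, 60, 60], 15, size=(50, 8, 8, 3))
yellow = rng.normal([210, 190, 70], 15, size=(50, 8, 8, 3))
X = np.array([patch_features(p) for p in np.concatenate([red, yellow])])
y = np.array([0] * 50 + [1] * 50)

classes, centroids = nearest_centroid_fit(X, y)
accuracy = (nearest_centroid_predict(X, classes, centroids) == y).mean()
```

Swapping the descriptor or the classifier changes the hit ratio, which is exactly the design space the cited studies explore.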
Abstract: Social media could provide valuable information to support decision making in crisis management, such as in accidents, explosions, and fires. However, much of the data from social media are images, which are uploaded at a rate that makes it impossible for human beings to analyze them. Despite the many works on image analysis, there are no fire detection studies on social media. To fill this gap, we propose the use and evaluation of a broad set of content-based image retrieval and classification techniques for fire detection. Our main contributions are: (i) the development of the Fast-Fire Detection method (FFireDt), which combines feature extractors and evaluation functions to support instance-based learning; (ii) the construction of an annotated set of images with ground truth depicting fire occurrences (the Flickr-Fire dataset); and (iii) the evaluation of 36 efficient image descriptors for fire detection. Using real data from Flickr, our results showed that FFireDt was able to achieve a precision for fire detection comparable to that of human annotators. Therefore, our work shall provide a solid basis for further developments on monitoring images from social media.
Abstract. Social media can provide valuable information to support decision making in crisis management, such as in accidents, explosions, and fires. However, much of the data from social media are images, which are uploaded at a rate that makes it impossible for human beings to analyze them. To cope with that problem, we design and implement a database-driven architecture for fast and accurate fire detection named FFireDt. The design of FFireDt uses instance-based learning through indexed similarity queries expressed as an extension of the relational Structured Query Language. Our contributions are: (i) the design of Fast-Fire Detection (FFireDt), which achieves efficiency and efficacy rates that rival the state-of-the-art techniques; (ii) the sound evaluation of 36 image descriptors for the task of image classification in social media; (iii) the evaluation of content-based indexing with respect to the construction of instance-based classification systems; and (iv) the curation of a ground-truth annotated dataset of fire images from social media. Using real data from Flickr, the experiments showed that the FFireDt system was able to achieve a precision for fire detection comparable to that of human annotators. Our results are promising for the engineering of systems to monitor images uploaded to social media services.
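Instance-based learning through similarity queries, as used by FFireDt, amounts to answering a k-nearest-neighbor query over stored descriptors and voting among the neighbors. The in-memory search and the toy three-bin color descriptors below are assumptions for illustration; the actual system runs indexed queries inside a relational engine.

```python
import numpy as np

def knn_label(query, db_feats, db_labels, k=3):
    """Answer a k-nearest-neighbor similarity query over stored feature
    vectors and label the query by majority vote among the k neighbors."""
    dists = np.linalg.norm(db_feats - query, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(db_labels[nearest], return_counts=True)
    return labels[counts.argmax()]

# Toy 3-bin "color descriptors": fire-like images skew toward the warm bin.
db_feats = np.array([[0.80, 0.10, 0.10], [0.70, 0.20, 0.10], [0.90, 0.05, 0.05],
                     [0.20, 0.40, 0.40], [0.10, 0.50, 0.40], [0.15, 0.45, 0.40]])
db_labels = np.array(["fire", "fire", "fire",
                      "not-fire", "not-fire", "not-fire"])

label = knn_label(np.array([0.75, 0.15, 0.10]), db_feats, db_labels, k=3)
```

The choice of descriptor (one of the 36 evaluated) and of the distance function determines which neighbors are retrieved, and therefore the classification quality.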
Content-Based Image Retrieval (CBIR) systems have been increasingly used in many image processing and analysis applications, for two reasons: CBIR is a procedure that can be performed automatically, making it possible to handle the large volume of images acquired in hospitals, and it is also the basis for similarity query processing. In the medical context, such systems assist in several tasks, from the training of professionals to Computer-Aided Diagnosis (CAD) systems. A computational system capable of comparing and classifying images obtained from patient examinations against a prior knowledge base could speed up care for the population and quickly and simply provide specialists with relevant information. This work focuses on the analysis of images of venous ulcers. Two techniques were developed for classifying these images. The first, named Counting-Labels Similarity Measure (CL-Measure), has the advantage of handling images segmented automatically into superpixels, and is versatile enough to be adapted to other domains. The main idea of CL-Measure is to create sub-images based on a previous classification, compute the distance between them, and aggregate the resulting partial distances with an appropriate function. The second technique, named Quality of Tissues from Dermatological Ulcers (QTDU), uses convolutional neural networks (CNNs) to label the superpixels, with the advantage of encompassing the entire feature-identification and classification process, dispensing with the need to identify the most suitable feature extractor for the context at hand.
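The CL-Measure idea described above can be sketched roughly as follows. The per-label centroid summary, the Euclidean partial distance, and the mean aggregation are assumptions for illustration; the actual per-label distance and aggregation function are design choices of the thesis and are not specified here.

```python
import numpy as np

def cl_measure(feats_a, labels_a, feats_b, labels_b, agg=np.mean):
    """Compare two segmented images label by label: summarize each tissue
    label by the centroid of its superpixel features, take the distance
    between matching labels, and aggregate the partial distances with agg."""
    common = np.intersect1d(np.unique(labels_a), np.unique(labels_b))
    partials = []
    for c in common:
        centroid_a = feats_a[labels_a == c].mean(axis=0)
        centroid_b = feats_b[labels_b == c].mean(axis=0)
        partials.append(np.linalg.norm(centroid_a - centroid_b))
    return agg(partials) if len(partials) else float("inf")

# Two images given as superpixel feature vectors plus tissue labels (0/1).
feats_a = np.array([[1.00, 0.00], [0.90, 0.10], [0.10, 0.90]])
labels_a = np.array([0, 0, 1])
feats_b = np.array([[0.95, 0.05], [0.00, 1.00]])
labels_b = np.array([0, 1])

dist = cl_measure(feats_a, labels_a, feats_b, labels_b)
```

Because the comparison is organized by label, images whose tissues look alike within each class come out close even when the segmentations partition the images differently.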
Experiments on the analyzed image dataset, using 179,572 superpixels divided into four classes, indicate that QTDU is the most effective approach to date for the classification of dermatological images, with average AUC = 0.986, sensitivity = 0.97, and specificity = 0.974, outperforming previous machine-learning approaches by 11.7% and 8.2% in terms of the Kappa coefficient and F-Measure, respectively.
Abstract-This study presents an analysis of classification techniques for Computer-Aided Diagnosis (CAD) of ulcerated lesions. We focus on determining the influence of both color and texture in automated image classification and its implications. To do so, we assayed a dataset of dermatological ulcers containing five variations in terms of the tissue composition of the lesioned skin: granulation (red), fibrin (yellow), callous (white), necrotic (black), and a mix of the previous variations (mixed). Every image was previously labeled by experts according to this red-yellow-black-white-mixed model. We employed specially designed color and texture extractors to represent the dataset images, namely: Color Layout, Color Structure, Scalable Color, Edge Histogram, Haralick, and Texture-Spectrum. The first three are color feature extractors and the last three are texture extractors. Next, we employed the SymmetricalUncertAttributeEval method to determine the features suitable for image classification. We tested a set of classifiers following distinct paradigms over the selected features, achieving an accuracy of up to 77% in terms of images correctly classified, with an area under the receiver operating characteristic (ROC) curve of up to 0.84. The classification performance and the selected features enabled us to determine that texture features predominated over color throughout the classification process.
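The evaluation figures quoted across these studies (sensitivity, specificity, F1-Score) all derive from the binary confusion matrix; a plain sketch of that bookkeeping, with made-up labels for illustration:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and F1-score from 0/1 label arrays."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))  # true positives
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))  # true negatives
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))  # false positives
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))  # false negatives
    sensitivity = tp / (tp + fn)                     # a.k.a. recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 1])
sens, spec, f1 = binary_metrics(y_true, y_pred)
```

AUC additionally requires the classifier's scores rather than hard labels, since it sweeps the decision threshold over the ROC curve.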
Study question Is it possible to remove cumulus cells using a 16-well microfluidic device with automated flows to facilitate vitrification, ICSI, NI-PGT, or non-invasive metabolomics analysis? Summary answer The designed automated system and protocol efficiently denude 16 samples simultaneously, with roughly 10× lower shear stress than the manual process and without human intervention. What is known already Most processes involved in IVF, such as insemination, washing, denudation, embryo culture, and selection, are still manually performed, labor-intensive, and require highly skilled professionals. This leads to significant variability in the clinical outcomes achieved by different embryologists and labs. The automation of these processes is a promising approach to reduce costs and improve the accessibility of assisted reproductive therapies. Although a simple procedure, standardization of cumulus-oocyte complex (COC) and zygote denudation is key to facilitating ICSI and vitrification and to avoiding DNA contamination in non-invasive embryo testing (PGT or metabolomics), while avoiding damage to the oocyte from excessive shear stress. Study design, size, duration A total of 160 cow COCs were used due to their size similarity with human COCs. Half were denuded 16–20 hours post-insemination and half pre-insemination for 5–10 minutes. COCs were classified as partially denuded if fertilization assessment, ICSI, or vitrification was possible, and completely denuded if no cumulus cells remained. Control COCs were manually denuded (Stripper® capillary, 145 μm ID) to compare shear stress between procedures. This study was conducted during 2020–2021. Participants/materials, setting, methods We developed a customized microfluidic biochip that exerts a particular fluid motion while avoiding egg entrapment within the microfluidic channels.
The denudation efficacy was established by subjectively scoring images of bovine oocytes after generating a continuous "Push & Pull" fluid motion inside the biochip wells. A Computer Vision model was developed in parallel to optically assess denudation completion; the model was a PyTorch implementation of Faster R-CNN with ImageNet-pretrained weights. Main results and the role of chance 96 bovine COCs were microfluidically handled post-insemination, achieving complete (56/96) or partial (40/96) removal of the cumulus cells on day 1, while in the day-3 double-denudation group, 89/96 (92.7%) were completely denuded and the rest remained partially denuded. In comparison, 80/80 (100%) of manually denuded cow COCs achieved complete denudation (50% post-insemination group and 50% pre-insemination group). In addition, 48/64 (75%) of cow COCs treated pre-insemination were partially denuded, enough to carry out ICSI after 5–10 min of treatment. The results obtained here indicate that media needs to flow through the device at a rate that generates enough shear to strip off the cumulus-corona cells while avoiding emptying the reservoirs containing the fertilization or culture medium. The shear stress of our design was calculated to be smaller than 4.4 Pa, about ten times lower than that applied by the manual process (∼44 Pa). The deep learning algorithm was tested on 20 unseen human oocytes on day 1, with 10 true positives, 9 true negatives, and 1 false negative (95% accuracy). Limitations, reasons for caution The success of the denudation procedure depended on the design of the biochip wells and the microfluidic protocol used. The accuracy of our findings is still limited by the difficulty of manufacturing prototype biochips. Wider implications of the findings Complete denudation is key to avoiding DNA contamination in NI-PGT or metabolomics analysis, while avoiding damage to the oocyte from excessive shear stress.
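The 95% accuracy quoted for the deep learning test follows directly from the reported counts: 10 true positives, 9 true negatives, and 1 false negative out of 20 oocytes, which implies zero false positives.

```python
# Counts reported for the 20 held-out human oocytes (fp inferred as 0,
# since tp + tn + fn already account for all 20 samples).
tp, tn, fn, fp = 10, 9, 1, 0
accuracy = (tp + tn) / (tp + tn + fn + fp)  # 19/20
```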
Our device, which has the potential to scale up and treat each oocyte individually, can improve automation and increase the efficiency of current ART procedures. Trial registration number NA