Eye-fundus photography is one of the most common modalities for examining the human eye. Fundus photographs are evaluated by medical experts through time-consuming visual inspection, and our aim is to accelerate this process with computer-aided diagnosis. As a first step, the structures in the images must be segmented for tissue differentiation. Because the eye is the only organ in which the vasculature can be imaged in vivo and noninterventionally without expensive scanners, the vessel tree is one of the most interesting and important structures to analyze. The quality and resolution of fundus images are increasing rapidly, so segmentation methods must be adapted to the challenges of high-resolution data. In this paper, we present a method that reduces computation time, achieves high accuracy, and increases sensitivity compared with the original Frangi method; it also includes measures to avoid potential problems such as specular reflections on thick vessels. The proposed method is evaluated on the STARE and DRIVE databases, and we introduce a new high-resolution fundus database for comparison against state-of-the-art algorithms. The results show an average accuracy above 94% with low computational cost, outperforming state-of-the-art methods.
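Since the method builds on Frangi's multiscale vesselness filter, a minimal sketch of the baseline filter may be helpful. It uses scikit-image's frangi implementation; the file name, scale range, and threshold are illustrative assumptions, not the paper's actual parameters.

```python
# Minimal sketch of baseline Frangi vessel segmentation on a fundus image.
# File name, sigma range, and threshold are illustrative assumptions.
import numpy as np
from skimage import io
from skimage.filters import frangi

# Load an RGB fundus photograph; the green channel typically offers the
# best vessel-to-background contrast.
rgb = io.imread("fundus.png")  # placeholder path
green = rgb[..., 1].astype(np.float64) / 255.0

# Vesselness is computed over a range of Gaussian scales (sigmas) so that
# both thin and thick vessels respond; black_ridges=True because vessels
# appear darker than the retinal background.
vesselness = frangi(green, sigmas=np.arange(1, 8, 1), black_ridges=True)

# A simple global threshold yields a binary vessel map; the paper's method
# adds further steps (e.g., handling specular reflections on thick vessels)
# that are not reproduced here.
vessel_mask = vesselness > 0.15
```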
Oral squamous cell carcinoma (OSCC) is a common cancer of the oral epithelium. Despite its high mortality, screening methods for early diagnosis of OSCC often lack accuracy, so OSCCs are mostly diagnosed at a late stage. Early detection and accurate outline estimation of OSCCs would improve curative outcomes and reduce recurrence rates after surgical treatment. Confocal laser endomicroscopy (CLE) records sub-surface micro-anatomical images for in vivo analysis of cell structure, and recent CLE studies have shown great promise for reliable, real-time ultrastructural imaging of OSCC in situ. We present and evaluate a novel automatic approach to OSCC diagnosis that applies deep learning to CLE images, and compare it against the textural-feature-based machine learning approaches that represent the current state of the art. For this work, CLE image sequences (7,894 images) from patients diagnosed with OSCC were obtained from four specific locations in the oral cavity, including the OSCC lesion. The proposed approach outperforms the state of the art in CLE image recognition, with an area under the curve (AUC) of 0.96 and a mean accuracy of 88.3% (sensitivity 86.6%, specificity 90%).
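The headline numbers (AUC, accuracy, sensitivity, specificity) can be computed from per-image predictions as in the sketch below, which uses synthetic labels and scores; the 0.5 decision threshold is an assumption, not taken from the paper.

```python
# Minimal sketch: evaluating a binary CLE classifier with the metrics quoted
# above. Labels and scores are synthetic placeholders for per-image
# ground truth (1 = OSCC) and predicted carcinoma probabilities.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                           # synthetic labels
y_score = np.clip(y_true * 0.6 + rng.random(200) * 0.5, 0, 1)   # synthetic scores

auc = roc_auc_score(y_true, y_score)

# Binarize at 0.5 (an assumed threshold) and derive the remaining metrics.
y_pred = (y_score >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
```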
Chest X-ray is the most common medical imaging exam and is used to assess multiple pathologies. Automated algorithms and tools have the potential to support the reading workflow, improve efficiency, and reduce reading errors. With the availability of large-scale data sets, several methods have been proposed to classify pathologies on chest X-ray images. However, most report performance based on random image-level splits, ignoring the high probability of the same patient appearing in both the training and test sets. In addition, most methods fail to explicitly incorporate the spatial information of abnormalities or to exploit high-resolution images. We propose a novel approach based on location-aware Dense Networks (DNetLoc) that incorporates both high-resolution image data and spatial information for abnormality classification. We evaluate our method on the largest data set reported in the community, containing a total of 86,876 patients and 297,541 chest X-ray images. We achieve (i) the best average AUC score for the published training and test splits of the single benchmarking data set (ChestX-Ray14 [1]) and (ii) improved AUC scores when pathology location information is used explicitly. To foster future research, we demonstrate the limitations of the current benchmarking setup [1] and provide new patient-wise reference splits for the data sets used, supporting consistent and meaningful benchmarking of future methods on the largest publicly available data sets.
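The patient-wise splitting issue raised here can be addressed by grouping images by patient before splitting. The sketch below assumes a toy metadata schema (image_id, patient_id, label), not the actual ChestX-Ray14 format, and uses scikit-learn's GroupShuffleSplit.

```python
# Minimal sketch: a patient-wise train/test split so that no patient
# contributes images to both sets. The DataFrame schema is illustrative.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

df = pd.DataFrame({
    "image_id":   ["a1", "a2", "b1", "c1", "c2", "d1"],
    "patient_id": ["A",  "A",  "B",  "C",  "C",  "D"],
    "label":      [1,    0,    1,    0,    1,    0],
})

# Group by patient so each patient's images fall entirely on one side.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(df, groups=df["patient_id"]))

train_df, test_df = df.iloc[train_idx], df.iloc[test_idx]
assert set(train_df["patient_id"]).isdisjoint(test_df["patient_id"])
```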
A global benchmark of algorithms for segmenting the left atrium from late gadolinium-enhanced cardiac magnetic resonance imaging (repository copy; no abstract available).
Robust image registration in medical imaging is essential for the comparison or fusion of images acquired from various perspectives, in different modalities, or at different times. Typically, an objective function is minimized under specific a priori deformation models with predefined or learned similarity measures, but such approaches struggle with large deformations or large variability in appearance. Modern deep learning (DL) methods with automated feature design could resolve these limitations by learning the intrinsic mapping solely from experience. In this paper we investigate how DL can help organ-specific (ROI-specific) deformable registration, for instance to solve motion compensation or atlas-based segmentation problems in prostate diagnosis. An artificial agent is trained to solve the non-rigid registration task by exploring the parametric space of a statistical deformation model built from training data. Since trustworthy ground-truth deformation fields are difficult to extract, we present a training scheme that uses a large number of synthetically deformed image pairs and requires only a small number of real inter-subject pairs. Our approach was tested on inter-subject registration of prostate MR data and reached a median Dice score of 0.88 in 2-D and 0.76 in 3-D, improving on state-of-the-art registration algorithms.
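For reference, the Dice score used to report these results is the standard overlap measure between two binary masks; a minimal sketch, assuming the warped source and target segmentations are given as boolean arrays:

```python
# Minimal sketch: Dice overlap between two binary segmentation masks,
# e.g., a warped source label map and the target label map.
import numpy as np

def dice_score(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice coefficient 2|A & B| / (|A| + |B|), in [0, 1]."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```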
Purpose: To examine outer retinal band changes after a flash stimulus and subsequent dark adaptation with ultrahigh-resolution optical coherence tomography (UHR-OCT).

Methods: Five dark-adapted left eyes of five normal subjects were imaged with 3-μm axial-resolution UHR-OCT during 30 minutes of dark adaptation following 96%, 54%, 23%, and 0% full-field and 54% half-field rhodopsin bleach. We identified the ellipsoid zone inner segment/outer segment (EZ[IS/OS]), cone interdigitation zone (CIZ), rod interdigitation zone (RIZ), retinal pigment epithelium (RPE), and Bruch's membrane (BM) axial positions and generated two-dimensional thickness maps from the EZ(IS/OS) to the four bands. The average thickness over an area of the thickness map was compared against that of the dark-adapted baselines. The time-dependent thickness changes (photoresponses) were statistically compared against 0% bleach. Dark adaptometry was performed with the same bleaching protocol.

Results: The EZ(IS/OS)-CIZ photoresponse was significantly different at 96% (P < 0.0001) and 54% (P = 0.006) bleach. At all three bleaching levels, the EZ(IS/OS)-RIZ, -RPE, and -BM responses were significantly different (P < 0.0001). The EZ(IS/OS)-CIZ and EZ(IS/OS)-RIZ time courses were similar to the recovery of rod- and cone-mediated sensitivity, respectively, measured with dark adaptometry. The maximal EZ(IS/OS)-CIZ and EZ(IS/OS)-RIZ response magnitudes doubled from 54% to 96% bleach. Both the EZ(IS/OS)-RPE and EZ(IS/OS)-BM responses resembled damped oscillations graded in amplitude and duration with bleaching intensity. Half-field photoresponses were localized to the stimulated retina.

Conclusions: With noninvasive, near-infrared UHR-OCT, we characterized three distinct, spatially localized photoresponses in the outer retinal bands. These photoresponses have potential value as physical correlates of photoreceptor function.
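As a rough illustration of the thickness-map analysis described in the Methods, the sketch below converts per-A-scan band positions into EZ(IS/OS)-to-band thickness maps and a baseline-relative mean photoresponse; the array shapes, axial pixel scale, and all names are assumptions for illustration, not the study's processing pipeline.

```python
# Illustrative sketch (not the study's pipeline): EZ(IS/OS)-to-band thickness
# maps from per-A-scan axial band positions, plus a baseline-relative response.
import numpy as np

AXIAL_UM_PER_PX = 1.0  # assumed axial sampling (um per pixel); purely illustrative

def thickness_map(ez_pos_px: np.ndarray, band_pos_px: np.ndarray) -> np.ndarray:
    """Axial distance (um) from EZ(IS/OS) to another outer retinal band.

    Both inputs hold per-A-scan band positions in pixels over an en-face grid,
    e.g. shape (n_x, n_y); NaNs mark A-scans where a band was not detected.
    """
    return (band_pos_px - ez_pos_px) * AXIAL_UM_PER_PX

def photoresponse_um(thickness_now: np.ndarray,
                     thickness_baseline: np.ndarray) -> float:
    """Mean thickness change over the analyzed area, relative to the
    dark-adapted baseline, ignoring undetected (NaN) A-scans."""
    return float(np.nanmean(thickness_now - thickness_baseline))
```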