In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients—manually annotated by up to four raters—and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%–85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
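The hierarchical majority-vote fusion mentioned above can be sketched as follows. The label encoding and the nesting of sub-regions (background ⊂ whole tumor ⊃ core ⊃ active tumor) are illustrative assumptions for this sketch, not the benchmark's actual convention:

```python
import numpy as np

def hierarchical_majority_vote(label_maps):
    """Fuse per-voxel label maps produced by several segmentation algorithms.

    Hypothetical label convention (an assumption, not from the paper):
    0 = background, 1 = edema, 2 = tumor core, 3 = active tumor, with the
    regions nested so that whole tumor (>=1) contains core (>=2) contains
    active tumor (>=3). A majority vote is taken per hierarchy level.
    """
    maps = np.stack(label_maps)          # shape: (n_algorithms, *volume_shape)
    n = maps.shape[0]
    fused = np.zeros(maps.shape[1:], dtype=np.int8)
    for level in (1, 2, 3):              # coarse-to-fine sub-regions
        votes = (maps >= level).sum(axis=0)
        # a voxel is promoted to the finer label only if it already
        # carries the enclosing coarser label
        fused[(votes > n / 2) & (fused == level - 1)] = level
    return fused
```

Voting level by level keeps the fused labels consistent with the region hierarchy, which a flat per-label vote does not guarantee.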
We examined optokinetic and optomotor responses of 450 zebrafish mutants, which were isolated previously based on defects in organ formation, tissue patterning, pigmentation, axon guidance, or other visible phenotypes. These strains carry single point mutations in >400 essential loci. We asked which fraction of the mutants develop blindness or other types of impairments specific to the visual system. Twelve mutants failed to respond in either one or both of our assays. Subsequent histological and electroretinographic analysis revealed unique deficits at various stages of the visual pathway, including lens degeneration (bumper), melanin deficiency (sandy), lack of ganglion cells (lakritz), ipsilateral misrouting of axons (belladonna), optic-nerve disorganization (grumpy and sleepy), inner nuclear layer or outer plexiform layer malfunction (noir, dropje, and possibly steifftier), and disruption of retinotectal impulse activity (macho and blumenkohl). Surprisingly, mutants with abnormally large or small eyes or severe wiring defects frequently exhibit no discernible behavioral deficits. In addition, we identified 13 blind mutants that display outer-retina dystrophy, making this syndrome the single most common cause of inherited blindness in zebrafish. Our screen showed that a significant fraction (~5%) of the essential loci also participate in visual functions but did not reveal any systematic genetic linkage to particular morphological traits. The mutations uncovered by our behavioral assays provide distinct entry points for the study of visual pathways and set the stage for a genetic dissection of vertebrate vision.
Two-photon excitation microscopy was used to reconstruct cell divisions in living zebrafish embryonic retinas. Contrary to proposed models for vertebrate asymmetric divisions, no apico-basal cell divisions take place in the zebrafish retina during the generation of postmitotic neurons. However, a surprising shift in the orientation of cell division from central-peripheral to circumferential occurs within the plane of the ventricular surface. In the sonic you (syu) and lakritz (lak) mutants, the shift from central-peripheral to circumferential divisions is absent or delayed, correlating with the delay in neuronal differentiation and neurogenesis in these mutants. The reconstructions here show that mitotic cells always remain in contact with the opposite basal surface by means of a thin basal process that can be inherited asymmetrically.
Abstract. We present a method for automatic segmentation of high-grade gliomas and their subregions from multi-channel MR images. Besides segmenting the gross tumor, we also differentiate between active cells, necrotic core, and edema. Our discriminative approach is based on decision forests using context-aware spatial features, and integrates a generative model of tissue appearance by using the probabilities obtained from tissue-specific Gaussian mixture models as additional input for the forest. Our method classifies the individual tissue types simultaneously, which has the potential to simplify the classification task. The approach is computationally efficient and of low model complexity. The validation is performed on a labeled database of 40 multi-channel MR images, including DTI. We assess the effects of using DTI and of varying the amount of training data. Our segmentation results are highly accurate and compare favorably to the state of the art.
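The generative-discriminative coupling described above can be sketched in a few lines: fit one Gaussian mixture model per tissue class on multi-channel intensities, then append the per-class posteriors to the raw features before training a forest. The function name, data shapes, and the uniform-prior assumption are mine for illustration, not the paper's code:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_posterior_features(X, X_by_class, n_components=2):
    """Augment voxel features with tissue-class posterior probabilities.

    X          : (n_voxels, n_channels) intensities to classify
    X_by_class : dict mapping class label -> training intensities for that class
    """
    gmms = {c: GaussianMixture(n_components, random_state=0).fit(Xc)
            for c, Xc in X_by_class.items()}
    # per-class log-likelihoods, normalised to posteriors (uniform class priors assumed)
    logp = np.column_stack([gmms[c].score_samples(X) for c in sorted(gmms)])
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    return np.hstack([X, p / p.sum(axis=1, keepdims=True)])
```

A discriminative classifier such as scikit-learn's `RandomForestClassifier` would then be trained on the augmented feature matrix, letting the forest exploit both the raw channels and the generative tissue-appearance cues.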
Background: CT is the most common imaging modality in traumatic brain injury (TBI). However, its conventional use requires expert clinical interpretation and does not provide detailed quantitative outputs, which may have prognostic importance. We aimed to use deep learning to reliably and efficiently quantify and detect different lesion types.
Methods: Patients were recruited between Dec 9, 2014, and Dec 17, 2017, in 60 centres across Europe. We trained and validated an initial convolutional neural network (CNN) on expert manual segmentations (dataset 1). This CNN was used to automatically segment a new dataset of scans, which we then corrected manually (dataset 2). From this dataset, we used a subset of scans to train a final CNN for multiclass, voxel-wise segmentation of lesion types. The performance of this CNN was evaluated on a test subset. Performance was measured for lesion volume quantification, lesion progression, lesion detection, and lesion volume classification. For lesion detection, external validation was done on an independent set of 500 patients from India.
Findings: 98 scans from one centre were included in dataset 1. Dataset 2 comprised 839 scans from 38 centres: 184 scans were used in the training subset and 655 in the test subset. Compared with the manual reference, CNN-derived lesion volumes showed a mean difference of 0.86 mL (95% CI -5.23 to 6.94) for intraparenchymal haemorrhage, 1.83 mL (-12.01 to 15.66) for extra-axial haemorrhage, 2.09 mL (-9.38 to 13.56) for perilesional oedema, and 0.07 mL (-1.00 to 1.13) for intraventricular haemorrhage.
Interpretation: We show the ability of a CNN to separately segment, quantify, and detect multiclass haemorrhagic lesions and perilesional oedema. These volumetric lesion estimates allow clinically relevant quantification of lesion burden and progression, with potential applications for personalised treatment strategies and clinical research in TBI.
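Volume-agreement figures of the kind quoted above (a mean difference in mL with a wide interval) can be computed from binary lesion masks with a Bland-Altman-style summary. This is a generic sketch of that computation under my own assumptions, not the paper's analysis code, and treats the interval as mean difference ± 1.96 SD:

```python
import numpy as np

def volume_ml(mask, voxel_volume_mm3):
    """Lesion volume in mL from a binary voxel mask (1 mL = 1000 mm^3)."""
    return mask.sum() * voxel_volume_mm3 / 1000.0

def agreement(auto_ml, manual_ml):
    """Mean difference and 95% interval (mean +/- 1.96 SD of the differences)
    between automatic and manual per-scan volumes, Bland-Altman style."""
    d = np.asarray(auto_ml, float) - np.asarray(manual_ml, float)
    mean, sd = d.mean(), d.std(ddof=1)
    return mean, mean - 1.96 * sd, mean + 1.96 * sd
```

In practice `volume_ml` would be applied per lesion class and per scan, and `agreement` run once per class to produce one row of the results table.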
Objective: Accurate and precise measurement of vestibular schwannoma (VS) size is key to clinical management decisions. Linear measurements are used in routine clinical practice but are prone to measurement error. This study aims to compare a semi-automated volume segmentation tool against the standard linear method for measuring small VS. This study also examines whether oblique tumour orientation can contribute to linear measurement error.
Study design: Experimental comparison of observer agreement using two measurement techniques.
Setting: Tertiary skull base unit.
Participants: Twenty-four patients with unilateral sporadic small (<15 mm maximum intracranial dimension) VS imaged with 1 mm-thickness T1-weighted gadolinium-enhanced MRI.
Main outcome measures: (1) Intra- and inter-observer intraclass correlation coefficients (ICC), repeatability coefficients (RC), and relative smallest detectable difference (%SDD). (2) Mean change in maximum linear dimension following reformatting to correct for oblique orientation of VS.
Results: Intra-observer ICC was higher for semi-automated volumetric than for linear measurements, 0.998 (95% CI 0.994–0.999) vs 0.936 (95% CI 0.856–0.972), p < 0.0001. Inter-observer ICC was also higher for volumetric vs linear measurements, 0.989 (95% CI 0.975–0.995) vs 0.946 (95% CI 0.880–0.976), p = 0.0045. The intra-observer %SDD was similar for volumetric and linear measurements, 9.9% vs 11.8%. However, the inter-observer %SDD was greater for volumetric than linear measurements, 20.1% vs 10.6%. Following oblique reformatting to correct tumour angulation, the mean increase in size was 1.14 mm (p = 0.04).
Conclusion: Semi-automated volumetric measurements are more repeatable than linear measurements when measuring small VS and should be considered for use in clinical practice. Oblique orientation of VS may contribute to linear measurement error.
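The repeatability metrics named above (RC and %SDD) can be sketched from paired repeated measurements. Exact definitions vary between agreement studies; this sketch assumes RC = 1.96 × SD of the paired differences and expresses %SDD relative to the grand mean, which is a common convention but not necessarily the one used in this study:

```python
import numpy as np

def repeatability_metrics(m1, m2):
    """Repeatability coefficient (RC) and relative smallest detectable
    difference (%SDD) from two repeated measurements per subject.

    Assumed definitions (common in agreement studies, not quoted from
    the paper): RC = 1.96 * SD of the paired differences, i.e. the value
    below which 95% of repeat differences are expected to fall; %SDD
    expresses RC as a percentage of the grand mean measurement.
    """
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    sd_diff = (m1 - m2).std(ddof=1)
    rc = 1.96 * sd_diff
    pct_sdd = 100.0 * rc / np.concatenate([m1, m2]).mean()
    return rc, pct_sdd
```

With tumour volumes (or maximum linear dimensions) measured twice by the same observer, `repeatability_metrics` yields the intra-observer RC and %SDD; the same call on two observers' measurements gives the inter-observer versions.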
Imaging in thyroid eye disease (TED) is used to exclude other diagnoses, assess for apical crowding, and plan surgery. But to quantify TED activity objectively, subjective clinical scoring assessments remain the norm. Magnetic resonance imaging (MRI) T2-relaxation times correlate with extra-ocular muscle (EOM) inflammation, but are confounded by signal from fat. We investigated whether T2-relaxation mapping in combination with fat fraction (FF) measurements could quantify disease activity in EOMs objectively. Sixty-two TED patients and six controls were enrolled for coronal short tau inversion recovery (STIR), T2 multi-echo fast-spin echo and multi-echo fast-gradient echo MRI of the orbits. STIR signal intensity ratios (SIRs), T2-relaxation times and percentage FF were derived for the inferior, lateral, superior and medial recti bilaterally. Twelve patients were re-scanned following immunosuppressive treatment. We found a positive correlation between T2 and SIR across all subjects (p < 0.001), but only mean T2 differed significantly between patients and controls (p < 0.001). We measured FF in EOMs for the first time and found it greater in TED (p < 0.001). There was also a significant reduction in mean T2 after treatment, with a corresponding reduction in the clinical activity score (CAS) in almost all patients. We show that T2-relaxation times differentiate between normal and inflamed EOMs and are responsive to treatment. Combined, uniquely, with FF measurement in EOMs, an objective, quantitative marker of inflammation in TED-affected muscles could be derived. T2-relaxation times mirrored improvements in CAS after treatment, occasionally preceding them. Rarely, they diverged, suggesting limitations in the CAS as a disease burden marker.