In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients—manually annotated by up to four raters—and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%–85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
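The hierarchical majority vote described above combines candidate segmentations region by region. The sketch below illustrates only the basic per-voxel majority vote on label maps (the actual BRATS fusion votes separately on nested sub-regions such as whole tumor, tumor core, and enhancing tumor); array names and the flat-vote simplification are ours, not from the paper.

```python
import numpy as np

def majority_vote_fusion(label_maps):
    """Fuse candidate segmentations by per-voxel majority vote.

    label_maps: list of integer label arrays of identical shape,
    one per algorithm. Returns the label chosen by the most
    algorithms at each voxel (ties broken by lowest label).
    """
    stacked = np.stack(label_maps)  # shape: (n_algorithms, ...)
    n_labels = int(stacked.max()) + 1
    # Count votes for each label at every voxel, then take the argmax.
    votes = np.stack([(stacked == l).sum(axis=0) for l in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy 1-D "segmentations" that disagree on the third voxel:
a = np.array([0, 1, 1, 2])
b = np.array([0, 1, 2, 2])
c = np.array([0, 1, 1, 2])
fused = majority_vote_fusion([a, b, c])
print(fused)  # -> [0 1 1 2]
```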
The specific computer-driven system used in this study can reduce mechanical ventilation duration and ICU length of stay, as compared with a physician-controlled weaning process.
We present a study of multiple sclerosis (MS) segmentation algorithms conducted at the international MICCAI 2016 challenge. The challenge was operated on a new open-science computing infrastructure, enabling the fair, fully automatic, and independent evaluation of a large range of algorithms. Thirteen MS lesion segmentation methods, spanning a broad range of state-of-the-art algorithms, were evaluated against a high-quality database of 53 MS cases acquired at four centers under a common acquisition protocol. Each case was annotated manually by an unprecedented number of seven different experts. The results highlighted that automatic algorithms, including recent machine learning methods (random forests, deep learning, etc.), still trail human expertise on both detection and delineation criteria. In addition, we demonstrate that a statistically robust consensus of the algorithms performs closer to human expertise on segmentation, although it still trails on detection scores.
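Delineation quality in segmentation challenges of this kind is commonly scored with overlap measures such as the Dice coefficient (the metric reported for inter-rater agreement in the BRATS abstract above). A minimal sketch, with the empty-mask convention chosen by us:

```python
import numpy as np

def dice_score(seg, ref):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:  # both masks empty: count as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(seg, ref).sum() / denom

pred = np.array([0, 1, 1, 1, 0])
truth = np.array([0, 0, 1, 1, 1])
score = dice_score(pred, truth)
print(score)  # -> 0.666... (2 overlapping voxels, 3 voxels in each mask)
```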
fMRI retinotopic mapping provides detailed information about the correspondence between the visual field and its cortical representation in the individual subject. Besides making it possible to localize functional imaging data unambiguously with respect to the functional architecture of the visual system, it is a powerful tool for investigating retinotopic properties of visual areas in the healthy and impaired brain. fMRI retinotopic mapping differs conceptually from a more traditional volume-based, block-type, or event-related analysis, in terms of both the surface-based analysis of the data and the phase-encoded paradigm. Several methodological works related to fMRI retinotopic mapping have been published. However, a detailed description of all the methods involved, covering the steps from stimulus design to the processing of phase data on the surface, is still missing. We describe here, step by step, our methodology for the complete processing chain. Besides reusing methods proposed by other researchers in the field, we introduce original ones: improved stimuli for the mapping of polar angle retinotopy, a method of assigning volume-based functional data to the surface, and a way of weighting phase information optimally to account for the SNR obtained locally. To assess the robustness of these methods we present a study performed on three subjects, demonstrating the reproducibility of the delineation of low-order visual areas.
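In a phase-encoded paradigm, the periodic stimulus (e.g., a rotating wedge or expanding ring) repeats a known number of cycles per run, and each voxel's Fourier phase at that stimulation frequency encodes its preferred visual-field position. The sketch below shows the standard extraction of phase and a coherence value usable as a local SNR weight; the function name, the specific coherence definition, and the synthetic data are our illustrative choices, not the paper's implementation.

```python
import numpy as np

def phase_at_stimulus_freq(timeseries, n_cycles):
    """Phase and coherence of a voxel time series at the stimulation frequency.

    n_cycles: number of stimulus cycles in the run, i.e., the index of the
    Fourier bin carrying the stimulus-locked response. Coherence is the
    amplitude at that bin relative to the total amplitude across all
    non-DC bins, and can serve as a local SNR weight for the phase.
    """
    spectrum = np.fft.rfft(timeseries - timeseries.mean())
    amp = np.abs(spectrum)
    phase = np.angle(spectrum[n_cycles])
    coherence = amp[n_cycles] / np.sqrt((amp[1:] ** 2).sum())
    return phase, coherence

# Synthetic voxel: 8 stimulus cycles over 160 time points, phase -1.0 rad,
# plus Gaussian noise.
rng = np.random.default_rng(0)
t = np.arange(160)
signal = np.cos(2 * np.pi * 8 * t / 160 - 1.0) + 0.3 * rng.standard_normal(160)
phase, coh = phase_at_stimulus_freq(signal, n_cycles=8)
```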
The subjective experience of color by synesthetes when viewing achromatic letters and numbers supposedly relates to real color experience, as exemplified by the recruitment of the V4 color center observed in some brain imaging studies. Phenomenological reports and psychophysics tests indicate, however, that the two experiences differ. Using functional magnetic resonance imaging, we sought to quantify the degree of coactivation by real and synesthetic colors, evaluating each subject's color center individually and applying adaptation protocols across real and synesthetic colors. We also looked for structural differences between synesthetes and nonsynesthetes. In 10 synesthetes, we found that color areas and retinotopic areas were not activated by synesthetic colors, whatever the strength of synesthetic associations measured objectively for each subject. Voxel-based morphometry revealed no white matter (WM) or gray matter difference in those regions when compared with 25 control subjects. However, synesthetes had more WM in the retrosplenial cortex bilaterally. The joint coding of real and synesthetic colors, if it exists, must therefore be distributed rather than localized in the visual cortex. Alternatively, the key to synesthetic color experience might not lie in the color system.
Corresponding author: Olivier.Commowick@inria.fr. Preprint first posted on bioRxiv July 13, 2018 (doi: 10.1101/367557), under a CC-BY 4.0 International license.
We have designed a computerized system providing closed-loop control of the level of pressure support ventilation (PSV). The system sets itself at the lowest level of PSV that maintains respiratory rate (RR), tidal volume (VT), and end-tidal CO2 pressure (PETCO2) within predetermined ranges defining acceptable ventilation (i.e., 12 < RR < 28 cycles/min, VT > 300 ml [> 250 ml if weight < 55 kg], and PETCO2 < 55 mm Hg [< 65 mm Hg if chronic CO2 retention]). Ten patients received computer-controlled (automatic) PSV and physician-controlled (standard) PSV, in random order, for 24 h in each mode. An estimate of occlusion pressure (P0.1) was recorded continuously. The average time spent with acceptable ventilation as defined above was 66 ± 24% of the total ventilation time with standard PSV versus 93 ± 8% with automatic PSV (p < 0.05), whereas the level of PSV was similar during the two periods (17 ± 4 cm H2O versus 19 ± 6 cm H2O). The time spent with an estimated P0.1 above 4 cm H2O was 34 ± 35% of the standard PSV time versus only 11 ± 17% of the automatic PSV time (p < 0.01). Automatic PSV increased the time spent within desired ventilation parameter ranges and apparently reduced periods of excessive workload.
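The acceptability criteria above define a simple decision rule: lower the support level while ventilation stays within the predetermined ranges, raise it otherwise. The sketch below encodes the abstract's thresholds; the step size, PSV bounds, and function names are our illustrative assumptions, not the system's published control law.

```python
def ventilation_acceptable(rr, vt_ml, petco2, weight_kg,
                           chronic_co2_retention=False):
    """Acceptable-ventilation criteria from the abstract:
    12 < RR < 28 cycles/min; VT > 300 ml (> 250 ml if weight < 55 kg);
    PETCO2 < 55 mm Hg (< 65 mm Hg with chronic CO2 retention)."""
    vt_min = 250 if weight_kg < 55 else 300
    petco2_max = 65 if chronic_co2_retention else 55
    return 12 < rr < 28 and vt_ml > vt_min and petco2 < petco2_max

def adjust_psv(psv_cmh2o, rr, vt_ml, petco2, weight_kg,
               step=2, psv_min=5, psv_max=30, **kw):
    """One hypothetical control step: decrease support when ventilation
    is acceptable (seeking the lowest tolerated PSV level), increase it
    otherwise. Step size and bounds are illustrative, not from the paper."""
    if ventilation_acceptable(rr, vt_ml, petco2, weight_kg, **kw):
        return max(psv_min, psv_cmh2o - step)
    return min(psv_max, psv_cmh2o + step)

# Acceptable ventilation -> support is stepped down; tachypnea -> stepped up.
print(adjust_psv(18, rr=20, vt_ml=420, petco2=42, weight_kg=70))  # -> 16
print(adjust_psv(18, rr=32, vt_ml=260, petco2=58, weight_kg=70))  # -> 20
```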