2018
DOI: 10.1007/978-3-319-75238-9_38
Ensembles of Multiple Models and Architectures for Robust Brain Tumour Segmentation

Abstract: Deep learning approaches such as convolutional neural networks have consistently outperformed previous methods on challenging tasks such as dense, semantic segmentation. However, the various proposed networks perform differently, with behaviour largely influenced by architectural choices and training settings. This paper explores Ensembles of Multiple Models and Architectures (EMMA) for robust performance through aggregation of predictions from a wide range of methods. The approach reduces the influence of the met…
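The core idea of aggregating predictions across heterogeneous models can be illustrated with a minimal sketch. This is not the paper's implementation, only a generic unweighted probability-averaging ensemble over per-class probability maps (the array shapes and function name are illustrative assumptions):

```python
import numpy as np

def ensemble_segmentation(prob_maps):
    """Average class-probability maps from several models and take the
    per-voxel argmax to produce a consensus label map."""
    stacked = np.stack(prob_maps, axis=0)   # (n_models, n_classes, H, W)
    mean_probs = stacked.mean(axis=0)       # (n_classes, H, W)
    return mean_probs.argmax(axis=0)        # (H, W) integer label map

# Toy example: two models, two classes, a 2x2 "image"
m1 = np.array([[[0.9, 0.2], [0.6, 0.4]],
               [[0.1, 0.8], [0.4, 0.6]]])
m2 = np.array([[[0.7, 0.4], [0.3, 0.9]],
               [[0.3, 0.6], [0.7, 0.1]]])
labels = ensemble_segmentation([m1, m2])
# labels is [[0, 1], [1, 0]]: each voxel takes the class with the
# highest averaged probability across the two models
```

Averaging probabilities before the argmax, rather than voting on hard labels, lets confident models outweigh uncertain ones at each voxel.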

Cited by 335 publications (241 citation statements)
References 29 publications
“…Rather than truly correcting for scatter or other physical effects, a machine learning‐based method will learn to map voxels from a source distribution (CBCT) to a target distribution (planning CT). A variety of machine learning‐based methods have been used in other applications of radiation therapy, including conversion of MR images to CT images, prediction of radiation toxicities, dose calculation, and automatic segmentation. Although machine learning‐based methods can be computationally expensive to train, once a well‐trained model is developed, the image correction can be applied in seconds, making this approach ideal for online ART.…”
Section: Introduction
confidence: 99%
“…Different feature fusion approaches using deep neural networks are discussed. The first and most basic method is to combine the input images/features and process them jointly in a single UNET; this describes most methods in the literature for handling multi‐modality image datasets, such as the fusion of multiparametric brain MR images of T1‐ and T2‐weighted MRI for brain tumor segmentation. The second approach is to conduct feature fusion on images of different resolutions in two steps: extracting input patches of different sizes from the input images and feeding them to different networks, whose feature levels are then fused. The last method is to conduct feature fusion based on deep convolutional and recurrent neural networks (RNN), where the RNNs are responsible for exploiting the intraslice and interslice contexts, respectively.…”
Section: Discussion
confidence: 99%
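The first ("early") fusion strategy described above amounts to stacking co-registered modalities as input channels so a single network processes them jointly. A minimal sketch under that assumption (array shapes and the function name are illustrative, not from the cited works):

```python
import numpy as np

def fuse_modalities(volumes):
    """Early fusion: stack co-registered MR modalities (e.g. T1, T2)
    along a new channel axis, producing a single multi-channel input
    volume for one network."""
    return np.stack(volumes, axis=0)  # (n_modalities, D, H, W)

# Two toy co-registered 3D volumes of shape (depth, height, width)
t1 = np.zeros((8, 32, 32))
t2 = np.ones((8, 32, 32))
fused = fuse_modalities([t1, t2])
# fused.shape is (2, 8, 32, 32): two channels, one per modality
```

The alternatives mentioned in the excerpt differ only in where fusion happens: multiscale patch networks fuse extracted features from separate branches, and the CNN+RNN variant fuses context across slices rather than across modalities.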
“…In terms of the network architecture design, our DFCN‐CoSeg network was inspired by the encoder‐decoder based 3D fully convolutional networks (3D‐FCN) and the 3D‐UNets, as a natural extension of the well‐known 2D FCN proposed by Long et al.…”
Section: Discussion
confidence: 99%
“…This network performed on par with the proposed DLM for all metrics except the HD95 metric, where it had a significantly lower performance (P < 0.01). In future work, we may explore other neural network architectures, such as multiscale patch‐based networks or ensembles of different architectures, for improved results.…”
Section: Discussion
confidence: 99%