2022
DOI: 10.1007/978-3-030-98253-9_26

An Ensemble Approach for Patient Prognosis of Head and Neck Tumor Using Multimodal Data

Cited by 17 publications (11 citation statements)
References 11 publications
“…Outcome prediction: summary of participants' methods. The following describes the approach of each team participating in Task 2 (and Task 3 for some), in the order of the Task 2 ranking. In [54], Saeed et al. (team "BiomedIA") first experimented with the clinical variables and found that better prediction was achieved using only the variables with complete values than using all variables with missing values imputed. They then fused the PET and CT images by averaging them into a single PET/CT image, which was cropped (two sizes, 50×50×50 and 80×80×80, were tested) to form the main input to their solution: a 3D CNN (Deep-CR) trained to extract features that were then fed, together with the clinical variables, into Multi-Task Logistic Regression (MTLR, a sequence of logistic regression models defined at successive time points to estimate the probability of the event occurring), extended with neural networks to capture nonlinearity.…”
Section: Results: Reporting Of Challenge Outcome (mentioning)
Confidence: 99%
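The fusion-by-averaging and cropping step quoted above can be illustrated with a short sketch. This is not the authors' code: it assumes the CT and PET volumes are NumPy arrays already resampled to a common voxel grid and intensity-normalized, and the array names, placeholder shapes, and the choice of the 80×80×80 crop are illustrative only.

```python
import numpy as np

def fuse_pet_ct(ct: np.ndarray, pet: np.ndarray) -> np.ndarray:
    """Average the two modalities voxel-wise into a single PET/CT volume."""
    assert ct.shape == pet.shape, "volumes must share a voxel grid"
    return 0.5 * (ct + pet)

def center_crop(volume: np.ndarray, size=(80, 80, 80)) -> np.ndarray:
    """Crop a block of the given size around the volume center."""
    starts = [(dim - s) // 2 for dim, s in zip(volume.shape, size)]
    return volume[tuple(slice(st, st + s) for st, s in zip(starts, size))]

# Placeholder volumes standing in for co-registered CT and PET scans.
ct = np.random.rand(144, 144, 144).astype(np.float32)
pet = np.random.rand(144, 144, 144).astype(np.float32)

# One of the two tested crop sizes (50^3 or 80^3); the fused, cropped cube is
# what would be passed to the 3D CNN (Deep-CR), whose features are combined
# with the clinical variables in the MTLR head.
fused = center_crop(fuse_pet_ct(ct, pet), size=(80, 80, 80))
print(fused.shape)  # (80, 80, 80)
```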
“…Individual participants' papers reporting their methods and results were submitted to the challenge organizers. Reviews were organized by the organizers, and the participants' papers were published in the LNCS challenge proceedings [60,1,52,56,59,12,53,62,21,32,43,37,54,6,67,16,9,48,49,58,40,51,17,46,65,39,33,45,27]. Participants taking part in multiple tasks could submit one or several papers.…”
Section: Introduction: Research Context (mentioning)
Confidence: 99%
“…On the other hand, Z-score normalization was used for the PET images. Furthermore, the images were cropped down to 80 × 80 × 48 mm³ as in [21], for two main purposes: first, to compare our results fairly with the state of the art in [21], which also used images of these dimensions; second, because reducing the image dimensions speeds up training and inference and allows multiple experiments to be run.…”
Section: Data Preprocessing (mentioning)
Confidence: 99%
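As a rough illustration of the preprocessing described in this statement, the sketch below applies Z-score normalization to a PET volume. It assumes a NumPy array and normalization over the whole volume; the placeholder shape is arbitrary and not taken from the cited work.

```python
import numpy as np

def z_score_normalize(volume: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Shift the volume to zero mean and scale it to unit variance."""
    return (volume - volume.mean()) / (volume.std() + eps)

pet = np.random.rand(80, 80, 48).astype(np.float32)  # placeholder PET crop
pet_norm = z_score_normalize(pet)
print(float(pet_norm.mean()), float(pet_norm.std()))  # ~0.0 and ~1.0
```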
“…In [21], the authors tackled the prognosis task for oropharyngeal squamous cell carcinoma patients using CT and PET images along with clinical data. Their proposed solution ranked first in the progression-free survival prediction task of the MICCAI 2021 HEad and neCK TumOR (HECKTOR) segmentation and outcome prediction challenge [20].…”
Section: Introduction (mentioning)
Confidence: 99%
“…Bounding box information is provided with the dataset for localization of the tumor region; it was used to crop the scans and the mask down to 144 × 144 × 144 mm³, consistently across scans. Further cropping down to 80 × 80 × 48 was performed around the tumor region for faster and better performance, as in [24]. This highlights the tumor region, allowing the models to learn more easily.…”
Section: Datasets and Preprocessing (mentioning)
Confidence: 99%
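The two-stage cropping described in this statement can be sketched as follows. The bounding box format (a start corner plus a fixed 144 × 144 × 144 size) and the use of the tumor-mask centroid as the crop center are assumptions made for illustration, not details taken from the cited work.

```python
import numpy as np

def crop_bbox(volume: np.ndarray, start, size=(144, 144, 144)) -> np.ndarray:
    """Crop the dataset-provided bounding box, given its start corner (assumed format)."""
    return volume[tuple(slice(s, s + d) for s, d in zip(start, size))]

def crop_around_tumor(volume: np.ndarray, mask: np.ndarray,
                      size=(80, 80, 48)) -> np.ndarray:
    """Crop a block of the given size centered on the tumor-mask centroid,
    clipped so the block stays inside the volume."""
    center = np.round(np.argwhere(mask > 0).mean(axis=0)).astype(int)
    starts = [int(np.clip(c - s // 2, 0, dim - s))
              for c, s, dim in zip(center, size, volume.shape)]
    return volume[tuple(slice(st, st + s) for st, s in zip(starts, size))]

# Toy full scan, tumor mask, and bounding box corner, then the two-stage crop.
full_scan = np.random.rand(200, 200, 200).astype(np.float32)
full_mask = np.zeros_like(full_scan)
full_mask[100:120, 110:130, 80:100] = 1          # toy tumor region
bbox_start = (20, 30, 10)                        # illustrative bounding box corner
scan144 = crop_bbox(full_scan, bbox_start)       # 144 x 144 x 144 crop
mask144 = crop_bbox(full_mask, bbox_start)
tumor_crop = crop_around_tumor(scan144, mask144) # 80 x 80 x 48 around the tumor
print(tumor_crop.shape)  # (80, 80, 48)
```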