2022
DOI: 10.1002/mp.15854
Combining natural and artificial intelligence for robust automatic anatomy segmentation: Application in neck and thorax auto‐contouring

Abstract: Background Automatic segmentation of 3D objects in computed tomography (CT) is challenging. Current methods, based mainly on artificial intelligence (AI) and end‐to‐end deep learning (DL) networks, are weak in garnering high‐level anatomic information, which leads to compromised efficiency and robustness. This can be overcome by incorporating natural intelligence (NI) into AI methods via computational models of human anatomic knowledge. Purpose We formulate a hybrid intelligence (HI) approach that integrates t…

Cited by 16 publications (35 citation statements)
References 110 publications
“…On the other hand, from the perspective of OAR annotation, it contains reference segmentations of the most complete set of 30 OARs in the HaN region. Finally, from the perspective of auto‐segmentation performance, the baseline results obtained by an off‐the‐shelf DL solution (i.e., nnU‐Net) are inferior to the results reported by existing CT‐only DL methods 9 but still comparable to some recent state‐of‐the‐art DL methods 44 . With the release of the HaN‐Seg dataset and deployment of the accompanying HaN‐Seg challenge, our aim is, therefore, to test the hypothesis that the accuracy and reliability of OAR segmentation can be improved by exploiting the fused information from both CT and MR images, with the objective to design, develop, and evaluate novel auto‐segmentation algorithms that rely on robust and accurate registration algorithms and can be benchmarked on a common dataset.…”
Section: Discussion
confidence: 86%
“…Finally, from the perspective of auto-segmentation performance, the baseline results obtained by an off-the-shelf DL solution (i.e., nnU-Net) are inferior to the results reported by existing CT-only DL methods 9 but still comparable to some recent state-of-the-art DL methods. 44 With the release of the HaN-Seg dataset and deployment of the accompanying HaN-Seg challenge, our aim is, therefore, to test the hypothesis that the accuracy and reliability of OAR segmentation can be improved by exploiting the fused information from both CT and MR images, with the objective to design, develop, and evaluate novel auto-segmentation algorithms that rely on robust and accurate registration algorithms and can be benchmarked on a common dataset. Considering the recent growth in the application of AI, especially DL, 45 the devised HaN-Seg dataset also has the potential to contribute to a more objective and trustful reporting of research outcomes.…”
Section: Discussion
confidence: 99%
“…Therefore, 𝛿 and 𝜀 may need to be specifically set or optimized for each object of interest. In this work, we decide 𝛿 based on the differences between two sets of manual segmentation results for the same input images from our previous study [10]. Specifically, we calculate the HD between two manual segmentation masks for the same image.…”
Section: Selection of δ and ε
confidence: 99%
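The statement above sets the threshold 𝛿 from the Hausdorff distance (HD) between two manual segmentation masks of the same image. As a minimal sketch of that computation (not the cited authors' implementation — the function name and the use of NumPy/SciPy are assumptions), the symmetric HD between two binary masks can be computed from the pairwise distances of their foreground voxels:

```python
import numpy as np
from scipy.spatial.distance import cdist

def hausdorff_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two binary segmentation masks.

    For each foreground voxel in one mask, find the distance to the nearest
    foreground voxel in the other mask; HD is the worst such distance,
    taken in both directions.
    """
    pts_a = np.argwhere(mask_a)  # coordinates of foreground voxels in A
    pts_b = np.argwhere(mask_b)  # coordinates of foreground voxels in B
    d = cdist(pts_a, pts_b)      # pairwise Euclidean distances
    forward = d.min(axis=1).max()   # A -> B directed HD
    backward = d.min(axis=0).max()  # B -> A directed HD
    return float(max(forward, backward))
```

This brute-force form is fine for illustration; for large 3D volumes a distance-transform-based implementation would be preferred, and voxel spacing would need to be factored in to obtain HD in millimeters.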
“…To validate how well the proposed MI metric can be used to estimate the effort required for manually editing autosegmentation of objects of interest, we collected CT data and annotation labels for 6 objects of interest from 3 institutes [10]. There are 64 cases of left submandibular gland (LSmG), 87 cases of mandible (Mnd), 89 cases of orohypopharyngeal constrictor muscles (OHPh), 87 cases of cervical trachea (N-Tr), 89 cases of thoracic esophagus (T-Es), and 89 cases of heart (Hrt).…”
Section: Experiments Setting
confidence: 99%