2021
DOI: 10.1016/j.media.2021.102164

Visually Navigated Bronchoscopy using three cycle-Consistent generative adversarial network for depth estimation

Abstract: Background: Electromagnetically Navigated Bronchoscopy (ENB) is currently the state-of-the-art approach to diagnostic and interventional bronchoscopy. CT-to-body divergence is a critical hurdle in ENB, causing navigation error and ultimately limiting the clinical efficacy of diagnosis and treatment. In this study, Visually Navigated Bronchoscopy (VNB) is proposed to address the aforementioned issue of CT-to-body divergence. Materials and Methods: We extended and validated an unsupervised learning method to generate a depth m…
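The method summarized above trains generative adversarial networks with a cycle-consistency constraint to predict depth from bronchoscopic frames without ground-truth depth. The snippet below is only a minimal sketch of that cycle-consistency idea, assuming PyTorch; the toy networks, layer sizes, and loss weight are illustrative assumptions and do not reproduce the paper's three-cycle architecture or its adversarial terms.

# Minimal sketch of a cycle-consistency loss for unsupervised depth estimation,
# in the spirit of the cycle-GAN approach the abstract describes. The network
# definitions and loss weight here are illustrative assumptions, not the
# authors' architecture.
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    """Toy fully-convolutional network standing in for a depth (or image) generator."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

G_depth = SmallConvNet(3, 1)   # bronchoscopic RGB frame -> depth map
G_image = SmallConvNet(1, 3)   # depth map -> reconstructed frame

l1 = nn.L1Loss()

def cycle_loss(frame, lambda_cyc=10.0):
    """Cycle consistency: frame -> depth -> frame should return to the input.
    Adversarial terms (discriminators on the depth/image domains) are omitted."""
    depth = G_depth(frame)
    recon = G_image(depth)
    return lambda_cyc * l1(recon, frame)

# Usage with a dummy frame:
frame = torch.rand(1, 3, 256, 256)
loss = cycle_loss(frame)
loss.backward()

In the full approach, discriminators on the image and depth domains would add adversarial losses on top of this reconstruction term.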

Cited by 27 publications (13 citation statements)
References 61 publications
“…Navigation bronchoscopy has been implemented increasingly by thoracic surgeons in preoperative marking procedures, followed by sublobar resection [25]. An AI model only requires video bronchoscopic or CT images without any additional examinations [34]. Moreover, the predicted anatomical location and possible POE can be overlaid on the virtual road map.…”
Section: Discussion (mentioning)
confidence: 99%
“…
Study                         Year  Image Size  Tracking Type  PE (mm)    AE (º)       CTF (%)
(Bricault et al, 1998)        1998  100x100     Local/Global   2          5            -
(Mori et al, 2001)            2001  -           Local          -          -            79
(Helferty and Higgins, 2002)  2002  -           Local          -          -            -
(Mori et al, 2002)            2002  410x410     Local          -          -            73.37
(Deligianni et al, 2004)      2004  454x487     Local          3 ± 2.26   2.18 ± 1.63  -
(Nagao et al, 2004)           2004  -           Local          -          -            77.79
(Shinohara et al, 2006)       2006  …           …              …          …            …
(Banach et al, 2021)          2021  …           Local          6.2 ± 2.9  -            -

Table 1: Comparison among bronchoscopic tracking studies with regards to data and evaluation characteristics. Notably, none of the methods share a dataset (currently there is no publicly available dataset for this task) or publish their code.…”
Section: Methods (mentioning)
confidence: 99%
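The comparison above reports position error (PE, in mm), angular error (AE, in degrees) and CTF (in %) for each tracking method. As a generic illustration of how the first two pose-accuracy metrics are typically computed (not code from any of the cited studies; the 4x4 homogeneous pose representation and function names are assumptions):

# Minimal sketch of position error (PE, mm) and angular error (AE, degrees)
# between an estimated and a ground-truth camera pose, each given as a 4x4
# homogeneous matrix. Pose format and names are assumptions for illustration.
import numpy as np

def position_error_mm(T_est, T_gt):
    """Euclidean distance between the two camera centres (translation parts)."""
    return float(np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3]))

def angular_error_deg(T_est, T_gt):
    """Geodesic angle between the two rotation matrices, in degrees."""
    R_rel = T_est[:3, :3].T @ T_gt[:3, :3]
    # Clamp to avoid NaNs from floating-point drift outside [-1, 1].
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_angle)))

# Example with a small synthetic offset:
T_gt = np.eye(4)
T_est = np.eye(4)
T_est[:3, 3] = [1.0, 2.0, 2.0]           # 3 mm position offset
print(position_error_mm(T_est, T_gt))    # 3.0
print(angular_error_deg(T_est, T_gt))    # 0.0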
“…Moreover, temporal learning techniques have recently been applied to other endoscopic modalities (Turan et al, 2017), but have not been appropriately tested in bronchoscopy. Additionally, depth information has lately been extensively used to improve tracking (Recasens et al, 2021; Banach et al, 2021; Shen et al, 2019; Liu et al, 2020), mixing it with generative neural networks (Zhao et al, 2020; Shen et al, 2019; Liu et al, 2020; Banach et al, 2021).…”
Section: Methods (mentioning)
confidence: 99%
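The excerpt above points to depth prediction as an increasingly common aid to bronchoscope tracking. A minimal, hypothetical sketch of one such pattern follows: match the depth predicted from a real frame against depth maps rendered from a CT-derived airway model at candidate poses, and keep the best-scoring pose. The predict_depth and render_virtual_depth callables are assumed placeholders, not APIs from the cited works.

# Illustrative sketch of depth-based localization: pick the candidate virtual
# pose whose rendered depth best matches the depth predicted from the real
# frame. `predict_depth` and `render_virtual_depth` are hypothetical
# placeholders standing in for a learned depth network and a CT-based renderer.
import numpy as np

def normalized_correlation(a, b):
    """Similarity between two depth maps, robust to global scale and offset."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def localize(frame, candidate_poses, predict_depth, render_virtual_depth):
    """Return the candidate pose whose rendered depth best matches the frame."""
    real_depth = predict_depth(frame)
    scores = [normalized_correlation(real_depth, render_virtual_depth(p))
              for p in candidate_poses]
    return candidate_poses[int(np.argmax(scores))]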
“…3) Camera and Target: The camera for the sensing module should have a compact form factor, sufficient resolution, and surgery compatibility. For the prototype, we chose the OVM6946 (OmniVision Inc., USA), which has been adopted in many medical and surgical applications [38], [39]. It is 1 mm in width and 2.27 mm in length and has a 400×400 resolution.…”
Section: Vision-based Force Sensing Module (mentioning)
confidence: 99%