2021
DOI: 10.1007/s11548-021-02432-y
Real-time deep learning semantic segmentation during intra-operative surgery for 3D augmented reality assistance

Abstract: Purpose The current study aimed to propose a Deep Learning (DL) and Augmented Reality (AR) based solution for an in-vivo robot-assisted radical prostatectomy (RARP), to improve the precision of a published work from our group. We implemented a two-step automatic system to align a 3D virtual ad-hoc model of a patient's organ with its 2D endoscopic image, to assist surgeons during the procedure. Methods This approach was carried out using a Convolutional Neu…

Cited by 41 publications (35 citation statements). References 30 publications (34 reference statements).
“…Indeed, the rotation on the Z-axis is irrelevant because of the catheter's symmetry, while the rotation along the Y-axis can be successfully retrieved from the semantic map, 16 as proven by results in Table 3. Thus, only the rotation along the X-axis should be predicted.…”
Section: Methods
confidence: 86%
“…The reason why the rotation can be more easily retrieved is that, among the three axes, only one is effectively unknown. Indeed, the rotation on the Z-axis is irrelevant because of the catheter's symmetry, while the rotation along the Y-axis can be successfully retrieved from the semantic map, 16 as proven by results in Table 3. Thus, only the rotation along the X-axis should be predicted. When the organ is the only information to fall back on, different problems arise, such as organs' deformability, difficulty in distinguishing the organ's texture from the surrounding tissues, and the fact that their rotation is bound to a fixed anatomical constraint.…”
Section: Methods
confidence: 87%
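The reasoning in the quote above, that axial symmetry fixes the Z rotation, the semantic map supplies the Y rotation, and only the X rotation must be predicted, can be sketched as a rotation composition. This is a minimal illustration, not the cited paper's implementation: the function names and the convention of setting the Z angle to zero are assumptions for the example.

```python
import math

def rot_x(a):
    # Rotation about the X-axis (the one angle a network would predict)
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def rot_y(a):
    # Rotation about the Y-axis (recoverable from the semantic map)
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matmul(A, B):
    # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def catheter_pose(x_predicted, y_from_map):
    # Z rotation is omitted (identity): it is unobservable under the
    # catheter's axial symmetry, so only Y and X contribute to the pose.
    return matmul(rot_y(y_from_map), rot_x(x_predicted))
```

With both angles at zero the composition reduces to the identity, which is a quick sanity check that the two remaining degrees of freedom fully determine the recoverable part of the orientation.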
“…This idea was demonstrated in a previous work [6] from our research group, where a deep learning [7] based method was used to classify femur fractures and the performance of physicians with and without its help was compared. Deep learning is becoming more and more widely used, giving astonishing results in different fields of application, such as surgery [8,9,10] and face recognition [11]. In the vision domain, after the introduction of AlexNet [12] on the ImageNet competition in 2017, the applications of Convolutional Neural Networks (CNNs) have been increasing for their ability to capture the spatial dependencies in an image.…”
Section: Introduction
confidence: 99%