2020
DOI: 10.1080/21681163.2020.1835554
Towards markerless computer-aided surgery combining deep segmentation and geometric pose estimation: application in total knee arthroplasty

Cited by 8 publications (11 citation statements)
References 26 publications
“…A deep convolutional neural network was trained to segment intraoperative RGB images and therewith the corresponding depth data, which was used for registration of the preoperative 3D model. An extension of their work additionally included segmentation and registration of the tibia [10]. Registration of an RGB-D sensor (attached to a mobile C-arm gantry) to intraoperative CBCT imaging was achieved in [11, 12] by performing the Iterative Closest Point (ICP) technique on a calibration object visible both in the CBCT image and the RGB-D stream.…”
Section: Introduction
confidence: 99%
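The ICP technique mentioned in the excerpt above can be illustrated in a few lines. This is a minimal, self-contained sketch (brute-force nearest-neighbour correspondences plus an SVD/Kabsch rigid fit), not the calibration pipeline of the cited works:

```python
import numpy as np

def best_fit_transform(A, B):
    # Least-squares rigid transform (R, t) mapping point set A onto B (Kabsch/SVD).
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(src, dst, iters=50, tol=1e-8):
    # Iterative Closest Point: align source cloud (N,3) to destination cloud (M,3).
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        idx = d.argmin(axis=1)
        R, t = best_fit_transform(cur, dst[idx])
        cur = cur @ R.T + t
        # accumulate the transform: x -> R (R_total x + t_total) + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = d[np.arange(len(cur)), idx].mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```

In the cited setting the destination cloud would come from the calibration object segmented in the CBCT volume and the source cloud from the RGB-D stream; here both are plain numpy arrays.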
“…However, except for a system using more expensive, proprietary hardware and software [14, 15], depth-based surgical navigation has not yet translated into clinical practice because sensor accuracy and robustness remain unsatisfactory, and the registration of the depth data to the anatomy is mainly achieved through conventional, error-prone means (e.g., surface-based or point-based registration). For infrared-based ToF sensors, material-dependent depth measurements have been identified as one major source of error [10, 11, 13]. More generally, comprehensive studies comparing the depth accuracy of sensors using different technologies (structured light, ToF and active/passive stereoscopy) report ambiguous results between 0.6 mm and 3.9 mm error at 600 mm [16, 17, 18], which is a realistic distance in a surgical environment.…”
Section: Introduction
confidence: 99%
“…With the aim of generating a reference clinical dataset, we organized a clinical trial and gathered intraoperative data from 62 Total Knee Arthroplasty (TKA) surgeries. The study was approved by an ethics committee and took place over several months in France.…”
Section: Introduction
confidence: 99%
“…We hereby make use of a subset of our dataset to evaluate the performance of five deep learning-based approaches for medical image segmentation, applied to the segmentation of knee bones (femur and tibia) in RGB images. We chose this task since accurately localizing the bones in images of the exposed knee is crucial for enabling future marker-less registration and tracking systems [1]. Indeed, soft tissues or surgical instruments surrounding the targeted anatomy can bias the registration result.…”
Section: Introduction
confidence: 99%
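Evaluations of segmentation approaches like the one described above typically report an overlap score between the predicted and ground-truth masks. The excerpt does not name the exact metric used, but the Dice similarity coefficient is the standard choice; a minimal sketch:

```python
import numpy as np

def dice(pred, gt):
    # Dice similarity coefficient between two binary masks of the same shape:
    # 2 * |pred ∩ gt| / (|pred| + |gt|), in [0, 1]; 1 means perfect overlap.
    pred, gt = np.asarray(pred, dtype=bool), np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # convention: two empty masks count as a perfect match
    return 2.0 * inter / denom if denom else 1.0
```

For a femur/tibia task like the one above, the score would be computed per class (one binary mask per bone) and averaged over the test images.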