2021
DOI: 10.2967/jnumed.120.259796

Optical Navigation of the Drop-In γ-Probe as a Means to Strengthen the Connection Between Robot-Assisted and Radioguided Surgery

Abstract: Rationale: With translation of the DROP-IN gamma probe, radioguidance has advanced into laparoscopic robot-assisted surgery. 'GPS-like' navigation further enhances the symbiosis between nuclear medicine and surgery. Therefore, we developed a fluorescence-video-based tracking method that integrates the DROP-IN with navigated robotic surgery. Methods: Fluorescent markers, integrated into the DROP-IN, were automatically detected using a da Vinci Firefly laparoscope. Subsequently, a declipseSPECT navigation platform…
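
The abstract describes automatic detection of fluorescent markers on the DROP-IN probe in the Firefly fluorescence video feed. As a rough illustration of that detection step only, the sketch below thresholds a fluorescence frame and extracts blob centroids with OpenCV; the function name, threshold value, and file path are illustrative assumptions, not the authors' implementation.

# Minimal sketch, assuming the fluorescent markers appear as bright blobs
# against a dark background in the fluorescence channel.
import cv2
import numpy as np

def detect_marker_centroids(frame_gray: np.ndarray, threshold: int = 200,
                            min_area: float = 20.0) -> list[tuple[float, float]]:
    """Return (x, y) image coordinates of bright blobs, e.g. fluorescent markers."""
    # Binarize: keep only pixels brighter than the (illustrative) threshold.
    _, mask = cv2.threshold(frame_gray, threshold, 255, cv2.THRESH_BINARY)
    # Remove speckle noise with a small morphological opening.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue  # skip tiny blobs that are unlikely to be markers
        m = cv2.moments(c)
        centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids

# Usage (hypothetical frame): pass the grayscale fluorescence channel of a laparoscopic image.
# frame = cv2.imread("firefly_frame.png", cv2.IMREAD_GRAYSCALE)
# print(detect_marker_centroids(frame))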

Cited by 12 publications (6 citation statements) | References 15 publications (15 reference statements)

“…In the framework of surgical navigation based on preoperative imaging, AI also has the potential to assist in surgical planning by providing (semi-)automatic segmentation of the structures of importance [219]. Various technologies are already used to register these roadmaps to the patient in the operating room, but to not overly complicate logistics, we expect that in the robotic setting, these registration technologies will eventually converge to such which are directly integrated into the robotic platforms (e.g., laparoscopic video-based and US-based registrations) [67,78]. However, due to patient deformation and repositioning, accurate registration of preoperatively acquired roadmaps remains challenging in soft-tissue anatomies.…”
Section: Discussion and Future Perspectives
confidence: 99%
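
As background for the registration step discussed in the statement above, a common building block for aligning a preoperative roadmap to intraoperative landmarks is rigid point-based registration. The Kabsch/SVD sketch below is a generic illustration under that assumption, not a method from the cited works; the function name and landmark pairing are hypothetical.

# Generic sketch: rigid registration of paired 3-D landmark sets (Kabsch/SVD).
import numpy as np

def rigid_register(source: np.ndarray, target: np.ndarray):
    """Find R, t minimizing sum ||R @ source_i + t - target_i||^2 over paired 3-D points."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    A, B = source - src_c, target - tgt_c            # center both point sets
    U, _, Vt = np.linalg.svd(A.T @ B)                # SVD of the cross-covariance matrix
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Usage (hypothetical data): source = preoperative landmark coordinates,
# target = the same landmarks localized intraoperatively; apply R, t to the roadmap.
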
“…This still leaves a lot of room to investigate the value of alternative approaches using, for example, PET-based signals (i.e., beta plus or high-energy gamma emissions), beta minus emissions, or SPION detection, as well as alternative hybrid approaches with Cerenkov, multispectral/multiwavelength fluorescence, MSOT, and Raman spectrometry [16]. To make these imaging modalities, which often find their origin in open surgery, compatible with (laparoscopic) robotic surgery, we observe trends such as miniaturization, tethering, and positional tracking (e.g., [33,78,105,222]). A likely future scenario is that more image-guidance modalities will eventually be wholly integrated into the robotic platform as done for fluorescence imaging.…”
Section: Discussion and Future Perspectives
confidence: 99%
“…To this end, marker-based video tracking of da Vinci instruments has been exploited in the form of patterned [13] and fluorescent markers [14]. There is a growing body of evidence suggesting that such approaches can be used to monitor instrument use [15], to compare the performance difference between experts and novices [16], to perform scoring during virtual simulation training [17], to generate freehand scans [13a,18], and to assess how image-guidance technologies affect performance [14b,16a,19]. To make video-based instrument tracking accessible for a global audience of robotic surgeons, markerless detection algorithms are needed that can automatically extract the required data out of endoscopic video streams.…”
Section: Introduction
confidence: 99%
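
For context on the marker-based tracking quoted above: once marker centroids have been detected in the endoscopic image and their geometry on the instrument is known, a standard way to recover the tool pose is a perspective-n-point (PnP) solve. The sketch below is a generic OpenCV illustration; the planar marker layout and camera intrinsics are placeholder assumptions, not values from the cited works.

# Generic sketch: estimate the tool-to-camera pose from four tracked marker centroids.
import cv2
import numpy as np

# Hypothetical coplanar 3-D marker positions on the instrument, in the tool frame (mm).
MARKERS_3D = np.array([[0, 0, 0], [12, 0, 0], [12, 8, 0], [0, 8, 0]], dtype=np.float32)

def estimate_tool_pose(points_2d, camera_matrix, dist_coeffs=None):
    """Return rotation matrix R and translation t of the tool frame in the camera frame."""
    points_2d = np.asarray(points_2d, dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(
        MARKERS_3D, points_2d, camera_matrix,
        np.zeros(5, dtype=np.float32) if dist_coeffs is None else dist_coeffs)
    if not ok:
        raise RuntimeError("PnP failed; check the 2-D/3-D marker correspondences")
    R, _ = cv2.Rodrigues(rvec)  # convert the rotation vector to a 3x3 matrix
    return R, tvec

# Usage (placeholder intrinsics):
# K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
# R, t = estimate_tool_pose(detected_centroids, K)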