2020
DOI: 10.1007/978-3-030-59716-0_44
Sensorless Freehand 3D Ultrasound Reconstruction via Deep Contextual Learning

Cited by 39 publications (23 citation statements)
References 15 publications
“…Considering this threshold as a target accuracy, the accuracies ranging between 5.7 and 16.4 mm obtained by our method are sufficient as an initialisation for refinement with other registration techniques. To use this initialisation reliably without a tracker, a LUS volume could then be estimated either by separately registering multiple images in time, or by using a single image registration and a freehand US compounding method [34], [35]. Another option would be to estimate the LUS probe position through laparoscopic video-based tracking [36], [37].…”
Section: Discussion
confidence: 99%
“…In the training phase, the ConvLSTM predicts the relative transform parameters Θ between all adjacent frames in the skill sequence. Its loss function comprises two terms: the first is the mean absolute error (MAE, i.e., L1) loss, and the second is the case-wise correlation loss from [3], which helps improve generalization performance.…”
Section: Loss Function
confidence: 99%
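The two-term loss described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the weighting factor `alpha` and the exact form of the case-wise correlation term (here, one minus the mean per-parameter Pearson correlation across a sequence) are assumptions.

```python
import numpy as np

def mae_loss(pred, target):
    # Mean absolute error (L1) term over the transform parameters.
    return np.mean(np.abs(pred - target))

def correlation_loss(pred, target, eps=1e-8):
    # Case-wise correlation loss: one minus the mean Pearson correlation,
    # computed per parameter over the frames of a sequence. It encourages
    # the predicted parameter trajectory to co-vary with the ground truth.
    pred_c = pred - pred.mean(axis=0)
    targ_c = target - target.mean(axis=0)
    corr = (pred_c * targ_c).sum(axis=0) / (
        np.sqrt((pred_c ** 2).sum(axis=0) * (targ_c ** 2).sum(axis=0)) + eps
    )
    return 1.0 - corr.mean()  # 0 when perfectly correlated

def total_loss(pred, target, alpha=1.0):
    # Combined objective: MAE term plus weighted correlation term.
    return mae_loss(pred, target) + alpha * correlation_loss(pred, target)
```

With `pred` and `target` as `(num_frames, num_params)` arrays of relative transform parameters, the loss is zero when the prediction matches the ground truth exactly.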
“…Quantitative and Qualitative Analysis. The most commonly used evaluation metric is the final drift [14,3]: the drift of the final frame of a sequence, where drift is the distance between the frame's center point under the true relative position and under the estimated relative position. On this basis, a series of metrics are used to evaluate how accurately our proposed framework estimates the relative transform parameters between adjacent frames.…”
Section: Loss Function
confidence: 99%
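The final-drift metric can be computed by composing the per-frame relative transforms into absolute poses and comparing where each chain places the last frame's center point. A minimal sketch, assuming 4x4 homogeneous transforms and the frame center at the local origin (both assumptions, not specified in the excerpt):

```python
import numpy as np

def chain_transforms(rel_transforms):
    # Compose a sequence of 4x4 relative transforms into absolute poses,
    # starting from the identity pose of the first frame.
    pose = np.eye(4)
    poses = [pose]
    for T in rel_transforms:
        pose = pose @ T
        poses.append(pose)
    return poses

def final_drift(rel_true, rel_pred, center=np.array([0.0, 0.0, 0.0, 1.0])):
    # Final drift: Euclidean distance between the final frame's center
    # point as placed by the ground-truth chain vs. the estimated chain.
    p_true = chain_transforms(rel_true)[-1] @ center
    p_pred = chain_transforms(rel_pred)[-1] @ center
    return np.linalg.norm(p_true[:3] - p_pred[:3])
```

Because errors accumulate through the chain of relative transforms, a small per-frame bias can produce a large final drift on long sequences, which is why this metric is a common headline number for sensorless reconstruction.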