2023
DOI: 10.1109/tmi.2023.3234046
Conditional-Based Transformer Network With Learnable Queries for 4D Deformation Forecasting and Tracking

Cited by 5 publications (4 citation statements)
References 40 publications
“…In 2023, L. V. R. et al. [113] proposed an attention-based temporal prediction network that treats features extracted from the input images as tokens of the prediction task. The network architecture consists of three modules: feature encoding and decoding, a conditional transformer network, and parallel prior-based latent modeling.…”
Section: Results (mentioning)
confidence: 99%
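The statement above names the three modules but not their internals. As a rough, non-authoritative sketch of the "learnable queries" idea, the PyTorch snippet below lets a small set of learnable query tokens attend to encoded image features through a standard transformer decoder and projects the decoded queries to future-step features; all class names, dimensions, and the forecast horizon are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch only: NOT the architecture from the cited paper.
# Learnable query tokens (one per future time step) attend to encoded
# image features and are projected to forecast features.
import torch
import torch.nn as nn

class ConditionalForecastSketch(nn.Module):
    def __init__(self, feat_dim=256, n_heads=8, n_layers=4, horizon=3):
        super().__init__()
        # one learnable query per forecast step (the "learnable queries")
        self.queries = nn.Parameter(torch.randn(horizon, feat_dim))
        layer = nn.TransformerDecoderLayer(d_model=feat_dim, nhead=n_heads,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.head = nn.Linear(feat_dim, feat_dim)  # map to forecast features

    def forward(self, encoded_feats):
        # encoded_feats: (B, T, feat_dim) tokens from an image feature encoder
        b = encoded_feats.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)       # (B, horizon, feat_dim)
        decoded = self.decoder(tgt=q, memory=encoded_feats)   # queries attend to history
        return self.head(decoded)                             # (B, horizon, feat_dim)

feats = torch.randn(2, 10, 256)                   # 2 sequences, 10 past frames of features
print(ConditionalForecastSketch()(feats).shape)   # torch.Size([2, 3, 256])
```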
“…Attention mechanisms can be applied to time-series prediction and help networks focus on the time intervals that most strongly influence forecasting accuracy by increasing the corresponding weights. Despite initial works showing the superiority of attention-based architectures over RNNs in respiratory motion forecasting [16,39,44], and the generally high performance of transformers on many tasks due to parallel processing and the absence of a vanishing gradient, transformers "are impractical for training or inference in resource-constrained environments due to their computational and memory requirements" [46]. Indeed, their complexity grows quadratically with the input window length, which hinders their ability to learn long-range dependencies [23,5].…”
Section: Online Learning of Recurrent Neural Network (mentioning)
confidence: 99%
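To make the quadratic-cost remark concrete, the toy example below (dimensions and data are arbitrary assumptions) shows that scaled dot-product self-attention over a window of L time steps materializes an L x L score matrix, which is the term that grows quadratically with the input window length.

```python
# Illustrative only: why attention cost scales as O(L^2) in the window length.
import torch
import torch.nn.functional as F

L, d = 128, 64                     # window length, feature dimension (assumed)
x = torch.randn(1, L, d)           # one feature sequence, e.g. a breathing signal
q, k, v = x, x, x                  # self-attention: queries = keys = values
scores = q @ k.transpose(-2, -1) / d ** 0.5   # (1, L, L) -> quadratic in L
weights = F.softmax(scores, dim=-1)           # per-step weights over past steps
out = weights @ v                             # (1, L, d) attended features
print(scores.shape)                           # torch.Size([1, 128, 128])
```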
“…Indeed, their complexity grows quadratically with the input window length, which hinders their ability to learn long-range dependencies [23,5]. For instance, it was observed in [39] that transformers predicting breathing-signal representations from chest cine-MR imaging incurred an inference time approximately three times higher than that of convolutional GRUs. These shortfalls motivate further research on RNNs for respiratory motion forecasting.…”
Section: Online Learning of Recurrent Neural Network (mentioning)
confidence: 99%