2023
DOI: 10.1007/978-3-031-25063-7_19
Hybrid Transformer Based Feature Fusion for Self-Supervised Monocular Depth Estimation

Cited by 6 publications (4 citation statements)
References 48 publications
“…However, this makes it possible for some useful feature information to be lost when passing features between each network, thus affecting the quality of the fused features. Therefore, a few methods [27,28] adopt a parallel structure, such as [28], which employs three encoders to obtain helpful information operating at different spatial resolutions, and then integrates these pieces of information using a multi-scale fusion block. However, this approach may suffer from insufficient feature fusion and semantic information.…”
Section: Transformer (mentioning)
confidence: 99%
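The parallel structure described in this excerpt — encoders operating at different spatial resolutions whose outputs are merged by a multi-scale fusion block — can be illustrated with a minimal NumPy sketch. All names, shapes, and the nearest-neighbor upsampling choice here are illustrative assumptions, not the cited papers' actual implementation:

```python
import numpy as np

def upsample_nearest(feat, factor):
    # Repeat the spatial dims (H, W) of a (C, H, W) feature map.
    return feat.repeat(factor, axis=1).repeat(factor, axis=2)

def fuse_multiscale(feats):
    # feats: list of (C_i, H_i, W_i) maps from encoders at strides 1x, 2x, 4x.
    # Upsample every map to the finest resolution, then fuse channel-wise.
    target_h = feats[0].shape[1]
    up = [upsample_nearest(f, target_h // f.shape[1]) for f in feats]
    return np.concatenate(up, axis=0)

f1 = np.random.rand(8, 16, 16)   # fine-scale encoder output
f2 = np.random.rand(16, 8, 8)    # mid-scale encoder output
f3 = np.random.rand(32, 4, 4)    # coarse-scale encoder output
fused = fuse_multiscale([f1, f2, f3])
print(fused.shape)  # (56, 16, 16): 8 + 16 + 32 channels at the fine resolution
```

Real fusion blocks typically follow the concatenation with learned mixing (e.g. 1x1 convolutions or attention); plain concatenation is the simplest instance of the pattern.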
“…However, the pure Transformer model lacks the ability to model local information due to the absence of spatial inductive bias. To achieve more satisfactory results, some methods have started to combine Transformer with CNNs [13,22,26–28] to leverage the strengths of both approaches. This combination allows for better performance in MDE tasks [13,22,26], as illustrated in Fig.…”
Section: Introduction (mentioning)
confidence: 99%
“…Herein, we call such CNN-Transformer architecture hybrid methods. Soon after, several hybrid models were proposed in the field of MDE [19–21].…”
Section: Introduction (mentioning)
confidence: 99%