2023
DOI: 10.1016/j.engappai.2022.105587
Self-supervised monocular depth estimation based on combining convolution and multilayer perceptron

Cited by 7 publications (2 citation statements)
References 46 publications
“…By studying the interpretable relationship between the biological visual system and the monocular depth estimation network, it concretizes the attention mechanism in biological vision. Finally, Zheng et al. 30 designed a new framework, a hybrid Convolution, self-attention, and Multilayer Perceptron (MLP) 31 network (CSMHNet), combining decomposed large-kernel convolutions and multilayer perceptrons to overcome the static weights and locality of convolution, while significantly reducing memory overhead compared to the Transformer architecture.…”
Section: Related Work
confidence: 97%
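The decomposition the excerpt describes can be illustrated with a minimal NumPy sketch: a large k×k depthwise kernel is approximated by two 1-D passes (row, then column), and a per-position two-layer MLP then mixes channels. This is a hypothetical illustration of the general technique, not the CSMHNet implementation; all function and variable names here are invented for the example.

```python
import numpy as np

def separable_conv2d(x, k_row, k_col):
    """Approximate a large 2-D kernel (outer product of k_col and k_row)
    by two cheap 1-D convolutions over a single-channel map x."""
    y = np.apply_along_axis(lambda r: np.convolve(r, k_row, mode="same"), 1, x)
    return np.apply_along_axis(lambda c: np.convolve(c, k_col, mode="same"), 0, y)

def mlp_channel_mix(feats, w1, b1, w2, b2):
    """Per-position 2-layer MLP over the channel (last) axis."""
    h = np.maximum(feats @ w1 + b1, 0.0)  # ReLU hidden layer
    return h @ w2 + b2

# Demo on a random H x W x C feature map with a 7-tap uniform kernel,
# i.e. an effective 7x7 smoothing kernel at 1-D cost.
rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 8, 4))
k7 = np.ones(7) / 7
smoothed = np.stack(
    [separable_conv2d(fmap[..., c], k7, k7) for c in range(fmap.shape[-1])],
    axis=-1,
)
w1, b1 = rng.standard_normal((4, 16)), np.zeros(16)
w2, b2 = rng.standard_normal((16, 4)), np.zeros(4)
out = mlp_channel_mix(smoothed, w1, b1, w2, b2)  # same H x W x C shape
```

The separable pass needs 2k multiplies per pixel instead of k², which is one reason decomposed large kernels keep memory and compute well below a Transformer's global self-attention.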
“…This framework was integrated with parameter optimization algorithms to improve model accuracy without the need to adjust the relevant parameters manually. Other models, namely multilayer perceptron (MLP) [13], random forest [14], support vector classification (SVC) [15], AdaBoost [16], and XGBoost [17], were also trained and served as baselines in evaluating the proposed transfer-learning model. The results of this study were also compared with the prediction results of [18] and [7], which were based on the same dataset.…”
Section: Introduction
confidence: 99%
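A baseline comparison like the one the excerpt describes is commonly written as a loop over scikit-learn estimators. The sketch below is an assumption-laden stand-in: the Iris dataset replaces the study's (unspecified) data, and scikit-learn's `GradientBoostingClassifier` substitutes for XGBoost, which ships as a separate package.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Toy stand-in for the study's dataset.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# The five baseline families named in the excerpt.
baselines = {
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "SVC": SVC(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),  # XGBoost stand-in
}

# Fit each model and record held-out accuracy.
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in baselines.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")
```

Holding the split and random seeds fixed across models keeps the comparison fair, which is the point of evaluating all baselines against the same transfer-learning result.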