2022
DOI: 10.21203/rs.3.rs-2360944/v1
Preprint

A lightweight neural network designed for fluid velocimetry

Abstract: We devise a novel Lightweight Image Matching Architecture (LIMA), which is designed and optimized for Particle Image Velocimetry (PIV). LIMA is a convolutional neural network (CNN) that performs symmetric image matching and employs an iterative residual refinement strategy, which allows us to optimize the total number of refinement steps to balance accuracy and computational efficiency. The network is trained on kinematic datasets with a loss function that penalizes larger gradients. We consider a six- (LIMA-6…
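
The truncated abstract does not describe the architecture in detail, but the iterative residual refinement it mentions follows a general pattern that can be sketched. The Python sketch below is an illustrative assumption only: the names warp, ResidualUpdate, estimate_flow, and num_steps are placeholders, the tiny convolutional update block merely stands in for whatever lightweight matching network LIMA actually uses, and six steps are chosen only because a six-step variant (LIMA-6) is mentioned in the abstract.

import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(img, flow):
    # Backward-warp img (B, 1, H, W) by the displacement field flow (B, 2, H, W).
    _, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0)  # (1, 2, H, W), (x, y) order
    coords = grid.to(img.device) + flow
    # Normalize pixel coordinates to [-1, 1] as required by grid_sample.
    gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    return F.grid_sample(img, torch.stack((gx, gy), dim=-1), align_corners=True)

class ResidualUpdate(nn.Module):
    # Toy update block: predicts a flow residual from the first frame and the
    # warped second frame (placeholder for the actual LIMA matching network).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),
        )

    def forward(self, img1, img2_warped):
        return self.net(torch.cat((img1, img2_warped), dim=1))

def estimate_flow(img1, img2, update, num_steps=6):
    # Iterative residual refinement: each step warps frame 2 with the current
    # estimate and adds the predicted residual; num_steps trades accuracy
    # against computational cost.
    flow = torch.zeros(img1.shape[0], 2, *img1.shape[2:], device=img1.device)
    for _ in range(num_steps):
        flow = flow + update(img1, warp(img2, flow))
    return flow

# Example with a random single-channel particle-image pair.
update = ResidualUpdate()
img1, img2 = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
flow = estimate_flow(img1, img2, update)  # (1, 2, 64, 64) displacement field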


Cited by 2 publications (1 citation statement). References 24 publications.
“…computational capabilities on unmanned aerial vehicle (UAV) platforms, it is crucial to enrich methods for aerial target recognition and detection by introducing Slimmable neural networks and incorporating the Squeeze and Excitation (SE) and Swin-Transformer mechanisms. These enhancements strengthen the model's adaptability to changes in perspective and scale, enabling it to possess stronger recognition capabilities [23]. Finally, comparative experiments using the publicly available VisDrone19 dataset have confirmed that both the SE-YOLOv5s and ST-YOLOv5s models outperform the YOLOv5s model in terms of performance [24].…”
Section: Introduction (mentioning)
Confidence: 73%