2021
DOI: 10.18178/joig.9.4.122-134
Survey of Video Based Small Target Detection

Cited by 82 publications (16 citation statements)
References 0 publications
“…Where FPR and FNR were the false positive rate and false negative rate calculated in the leave-one-subject-out (LOSO) cross test [19,24]. In the LOSO cross test, the data samples of one subject were held out as the test samples, and the data samples of the remaining subjects were used as the training samples.…”
Section: Methods
confidence: 99%
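The excerpt above describes LOSO evaluation with FPR and FNR as the reported metrics. A minimal sketch of that protocol is given below; the classifier, feature matrix `X`, binary labels `y`, and per-sample subject IDs are placeholder assumptions, not the cited paper's actual pipeline.

```python
# Sketch of leave-one-subject-out (LOSO) evaluation reporting mean FPR and FNR.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

def loso_fpr_fnr(X, y, subjects):
    """Hold out one subject per fold; return mean FPR and FNR over folds."""
    fprs, fnrs = [], []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        y_pred = clf.predict(X[test_idx])
        tn, fp, fn, tp = confusion_matrix(y[test_idx], y_pred, labels=[0, 1]).ravel()
        fprs.append(fp / (fp + tn) if (fp + tn) else 0.0)  # false positive rate
        fnrs.append(fn / (fn + tp) if (fn + tp) else 0.0)  # false negative rate
    return float(np.mean(fprs)), float(np.mean(fnrs))

# Example with synthetic data: 5 subjects, 20 samples each, 8 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y = rng.integers(0, 2, size=100)
subjects = np.repeat(np.arange(5), 20)
print(loso_fpr_fnr(X, y, subjects))
```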
“…Here, f, e, and l respectively denote the foreign body, the power equipment, and the label, as shown in Figure 1. BiFPN adds an extra edge from the original input to the output node if they are at the same level to optimize the cross-scale connections, which allows scale-wise level reweighting to fuse more features without adding much cost [20]. Therefore, we use BiFPN layers to replace the FPN + PAN layers.…”
Section: ICCEE-2022
confidence: 99%
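The statement above refers to BiFPN's extra same-level edge and weighted cross-scale fusion. The following is a minimal PyTorch sketch of one such fusion node under that reading; the class name, channel count, and number of fused edges are illustrative assumptions rather than the cited detector's actual architecture.

```python
# Sketch of a BiFPN-style fusion node with an extra edge from the original
# input feature at the same level (fast normalized weighted fusion).
import torch
import torch.nn as nn

class BiFPNNode(nn.Module):
    """Fuse several same-resolution feature maps with learnable non-negative weights."""
    def __init__(self, channels, num_inputs=3, eps=1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))  # one scalar per incoming edge
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.SiLU()
        self.eps = eps

    def forward(self, inputs):
        # inputs: list of [B, C, H, W] tensors; includes the original input-level
        # feature as the extra edge alongside the top-down / bottom-up features.
        w = torch.relu(self.weights)
        w = w / (w.sum() + self.eps)          # fast normalized fusion weights
        fused = sum(wi * x for wi, x in zip(w, inputs))
        return self.conv(self.act(fused))

# Example: fuse the original P4 input, the top-down P4, and the bottom-up P4.
p4_in, p4_td, p4_bu = (torch.randn(1, 64, 32, 32) for _ in range(3))
out = BiFPNNode(channels=64)([p4_in, p4_td, p4_bu])
print(out.shape)  # torch.Size([1, 64, 32, 32])
```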
“…In the early stage of video understanding, researchers paid more attention to manually designed features, which were the basis for encoding the appearance and motion information of video 16 . With the great success of deep neural networks on ImageNet 17,18 and in object recognition and detection 43,44,45 , many video recognition and classification methods began to extract features with 2D image convolutional networks after video frame extraction, and explored improving video classification by combining video optical flow information 20,19,21 . Such methods often process RGB image frames and optical flow image frames separately, and fuse the features before recognition.…”
Section: Related Work
confidence: 99%
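The excerpt above describes the two-stream idea: a 2D CNN on RGB frames for appearance, another on stacked optical-flow frames for motion, and feature fusion before recognition. Below is a minimal PyTorch sketch of that layout; the tiny backbone, flow stack length, and class count are illustrative assumptions, not any specific cited model.

```python
# Sketch of a two-stream video classifier: RGB stream + optical-flow stream,
# with feature-level fusion (concatenation) before the classification head.
import torch
import torch.nn as nn

def small_cnn(in_channels, feat_dim=128):
    """Tiny 2D conv backbone standing in for a real image network."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, feat_dim), nn.ReLU(),
    )

class TwoStreamClassifier(nn.Module):
    def __init__(self, num_classes=10, flow_stack=10):
        super().__init__()
        self.rgb_stream = small_cnn(in_channels=3)                 # appearance stream
        self.flow_stream = small_cnn(in_channels=2 * flow_stack)   # motion stream (x/y flow)
        self.head = nn.Linear(2 * 128, num_classes)                # fuse, then classify

    def forward(self, rgb_frame, flow_frames):
        feat = torch.cat([self.rgb_stream(rgb_frame),
                          self.flow_stream(flow_frames)], dim=1)
        return self.head(feat)

# Example: one sampled RGB frame plus 10 stacked flow fields (2 channels each).
model = TwoStreamClassifier()
logits = model(torch.randn(4, 3, 112, 112), torch.randn(4, 20, 112, 112))
print(logits.shape)  # torch.Size([4, 10])
```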