2008
DOI: 10.1155/2008/824726
A Review and Comparison of Measures for Automatic Video Surveillance Systems

Citations: cited by 68 publications (28 citation statements).
References: 5 publications.
“…List of Tables: Table 1 TRAIT Best Results Summary; Table 2 Subtests under TRAIT; Table 3 TRAIT Dataset Statistics; Table 4 Sizes of Images; Table 5 Participated by Phases; Table 6 Participated by Class; Table 7 Class A: Precision, Recall and F1-Score; Table 8 Class C: Precision, Recall and F1-Score; Table 9 Class B: Accuracy of recognition; Table 10 Class B: Accuracy of recognition (unordered); Table 11 Class C: Accuracy of recognition; Table 12 Class C: Accuracy of recognition (unordered); Table 13 Class D: Detection and recognition of URLs; Table 14 Class A: Text detection durations; Table 15 Class B: Text recognition durations; Table 16 Class C: Text detection and recognition durations. List of Figures…”
Section: Conclusion
Citation type: mentioning
confidence: 99%
“…Previously there have been a number of papers on the evaluation of text detection and localization [23][24][25]. Usually, the detection results are evaluated by comparing the bounding box of the ground truth with the bounding box detected by the algorithm.…”
Section: Text Detection Metrics
Citation type: mentioning
confidence: 99%
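The bounding-box comparison described in the excerpt above is typically scored with an overlap ratio between the ground-truth box and the detected box. The following is a minimal, hypothetical sketch of intersection-over-union (IoU) scoring; it is not code from [23][24][25] or the reviewed paper, and the 0.5 acceptance threshold mentioned in the comment is an assumption.

```python
# Hypothetical sketch: score a detection against ground truth with
# intersection-over-union (IoU). Boxes are (x_min, y_min, x_max, y_max).

def iou(gt, det):
    """Return the intersection-over-union of two axis-aligned boxes."""
    ix_min = max(gt[0], det[0])
    iy_min = max(gt[1], det[1])
    ix_max = min(gt[2], det[2])
    iy_max = min(gt[3], det[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_gt = (gt[2] - gt[0]) * (gt[3] - gt[1])
    area_det = (det[2] - det[0]) * (det[3] - det[1])
    union = area_gt + area_det - inter
    return inter / union if union > 0 else 0.0

# A detection is often counted as correct when IoU exceeds a threshold
# such as 0.5; that threshold is an assumed convention, not taken from
# the cited evaluation papers.
print(iou((10, 10, 50, 40), (20, 15, 60, 45)))  # partial overlap
```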
“…Finally, the framework also includes a tool for comparing the GT data with the results created by the algorithm tests. This result set evaluation tool uses a variety of evaluation metrics [28,29] to decide what detection algorithm performs best and outputs the optimal parameters for a given set of videos. Prior experiments [28] indicate that the VFD performance evaluation framework is very promising and is a worthy alternative for the error-prone and time-consuming experimental evaluations that are used in many works today.…”
Section: VFD Performance Evaluation
Citation type: mentioning
confidence: 99%
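The excerpt above describes a tool that compares ground-truth data with algorithm output using evaluation metrics and reports which detection algorithm performs best. As a minimal illustration only, the hedged sketch below ranks hypothetical algorithms by F1 score; the algorithm names and counts are placeholders, not results from the VFD framework or from [28, 29].

```python
# Hypothetical sketch: rank detection algorithms by F1 score computed
# from true positives (tp), false positives (fp) and false negatives (fn).

def f1_score(tp, fp, fn):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return (2 * precision * recall / (precision + recall)
            if (precision + recall) else 0.0)

# Placeholder counts for two made-up algorithms.
results = {
    "algorithm_a": {"tp": 80, "fp": 10, "fn": 20},
    "algorithm_b": {"tp": 70, "fp": 5, "fn": 30},
}
best = max(results, key=lambda name: f1_score(**results[name]))
print(best)  # name of the algorithm with the highest F1 score
```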
“…Computer vision finds its usefulness in automatic inspection [4], assisting humans in identification tasks, object recognition, controlling processes, detecting events, and navigation, the benefits of which could be effectively extracted for medical, military, industrial, traffic surveillance and safety purposes [5].…”
Section: Introduction
Citation type: mentioning
confidence: 99%