“…List of Tables: Table 1, TRAIT Best Results Summary; Table 2, Subtests under TRAIT; Table 3, TRAIT Dataset Statistics; Table 4, Sizes of Images; Table 5, Participated by Phases; Table 6, Participated by Class; Table 7, Class A: Precision, Recall and F1-Score; Table 8, Class C: Precision, Recall and F1-Score; Table 9, Class B: Accuracy of recognition; Table 10, Class B: Accuracy of recognition (unordered); Table 11, Class C: Accuracy of recognition; Table 12, Class C: Accuracy of recognition (unordered); Table 13, Class D: Detection and recognition of URLs; Table 14, Class A: Text detection durations; Table 15, Class B: Text recognition durations; Table 16, Class C: Text detection and recognition durations…”
Section: Conclusion
“…Previously there have been a number of papers on the evaluation of text detection and localization [23][24][25]. Usually, the detection results are evaluated by comparing the bounding box of the ground truth with the bounding box detected by the algorithm.…”
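Where this kind of bounding-box comparison is needed, the computation typically reduces to intersection-over-union (IoU) matching between ground-truth and detected boxes. The following is a minimal sketch of such an evaluation — our illustration, not the cited papers' evaluation code; the 0.5 IoU threshold and the greedy one-to-one matching rule are assumptions:

```python
# Illustrative bounding-box evaluation (not the cited papers' code).
# Boxes are axis-aligned (x1, y1, x2, y2) tuples.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def precision_recall_f1(ground_truth, detections, threshold=0.5):
    """Greedy one-to-one matching: a detection is a true positive if it
    overlaps a still-unmatched ground-truth box with IoU >= threshold."""
    unmatched, tp = list(ground_truth), 0
    for det in detections:
        best = max(unmatched, key=lambda gt: iou(det, gt), default=None)
        if best is not None and iou(det, best) >= threshold:
            unmatched.remove(best)
            tp += 1
    p = tp / len(detections) if detections else 0.0
    r = tp / len(ground_truth) if ground_truth else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1
```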
“…Finally, the framework also includes a tool for comparing the ground-truth (GT) data with the results produced by the algorithms under test. This result-set evaluation tool uses a variety of evaluation metrics [28,29] to decide which detection algorithm performs best and outputs the optimal parameters for a given set of videos. Prior experiments [28] indicate that the VFD performance evaluation framework is very promising and is a worthy alternative to the error-prone and time-consuming experimental evaluations used in many works today.…”
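To make the comparison step concrete, the sketch below scores per-frame fire/no-fire detector output against ground-truth labels and sweeps a parameter grid, returning the best-scoring configuration. This is a hypothetical illustration: run_detector and param_grid are assumed names, and the excerpt does not specify which metrics [28,29] the framework actually uses.

```python
# Hypothetical sketch of a result-set evaluation: pick the detector
# parameters that maximize mean per-frame F1 over a set of videos.
from itertools import product

def frame_f1(gt, pred):
    """F1 over binary per-frame fire/no-fire labels."""
    tp = sum(1 for g, p in zip(gt, pred) if g and p)
    fp = sum(1 for g, p in zip(gt, pred) if not g and p)
    fn = sum(1 for g, p in zip(gt, pred) if g and not p)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def best_parameters(videos, gt, run_detector, param_grid):
    """Exhaustively score every parameter combination (assumed strategy)."""
    best_score, best_cfg = -1.0, None
    for values in product(*param_grid.values()):
        cfg = dict(zip(param_grid.keys(), values))
        score = sum(frame_f1(gt[v], run_detector(v, **cfg))
                    for v in videos) / len(videos)
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_cfg, best_score
```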
To accomplish more valuable and more accurate video fire detection, this paper points out future directions and discusses the first steps now being taken to improve the vision-based detection of smoke and flames. First, an overview is given of state-of-the-art detection methods in the visible and infrared spectral ranges. Then, a novel multi-sensor smoke and flame detector is proposed which combines the multimodal information of low-cost visual and thermal infrared detection results. Experiments on fire and non-fire multi-sensor sequences indicate that the combined detector yields more accurate results, with fewer false alarms, than either detector alone. Next, a framework for multi-view fire analysis is discussed to overcome the lack of a video-based fire analysis tool and to detect valuable fire characteristics at the early stage of a fire. As prior experimental results show, this combined analysis from different viewpoints provides more valuable fire characteristics: information about 3D fire location, size and growth rate can be extracted from the video data in practically no time. Finally, directions towards standardized evaluation and video-driven fire forecasting are suggested.
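One simple way to realize the multimodal combination described above is decision-level fusion of per-frame detector confidences. The sketch below is only an illustration under assumed equal weights and an assumed alarm threshold; the paper's actual combination rule is not given in the abstract.

```python
# Illustrative decision-level fusion of a visual and a thermal-IR detector.
# Each detector is assumed to return a per-frame confidence in [0, 1];
# the weights and threshold are assumptions for this sketch.

def fused_alarm(visual_conf, thermal_conf,
                w_visual=0.5, w_thermal=0.5, threshold=0.6):
    """Raise an alarm only when the weighted combination of both
    modalities exceeds the threshold, suppressing single-sensor
    false positives such as reflections or hot non-fire objects."""
    return w_visual * visual_conf + w_thermal * thermal_conf >= threshold

# A reflection fools the visual detector, but the thermal channel is cold:
print(fused_alarm(0.8, 0.1))  # False -> no false alarm
print(fused_alarm(0.8, 0.7))  # True  -> both modalities agree
```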
“…Computer vision finds its usefulness in automatic inspection [4], assisting humans in identification tasks, object recognition, controlling processes, detecting events, and navigation, the benefits of which can be effectively exploited for medical, military, industrial, traffic-surveillance and safety purposes [5].…”
The presence of fog and haze significantly reduces the visibility of a scene. Better visibility is crucial for all computer vision applications, so recovering images impaired by haze, i.e. dehazing, finds application in surveillance, tracking, detection and restoration. In this paper, a fusion-based approach using the principal component analysis (PCA) technique is adopted. The novelty of this algorithm is that it does not require the haze depth estimation normally needed by many existing methods. Two images are derived from the original image; contrast adjustment and contrast normalization techniques are then applied to them, and PCA fusion improves the quality and resolution of the fused image. The method requires only the original image and is simple and easy to implement. Because a haze-impaired image appears whitish and blurry, the details of the road become less evident, making driving in foggy weather conditions unsafe; the proposed method therefore concentrates on dehazing for better road visibility. Qualitative and quantitative comparison with existing methods in terms of color fidelity and contrast reveals that the proposed method is better at restoring color fidelity and enhancing contrast.
General Terms: Haze, air light, contrast, color fidelity.
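For readers who want to experiment with the pipeline sketched in the abstract, the following is a minimal illustration assuming grayscale input and using OpenCV and NumPy. Min-max stretching and CLAHE stand in for the unspecified contrast adjustment and contrast normalization steps, and the file names are placeholders:

```python
# Minimal sketch of PCA-based fusion dehazing (an illustration under
# assumed contrast operators, not the paper's exact implementation).
import cv2
import numpy as np

def pca_fuse(img1, img2):
    """Weight the two derived images by the leading principal component
    of their joint pixel distribution, then blend them."""
    data = np.stack([img1.ravel(), img2.ravel()]).astype(np.float64)
    cov = np.cov(data)                       # 2x2 covariance matrix
    _, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    v = np.abs(eigvecs[:, -1])               # leading eigenvector
    w = v / v.sum()                          # normalized fusion weights
    fused = w[0] * img1.astype(np.float64) + w[1] * img2.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)

hazy = cv2.imread("hazy_road.png", cv2.IMREAD_GRAYSCALE)        # placeholder path
stretched = cv2.normalize(hazy, None, 0, 255, cv2.NORM_MINMAX)  # contrast adjustment
equalized = cv2.createCLAHE(clipLimit=2.0).apply(hazy)          # contrast normalization
cv2.imwrite("dehazed_road.png", pca_fuse(stretched, equalized))
```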