Achieving good recognition results with a single method for text lines in video or natural scene images captured by high-resolution cameras or low-resolution mobile cameras, and in images from web pages, is often hard. In this paper, we propose new sharpness-based features, computed over the textual portion of each input text line image in HSI color space, for classifying an input image into one of four classes (video, scene, mobile, or born-digital). This classification helps in choosing an appropriate method based on the class of the input text, improving its recognition rate. For a given input text line image, the proposed method obtains the H, S, and I images. Canny edge images are then computed for the H, S, and I spaces, which yields text candidates. A sliding window is applied over the text candidate image of each text line in each color space to estimate a new sharpness measure from stroke width and gradient information. The sharpness values of the text lines in the three color spaces are fed to k-means clustering seeded with maximum, minimum, and average guesses, producing three clusters per color space. The mean of each cluster in each color space yields a feature vector of nine values, which is classified with an SVM. Experimental results on standard datasets, namely ICDAR 2013, ICDAR 2015 video, ICDAR 2015 natural scene, and ICDAR 2013 born-digital data, together with images captured by a mobile camera (our own data), show that the proposed classification method helps improve recognition results.
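The clustering and feature-construction step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the per-window sharpness values for each HSI channel have already been computed (the stroke-width and gradient estimation is not reproduced here), and the function names `kmeans_1d` and `feature_vector` are illustrative.

```python
import numpy as np

def kmeans_1d(values, n_iter=20):
    """Cluster 1-D sharpness values into three clusters seeded with the
    maximum, minimum, and average of the values, and return the mean of
    each cluster (here the final cluster centers)."""
    values = np.asarray(values, dtype=float)
    # Seeds: the maximum, minimum, and average guesses.
    centers = np.array([values.max(), values.min(), values.mean()])
    for _ in range(n_iter):
        # Assign each value to its nearest center.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for k in range(3):
            members = values[labels == k]
            if members.size:  # keep the old center if a cluster empties
                centers[k] = members.mean()
    return centers

def feature_vector(sharp_h, sharp_s, sharp_i):
    """Concatenate the three cluster means of each HSI channel into a
    nine-dimensional feature vector for the SVM classifier."""
    return np.concatenate([kmeans_1d(sharp_h),
                           kmeans_1d(sharp_s),
                           kmeans_1d(sharp_i)])
```

The resulting nine-dimensional vector would then be passed to an SVM (e.g. scikit-learn's `sklearn.svm.SVC`) trained on the four class labels.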