2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Workshops
DOI: 10.1109/cvpr.2005.526
Sign Classification using Local and Meta-Features

Cited by 13 publications (7 citation statements). References 15 publications (15 reference statements).

“…Specifically on sign reading it is worth citing the work described in [8], which introduced VIDI (Visual Integration and Dissemination of Information), a prototype system for detecting and recognizing signs, able to communicate their contents with a synthesized voice, and [12], in which the authors describe a set of algorithms for sign detection and recognition for a wearable system to be used by the blind, capable of recognizing a broad variety of signs. Besides sign reading, the general goal of wayfinding was first addressed by proposing slight modifications of the environment in order to produce a system of signs that could be easily read by the user (e.g.…”
Section: Related Work
“…Silapachote et al [16] and Mattar et al [14] propose a sign recognition system for the visually impaired. It is a preliminary study demonstrating that sign recognition is possible, at least within a moderate subset of images.…”
Section: Related Work
“…Table 1 (Recent efforts in academia of text-reading solutions for the VI) compares the year, interface, type of text, response time, adaptation, evaluation, and reported accuracy of Ezaki et al [4], Mattar et al [11]¹, Pazio et al [14], Yi and Tian [24], Shen and Coughlan [18], Kane et al [7], Stearns et al [23], and Shilkrot et al [19]. ¹ This report is of the OCR / text extraction engine alone and not the complete system.…”
Section: Publication
“…Yi and Tian [24] placed a camera on shadeglasses to recognize and synthesize text written on objects in front of them, and Hanif and Prevost [5] did the same while adding a handheld device for tactile cues. Mattar et al are using a head-worn camera [11], while Ezaki et al developed a shoulder-mountable camera paired with a PDA [4]. Differing from these systems, we proposed using the finger as a guide [12], and supporting sequential acquisition of text rather than reading text blocks [19].…”
Section: Wearable Devices