2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops
DOI: 10.1109/cvprw.2010.5543821

VizWiz::LocateIt - enabling blind people to locate objects in their environment

Cited by 102 publications (56 citation statements)
References 15 publications
“…Whether performing real-time optical character recognition to help the blind [1], aggregating and mapping microblog data for disaster relief [2], or labeling and classifying images for ornithology research [3], people can be used to perform signal processing and information processing tasks that are difficult for machines. Although the use of human computation in larger information systems has a long history [4], [5], enabled by modern communication technologies the use of such crowd systems has flourished of late and now forms the basis for many enterprises [6], [7].…”
Section: Introduction (mentioning)
confidence: 99%
“…Most closely related to our work are the systems by Hub et al [8], Caperna et al [3], and Bigham et al [2]. In 2004, Hub et al [8] presented a system that assists blind users in orienting themselves in indoor environments.…”
Section: Related Work (mentioning)
confidence: 81%
“…However, the corresponding evaluation was performed in a simplified scenario, and computer vision was left as a major aspect for future work. Bigham et al. [2] use Speeded Up Robust Features (SURF; see [10]) for object identification, but instead of training an object database (see, e.g., [3]), they send images with user requests (e.g., where is the object in the image?) to Amazon's Mechanical Turk [1], where humans can outline the objects. The outlines of the object can then be used to estimate the object's location in the environment and to guide the user towards the object by informing the user how close he or she is to the target [2].…”
Section: Related Work (mentioning)
confidence: 99%
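The feature-matching step this excerpt describes (match local features between a crowd-outlined object image and a new camera frame, then use the matched region to steer the user) can be sketched roughly as follows. This is only an illustrative sketch, not the paper's implementation: it substitutes ORB for SURF (SURF requires OpenCV's contrib build), and the file names, feature counts, and match threshold are hypothetical placeholders.

```python
# Illustrative sketch only: VizWiz::LocateIt uses SURF, but ORB is used here
# because it ships with stock OpenCV. File names, feature counts, and the
# match threshold are hypothetical placeholders, not values from the paper.
import cv2
import numpy as np


def locate_object(template_path, scene_path, min_matches=10):
    """Return a rough (x, y) position of the template object in the scene, or None."""
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)  # crowd-outlined object crop
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)        # new frame from the phone camera
    if template is None or scene is None:
        raise FileNotFoundError("could not read input images")

    orb = cv2.ORB_create(nfeatures=1000)
    kp_t, des_t = orb.detectAndCompute(template, None)
    kp_s, des_s = orb.detectAndCompute(scene, None)
    if des_t is None or des_s is None:
        return None

    # Brute-force Hamming matching with cross-checking to discard weak matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_t, des_s), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None  # object probably not visible in this frame

    # Rough object position: centroid of the best-matching keypoints in the scene.
    pts = np.float32([kp_s[m.trainIdx].pt for m in matches[:min_matches * 3]])
    cx, cy = pts.mean(axis=0)
    return float(cx), float(cy)  # could drive audio/vibration cues toward the target


if __name__ == "__main__":
    print(locate_object("object_crop.png", "camera_frame.png"))
```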
“…Providing this information is feasible using computer vision. As mentioned in Vázquez and Steinfeld (2012), some mobile applications have been developed to provide information on the size, number and position of photographed objects, and even to direct visually impaired users to focus on objects of interest when taking pictures (e.g., Bigham et al. 2010; Jayant et al. 2011). By incorporating further information, the vibration function of mobile devices could be expected to help them perceive simple graphic information.…”
Section: Discussion (mentioning)
confidence: 99%