2013
DOI: 10.1007/978-3-642-41190-8_4
Mobile Visual Assistive Apps: Benchmarks of Vision Algorithm Performance

Abstract: Although the use of computer vision to analyse images from smartphones is in its infancy, the opportunity to exploit these devices for various assistive applications is beginning to emerge. In this paper, we consider two potential applications of computer vision in the assistive context for blind and partially sighted users. These two applications are intended to help provide answers to the questions of "Where am I?" and "What am I holding?". First, we suggest how to go about providing estimates of the indoor …

Cited by 8 publications (6 citation statements)
References 5 publications
“…Locating the user on a map based on visual cues: The task is to locate the user precisely on the map (within a given radius determined on the basis of GPS output) by identifying landmarks and visual cues in the user-generated live feed and matching these to the tags and images in the semantically enriched local maps. In a pilot study (Rivera-Rubio et al, 2013), conducted within indoor, but highly ambiguous corridors, we have found that with relatively modest processes, paths can be distinguished with reasonable certainty using visual cues alone. In more extensive tests, verified with surveying equipment (Rivera-Rubio et al, 2014), we found that user location on a path can be inferred from hand-held and wearable cameras.…”
Section: Vision Challenges
confidence: 90%
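The localization idea quoted above — inferring a user's position along a pre-recorded path from camera frames — can be sketched as nearest-neighbour matching of per-frame descriptors against a path database. The sketch below is a toy illustration, not the authors' actual method: it uses a simple intensity-histogram descriptor, whereas the cited work evaluates far more sophisticated gradient-based descriptors.

```python
import math

# Hypothetical sketch: each frame of a pre-recorded "visual path" is reduced
# to a descriptor; a live query frame is localized by finding the database
# frame whose descriptor is most similar (cosine similarity).

def histogram_descriptor(frame, bins=8):
    """Normalised intensity histogram of a frame (flat list of 0-255 pixels)."""
    hist = [0.0] * bins
    for px in frame:
        hist[min(px * bins // 256, bins - 1)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def localise(query_frame, path_frames):
    """Return (index along the path, similarity) of the best-matching frame."""
    q = histogram_descriptor(query_frame)
    scores = [cosine(q, histogram_descriptor(f)) for f in path_frames]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]

# Usage with synthetic frames: three path frames of uniform brightness,
# and a query frame close in appearance to the middle one.
path = [[10] * 100, [128] * 100, [250] * 100]
idx, score = localise([130] * 100, path)
```

The returned index is a position estimate along the path, which is the sense in which "user location on a path can be inferred" from visual cues alone; a real system would smooth such per-frame matches over time.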
“…along a specific physical path, relative to start and end point, a person might be. We addressed the first of these questions in previous work (Rivera-Rubio et al, 2013).…”
Section: Visual Paths
confidence: 99%
“…Machine learning and pattern recognition are not new to the geosciences but advances in AI (and related advances in robotics) have a significant potential to impact future workflows. Systems that are being developed to assist the blind through computer visualization and sensor data fusion (Rivera-Rubio et al 2013), security monitoring systems (Choi & Savarese 2014) and biomedical imaging (Mudry et al 2013; Toews & Fig. 11.
Section: Selected Advances
confidence: 99%