Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Application 2019
DOI: 10.5220/0007582304760485
Active Object Search with a Mobile Device for People with Visual Impairments

Abstract: Modern smartphones can provide a multitude of services to assist people with visual impairments, and their cameras in particular can be useful for tasks such as reading signs or searching for objects in unknown environments. Previous research has looked at ways to solve these problems by processing the camera's video feed, but very little work has been done on actively guiding the user towards specific points of interest to maximise the effectiveness of the underlying visual algorithms. In this…

Cited by 3 publications (2 citation statements)
References 18 publications
“…Current technological limitations include prohibitive costs, bulky hardware requirements and non-user-friendly interfaces [Golledge et al. 2004; Yusif et al. 2016]. To address these issues, we implemented a handheld mobile system based on a concept proposed by Lock et al. [2019a] and tested by Lock et al. [2019b] using a Google Tango device that is able to localise itself in real time. This system has the benefit of minimal hardware requirements and a compact, familiar form factor, which will help to overcome the hurdle of user acceptance and usability.…”
Section: System Description
confidence: 99%
“…The proposed system was introduced by Lock et al. [2017], who describe the aforementioned autonomous guidance system paired with a co-adaptive human-machine interface that adjusts its own parameters over time to better match the user's perceptual strengths and limitations. In previous work we developed a prototype guidance system that uses active vision and machine learning models to gather information and help a person find objects within an unknown indoor environment, showing that these techniques can indeed be applied successfully to direct a human's attention [Lock et al. 2019a]. In this work, however, we examine the effectiveness of the proposed interface for a search task and investigate a metric that can be used in the next phases of the project to enable the aforementioned co-adaptive paradigm, which could benefit the user experience and boost long-term navigation performance.…”
Section: Introduction
confidence: 99%