2019
DOI: 10.1186/s12942-019-0193-9
Comparative analysis of computer-vision and BLE technology based indoor navigation systems for people with visual impairments

Abstract: Background: A considerable number of indoor navigation systems have been proposed to inform people with visual impairments (VI) about their surroundings. These systems leverage several technologies, such as computer vision, Bluetooth Low Energy (BLE), and other techniques, to estimate a user's position in indoor areas. Computer-vision-based systems use several techniques, including matching pictures, classifying captured images, and recognizing visual objects or visual markers. BLE-based systems utilize BLE beacons…

Cited by 34 publications (12 citation statements)
References 48 publications
“…The convolutional layers are responsible to extract significant feature maps from the inputs by computing the weighted sum (51,52). The extracted feature maps are then passed through the activation functions, and a bias is added to obtain an output.…”
Section: Convolutional Neural Network
confidence: 99%
“…CamNav, a vision-based navigation system captures images to recognize the current location of users without any indoor localization devices installed. To improve the accuracy of localization a combination of scale-invariant feature transform (SIFT) and multi-scale local binary pattern (MSLBP) features were used (11).…”
Section: Wearable Devices Based On Various Sensors
confidence: 99%
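The CamNav statement above mentions combining SIFT descriptors with multi-scale local binary pattern (MSLBP) features for localization. As a rough illustration of the MSLBP part only, here is a minimal NumPy sketch that computes 8-neighbour LBP codes at several radii and concatenates their histograms; the function names, radii, and sampling layout are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def lbp_codes(img, radius=1):
    """Basic 8-neighbour LBP: each interior pixel becomes an 8-bit code
    where bit k is set if the k-th neighbour at the given radius is >=
    the centre pixel."""
    h, w = img.shape
    r = radius
    centre = img[r:h - r, r:w - r]
    # eight sampling offsets (diagonals and axes) at this radius
    offsets = [(-r, -r), (-r, 0), (-r, r), (0, r),
               (r, r), (r, 0), (r, -r), (0, -r)]
    codes = np.zeros(centre.shape, dtype=np.int64)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[r + dy:h - r + dy, r + dx:w - r + dx]
        codes |= (neigh >= centre).astype(np.int64) << bit
    return codes

def mslbp_histogram(img, radii=(1, 2, 3)):
    """Multi-scale LBP descriptor: concatenate the normalised 256-bin
    LBP-code histograms computed at several radii."""
    feats = []
    for r in radii:
        hist = np.bincount(lbp_codes(img, r).ravel(), minlength=256)
        feats.append(hist / hist.sum())
    return np.concatenate(feats)

img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.int64)
desc = mslbp_histogram(img)
print(desc.shape)  # (768,): one 256-bin histogram per radius
```

In a system like the one described, such a descriptor would be concatenated with SIFT-based features and fed to a classifier that predicts the current location.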
“…Immediately after the input layer, convolutional layers are defined with the number of filters, filter window size, stride, padding and activation as the parameters. Convolutional layers are used to extract meaningful feature maps for the input location by calculating the weighted sum [ 17 , 18 ]. Then, each feature map is passed through an activation function, and bias is added to form the output.…”
Section: Introduction
confidence: 99%
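The citing papers describe a convolutional layer as a sliding weighted sum of the input with each filter, combined with a bias and an activation function. A minimal single-channel NumPy sketch of that computation follows, using the conventional ordering (bias added before the activation); the filter values, ReLU choice, and valid padding are illustrative assumptions:

```python
import numpy as np

def conv2d_relu(x, kernels, bias, stride=1):
    """One convolutional layer: each output feature map is the sliding
    weighted sum of the input with one filter, plus that filter's bias,
    passed through a ReLU activation. Valid padding, single channel."""
    kh, kw = kernels.shape[1:]
    h, w = x.shape
    oh = (h - kh) // stride + 1
    ow = (w - kw) // stride + 1
    out = np.zeros((kernels.shape[0], oh, ow))
    for f, k in enumerate(kernels):
        for i in range(oh):
            for j in range(ow):
                patch = x[i * stride:i * stride + kh,
                          j * stride:j * stride + kw]
                out[f, i, j] = np.sum(patch * k) + bias[f]  # weighted sum
    return np.maximum(out, 0.0)  # ReLU activation

x = np.arange(25, dtype=float).reshape(5, 5)
kernels = np.stack([np.ones((3, 3)) / 9.0,    # averaging filter
                    -np.ones((3, 3)) / 9.0])  # negated averaging filter
bias = np.array([0.0, 0.0])
maps = conv2d_relu(x, kernels, bias)
print(maps.shape)  # (2, 3, 3): two feature maps from two filters
```

The second filter produces only negative weighted sums on this non-negative input, so ReLU zeroes its entire feature map, while the first filter yields local averages.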