A novel approach is presented for locating intersections in a video sequence captured from a moving vehicle. More specifically, we propose a Bayesian network that combines evidence extracted from the video sequence with evidence from a database, fusing the evidence from various sensors in a systematic manner to locate intersections robustly.
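The abstract does not give the network's structure, but the core idea of combining independent pieces of sensor evidence under a Bayesian model can be sketched as a simple odds-form update. The function name, the conditional-independence assumption, and all numbers below are illustrative, not taken from the paper:

```python
# Illustrative sketch only: naive Bayes fusion of independent evidence
# sources for a binary "intersection present" hypothesis. This is NOT
# the authors' actual Bayesian network, just the underlying principle.

def fuse_evidence(prior: float, likelihood_ratios: list[float]) -> float:
    """Posterior P(intersection | evidence) under conditional independence.

    Each likelihood ratio is P(e_i | intersection) / P(e_i | no intersection),
    e.g. one ratio from a visual detector and one from a map database match.
    """
    odds = prior / (1.0 - prior)       # convert prior probability to odds
    for lr in likelihood_ratios:
        odds *= lr                     # multiply in each independent source
    return odds / (1.0 + odds)         # convert odds back to probability

# Example: weak prior (0.2), strong visual cue (LR=4.0), mild map match (LR=1.5)
posterior = fuse_evidence(0.2, [4.0, 1.5])  # → 0.6
```

In a full Bayesian network the sources need not be conditionally independent; this odds-product form is the degenerate naive case, shown only to make the evidence-combination idea concrete.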
As Augmented Reality Navigation (ARN) systems receive increasing attention as next-generation navigation systems, the layout of screen elements and the appropriate level of displayed information are critical concerns. To address this, a series of usability experiments was designed. The results for screen arrangement showed that displaying the MBN screen on the left and the ARN screen on the right was preferred most, while the results for changing lanes showed that displaying information directly on the road and highlighting the path was the best method. For switching direction, using the upper icons and displaying information directly on the road was preferred. The adequate amount of information was found to be an average of 3 POIs with a 20 pt font size for ARN displays and 10 POIs with an 18 pt font size for MBN displays. Lastly, the eye-tracking experiments showed that the ARN screen was viewed more often, at a ratio of 7:3.