2017
DOI: 10.1186/s13640-017-0175-4
Glyph-based video visualization on Google Map for surveillance in smart cities

Abstract: Video visualization (VV) is considered an essential part of multimedia visual analytics. The enormous volume of camera footage raises many challenges that can be addressed with data analytics, and the field is therefore gaining importance. Moreover, the rapid advancement of digital technologies has produced an explosion of video data, which stimulates the need for creating computer graphics and visualizations from videos. Particularly, in the paradigm of smart cities, video surveillance as a widely a…

Cited by 13 publications (12 citation statements)
References 38 publications
“…These include one class as an R‐tree index based on the visual field (Wu et al, 2015), the determination of camera‐by‐camera topological relationships (Cho, Park, Kim, Lee, & Yoon, 2017), and the analysis of the field of view of the camera. Another method realizes the organization of multi‐camera video data by associating factors such as the moving object’s texture (Jian, Liao, Fan, & Xue, 2017), spatiotemporal behavior (Loy, Xiang, & Gong, 2010), and semantic aspects (Mehboob et al, 2017).…”
Section: Related Work
confidence: 99%
“…Mehboob et al [12] propose an algorithm for 3D conversion from traffic video content to Google Map. Time-stamped glyph-based visualization is used in outdoor surveillance videos for the algorithm, which can be used for event-aware detection.…”
Section: Related Work
confidence: 99%
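The time-stamped, glyph-based visualization described in the statement above can be illustrated with a minimal sketch: detected events are converted into map glyphs anchored near the camera's geographic position, each carrying its timestamp as a label. This is an illustrative approximation, not the cited algorithm; the event-record layout and function names are our own assumptions.

```python
import math
from dataclasses import dataclass

# Hypothetical event record (not from the paper): local (east, north)
# offset in metres from the camera, a timestamp string, and an event type.

@dataclass
class Glyph:
    lat: float
    lon: float
    label: str

def events_to_glyphs(events, camera_lat, camera_lon):
    """Place one time-stamped glyph per detected event on the map.

    Uses a flat-earth approximation: 1 degree of latitude is about
    111,320 m, with longitude scaled by cos(latitude). Adequate for a
    single camera's field of view (tens of metres)."""
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians(camera_lat))
    glyphs = []
    for east_m, north_m, timestamp, kind in events:
        glyphs.append(Glyph(
            lat=camera_lat + north_m / m_per_deg_lat,
            lon=camera_lon + east_m / m_per_deg_lon,
            label=f"{kind} @ {timestamp}",
        ))
    return glyphs
```

The resulting `Glyph` records could then be rendered as markers via any map API (e.g. Google Maps), which is the display layer the cited work targets.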
“…In some of these methods (e.g., view-based R-tree [3] and camera-based topology indexing [30]), video data organization is analyzed by examining the camera field of view. The other methods used moving object texture association [31], spatial-temporal behavior association [32], and semantic association [33]. A suitable mapping method must be selected to project the video on to the virtual scene model to integrate videos with geospatial information [34,35].…”
Section: Related Work
confidence: 99%
“…According to different mapping methods, the information fusion methods of surveillance video and virtual scene are divided into two categories: GIS-video image fusion (image projection) [37] and GIS-video moving object fusion (object projection) [38]. The implementation forms of GIS-video image fusion, including video image linked search analysis [4] and videos that are projected to the geographic scene [33], are easy to implement but lack the ability to analyze and understand video image contents. The object projection method extracts video semantic objects from the original video through object detection.…”
Section: Related Work
confidence: 99%
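The projection step common to both statements above (mapping video content onto a geographic scene) is often realized with a planar homography. The following sketch assumes a flat ground plane in view and at least four known pixel-to-geographic correspondences; it is a generic direct linear transform (DLT) illustration, not the implementation of any of the cited papers.

```python
import numpy as np

def fit_homography(pixels, geo):
    """Estimate the 3x3 planar homography H mapping image pixels to
    ground-plane coordinates (e.g. lon/lat) from >= 4 point
    correspondences, using the standard DLT formulation."""
    rows = []
    for (x, y), (u, v) in zip(pixels, geo):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of the stacked constraint matrix,
    # taken as the last right-singular vector of its SVD.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def project(H, x, y):
    """Map one pixel to ground coordinates (with perspective division)."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Under the object-projection scheme, a detector would supply the pixel footprint of each semantic object (e.g. the base of a vehicle's bounding box), and `project` would place it in the GIS scene.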