Abstract: The problem discussed in this research concerns the functionality of the graphical user interface (GUI) of global web mapping services displayed on different devices. Displaying a mapping service on devices with different screen sizes causes the graphical user interface to adapt to the screen size. This adaptation is the result of the responsive design technique, which enables the same web content to be displayed on different devices. Eight global web mapping services: Google Maps, …
“…GeoJSON encodes geographic data in JSON format [64]. It is a simple and lightweight data format, which is adapted to work with many map libraries and services, such as Leaflet [65], OpenLayers, MapBox, and Cesium. It also supports different geographic coordinate systems and features [66].…”
Section: Sta Model
confidence: 99%
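The GeoJSON format described in the quoted passage can be illustrated with a minimal sketch. The feature below is hypothetical example data (not drawn from the cited work); it builds a single GeoJSON `Feature` of the kind that libraries such as Leaflet, OpenLayers, MapBox, and Cesium can consume, using only the standard `json` module.

```python
import json

# Minimal sketch of a GeoJSON Feature. The coordinates and properties are
# illustrative assumptions; GeoJSON orders coordinates as [longitude, latitude].
feature = {
    "type": "Feature",
    "geometry": {
        "type": "Point",
        "coordinates": [21.0122, 52.2297],  # lon/lat of an example location
    },
    "properties": {"name": "Example point"},
}

# Serialize to a GeoJSON text that a web map library could load directly.
geojson_text = json.dumps(feature)

# Round-trip to confirm the structure survives encoding.
parsed = json.loads(geojson_text)
```

Because GeoJSON is plain JSON, this lightweight round-trip is all that is needed to exchange features between a backend and a browser-based map client.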
Emerging deep learning (DL) approaches with edge computing have enabled the automation of rich information extraction, such as complex events from camera feeds. Due to the low speed and accuracy of object detection, some objects are missed and not detected. As objects constitute simple events, missing objects result in missing simple events, thus reducing the number of detected complex events. As the main objective of this paper, an integrated cloud and edge computing architecture was designed and developed to reduce missing simple events. To achieve this goal, we deployed multiple smart cameras (i.e., cameras which connect to the Internet and are integrated with computerised systems such as the DL unit) in order to detect complex events from multiple views. Having more simple events from multiple cameras can reduce missing simple events and increase the number of detected complex events. To evaluate the accuracy of complex event detection, the F-score of risk behaviour regarding COVID-19 spread events in video streams was used. The experimental results demonstrate that this architecture delivered 1.73 times higher accuracy in event detection than that delivered by an edge-based architecture that uses one camera. The average event detection latency for the integrated cloud and edge architecture was 1.85 times higher than that of the single-camera architecture. However, this finding was insignificant with regard to the current case study. Moreover, the accuracy of the architecture for complex event matching with more spatial and temporal relationships showed significant improvement in comparison to the edge computing scenario. Finally, complex event detection accuracy considerably depended on object detection accuracy. Regression-based models, such as you only look once (YOLO), were able to provide better accuracy than region-based models.
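The abstract evaluates complex event detection with the F-score, which combines precision and recall. The following sketch shows the standard F1 computation from true-positive, false-positive, and false-negative counts; the counts used in the usage example are hypothetical and not taken from the paper.

```python
def f_score(tp: int, fp: int, fn: int) -> float:
    """Standard F1 score: harmonic mean of precision and recall.

    tp, fp, fn are counts of true positives, false positives,
    and false negatives (missed events) respectively.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: 8 events correctly detected, 2 spurious, 2 missed.
example = f_score(tp=8, fp=2, fn=2)  # precision = recall = 0.8, so F1 = 0.8
```

Because missed objects translate into missed simple events (false negatives), the recall term makes the F-score directly sensitive to the missing-event problem the architecture is designed to reduce.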
“…Based on the respondents' responses, they redesigned the interface to suit preferences (e.g., they changed button locations, removed navigation arrows, and changed the way of choosing layers). Some researchers [7] have drawn attention to GUI differences arising not only from the placement of individual buttons but also from each map provider having a different graphic style of buttons. Even the same interactive functions, such as wayfinding, may work differently, e.g., by adding waypoints manually or typing the next location.…”
Section: Related Work
confidence: 99%
“…The interaction on the map mainly takes place using the graphical user interface (GUI). It consists of buttons that have specific functions and a symbolic icon [7,8]. The most popular interactive buttons include geolocation, searching, changing layers, and routing [9].…”
The purpose of this article is to show the differences in users’ experience when performing an interactive task with GUI button arrangements based on Google Maps and OpenStreetMap in a simulation environment. The graphical user interface is part of an interactive multimedia map, and the interaction experience depends mainly on it. For this reason, we performed an eye-tracking experiment with users to examine how people experience interaction through the GUI. Based on the results related to eye movement, we present several valuable recommendations for the design of interactive multimedia maps. For better GUI efficiency, it is advisable to group buttons with similar functions in screen corners. Users first analyze corners and only then search for the desired button. The frequency of using a given web map does not translate into generally better performance while using any GUI. Users perform more efficiently if they work with the preferred GUI.
“…One study used eye-tracking to measure how well a mapping prototype mimics real-life applications, stressing the importance of a properly designed user interface when completing tasks online [11]. Web maps that went through a usability assessment also need to have a responsive web design that works on all platforms: desktop, mobile, and laptop, while balancing customization of functionalities [12]. In addition to a well-defined user interface, it is also critical to use a survey instrument (i.e., the system usability scale (SUS) or the participatory GIS usability scale (PGUS)) to ensure that the scale can differentiate between usable and unusable systems [13,14].…”
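The system usability scale (SUS) mentioned above follows a fixed scoring rule: ten items rated 1–5, with odd-numbered (positively worded) items contributing `response − 1` and even-numbered (negatively worded) items contributing `5 − response`, and the sum scaled by 2.5 to a 0–100 range. A minimal sketch, with hypothetical responses:

```python
def sus_score(responses: list[int]) -> float:
    """Score one respondent's system usability scale (SUS) questionnaire.

    `responses` holds ten Likert ratings (1-5), in questionnaire order.
    Odd items are positively worded, even items negatively worded.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for item_number, r in enumerate(responses, start=1):
        if item_number % 2 == 1:          # positively worded item
            total += r - 1
        else:                              # negatively worded item
            total += 5 - r
    return total * 2.5                     # scale 0-40 raw sum to 0-100

# Hypothetical respondent: strongly agrees with positive items,
# strongly disagrees with negative items -> ceiling score of 100.
best_case = sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])
```

Neutral answers (all 3s) land at 50, which is why SUS results are usually interpreted against empirical benchmarks rather than as percentages.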
The Penn State Cancer Initiative implemented LionVu 1.0 (Penn State University, United States) in 2017 as a web-based mapping tool to educate and inform public health professionals about the cancer burden in Pennsylvania and 28 counties in central Pennsylvania, locally known as the catchment area. The purpose of its improvement, LionVu 2.0, was to assist investigators in answering person–place–time questions related to cancer and its risk factors by examining several data variables simultaneously. The primary objective of this study was to conduct a usability assessment of a prototype of LionVu 2.0, which included area- and point-based data. The assessment was conducted through an online survey; 10 individuals, most of whom had a master's or doctorate degree, completed the survey. Although most participants had a favorable view of LionVu 2.0, many had little to no experience with web mapping. Therefore, it was not surprising to learn that participants wanted short 10–15-minute training videos to be available with future releases, as well as a simplified user interface that removes advanced functionality. One unexpected finding was the suggestion of using LionVu 2.0 for teaching and grant proposals. The usability study of the prototype of LionVu 2.0 provided important feedback for its future development.