In recent years, various studies have examined the prediction of crime occurrences. Such predictive capability can assist crime prevention by enabling more effective deployment of police patrols. Previous studies have used data from multiple domains such as demographics, economics, and education, but their prediction models treat data from different domains equally. These methods struggle to discover highly nonlinear relationships, redundancies, and dependencies among multiple datasets. To enhance crime prediction models, we consider environmental context information, motivated by broken windows theory and crime prevention through environmental design. In this paper, we propose a feature-level data fusion method with environmental context based on a deep neural network (DNN). Our dataset consists of crime statistics, demographic and meteorological data, and images collected from various online databases for Chicago, Illinois. Before generating training data, we select crime-related data by conducting statistical analyses. Finally, we train our DNN, which consists of four kinds of layers: spatial, temporal, environmental-context, and joint feature representation layers. By combining key data extracted from multiple domains and statistically analyzing their redundancy, our fusion DNN supports an efficient decision-making process. Experimental results show that our DNN model predicts crime occurrence more accurately than other prediction models.
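The four-layer-type fusion architecture described above can be sketched as follows. This is a minimal illustrative sketch, assuming PyTorch; the layer sizes, branch depths, and input dimensions are hypothetical assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch of a feature-level fusion DNN: separate branches
# for spatial, temporal, and environmental-context features are
# concatenated into a joint feature representation layer.
# All dimensions below are illustrative assumptions.
import torch
import torch.nn as nn

class FusionDNN(nn.Module):
    def __init__(self, spatial_dim=16, temporal_dim=8, context_dim=32, hidden=64):
        super().__init__()
        self.spatial = nn.Sequential(nn.Linear(spatial_dim, hidden), nn.ReLU())
        self.temporal = nn.Sequential(nn.Linear(temporal_dim, hidden), nn.ReLU())
        self.context = nn.Sequential(nn.Linear(context_dim, hidden), nn.ReLU())
        # Joint feature representation layer fuses the three branches.
        self.joint = nn.Sequential(
            nn.Linear(3 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),  # probability of crime occurrence
        )

    def forward(self, xs, xt, xc):
        z = torch.cat([self.spatial(xs), self.temporal(xt), self.context(xc)], dim=1)
        return self.joint(z)

model = FusionDNN()
prob = model(torch.randn(4, 16), torch.randn(4, 8), torch.randn(4, 32))
print(prob.shape)  # torch.Size([4, 1])
```

Fusing at the feature level (rather than concatenating raw inputs) lets each branch learn a domain-specific representation before the joint layer models cross-domain interactions.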
In recent years, driver drowsiness and distraction have been major factors in a large number of accidents because they reduce the driver's perception level and decision-making capability, which negatively affects the ability to control the vehicle. One way to reduce such accidents is to monitor the driver and driving behavior and alert the driver when drowsy or distracted. In addition, the ability to predict unsafe driving behavior in advance would also contribute to safe driving. In this paper, we discuss various methods for monitoring driver and driving behavior as well as for predicting unsafe driving behaviors. With respect to measuring driver drowsiness, we discuss visual and non-visual features of driver behavior, as well as driving performance measures related to vehicle-based features. Visual feature measurements such as eye-related measurements, yawning detection, and facial expressions are discussed in detail. For non-visual features, we explore various physiological signals and possible drowsiness detection methods that use them. For vehicle-based features, we describe steering wheel movement and the standard deviation of lateral position. To detect driver distraction, we describe head pose and gaze direction methods. To predict unsafe driving behavior, we explain prediction methods based on facial expressions and car dynamics. Finally, we discuss several issues to be tackled for active driver safety systems: 1) hybrid measures for drowsiness detection, 2) driving context awareness for safe driving, and 3) the need for public datasets of simulated and real driving conditions.
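To make the vehicle-based measure concrete, the standard deviation of lateral position (SDLP) mentioned above can be computed as below. This is an illustrative sketch, not taken from the paper; the 0.25 m alert threshold is a hypothetical assumption.

```python
# Illustrative sketch: standard deviation of lateral position (SDLP),
# a vehicle-based drowsiness indicator. Larger lateral drift within the
# lane tends to indicate degraded lane keeping.
import statistics

def sdlp(lateral_positions):
    """Population standard deviation of lateral lane position (meters)."""
    return statistics.pstdev(lateral_positions)

def is_drowsy(lateral_positions, threshold_m=0.25):
    # The 0.25 m threshold is a hypothetical assumption for illustration.
    return sdlp(lateral_positions) > threshold_m

steady = [0.02, -0.01, 0.03, 0.00, -0.02]    # tight lane keeping
weaving = [0.4, -0.5, 0.6, -0.3, 0.5, -0.6]  # large lateral drift
print(is_drowsy(steady), is_drowsy(weaving))  # False True
```

In practice such a single measure is noisy, which is exactly why the survey argues for hybrid measures that combine visual, physiological, and vehicle-based features.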
To understand driving environments effectively, sensor-based intelligent vehicle systems must accurately detect and classify objects. Object detection localizes objects, whereas object classification recognizes object classes from the detected object regions. For accurate object detection and classification, fusing information from multiple sensors is a key component of the representation and perception processes. In this paper, we propose a new object detection and classification method using decision-level fusion. We fuse the classification outputs of independent unary classifiers operating on 3D point clouds and image data, each based on a convolutional neural network (CNN). The unary classifier for each sensor is a five-layer CNN that uses more than two pre-trained convolutional layers to capture local-to-global features as the data representation. To represent data using convolutional layers, we apply region-of-interest (ROI) pooling to the outputs of each layer on the object candidate regions generated by object proposal generation, which realizes color flattening and semantic grouping for the charge-coupled device (CCD) and Light Detection And Ranging (LiDAR) sensors. We evaluate our proposed method on the KITTI benchmark dataset for detecting and classifying three object classes: cars, pedestrians, and cyclists. The evaluation results show that the proposed method achieves better performance than previous methods. Our proposed method extracts approximately 500 proposals on a 1226×370 image, whereas the original selective search method extracts approximately 10⁶×n proposals. We obtained a classification performance of 77.72% mean average precision over all classes at the moderate detection level of the KITTI benchmark dataset.
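The decision-level fusion idea above can be sketched as combining per-class scores from the two independent unary classifiers. This is a minimal illustration with assumed weighted averaging; the actual fusion rule, weights, and scores in the paper may differ.

```python
# Minimal sketch of decision-level fusion: class scores from two
# independent unary classifiers (image CNN and point-cloud CNN) are
# combined per class, and the fused scores decide the label.
# The weights and example scores are illustrative assumptions.
CLASSES = ["car", "pedestrian", "cyclist"]

def fuse(camera_scores, lidar_scores, w_camera=0.6, w_lidar=0.4):
    fused = [w_camera * c + w_lidar * l
             for c, l in zip(camera_scores, lidar_scores)]
    return CLASSES[fused.index(max(fused))], fused

camera = [0.7, 0.2, 0.1]  # scores from the image-branch classifier
lidar  = [0.5, 0.1, 0.4]  # scores from the point-cloud-branch classifier
label, fused = fuse(camera, lidar)
print(label)  # car
```

Fusing at the decision level keeps the two sensor pipelines independent, so one modality can compensate when the other is unreliable (e.g., LiDAR at night, camera in sparse point-cloud regions).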
This study describes an experiment in which 126 participants, using a mobile telephone simulation that included a visual display, engaged in a discussion requiring self-disclosure and affective evaluation of the other participant. Participants in same-gender and mixed-gender dyads were represented by avatars that varied in visual realism (unmodified video, modified video, graphic display, or no visual display) and behavioral realism (static versus dynamic or animated visual display). Participants subsequently rated the Perceived Social Richness of the Medium and their Interactant Satisfaction with the conversation; Interactant Satisfaction was a new measure of social presence created to tap emotional and affective evaluations. Participants rated devices with higher-realism and more behaviorally realistic avatars as more capable of effective social interaction, but their actual perceptions of the affective dimensions of their conversational partner were essentially unaffected by the visual representations.