Autonomous vehicles offer the potential to drastically reduce the number and severity of road accidents. Most accidents are caused by human inattention or poor decisions, factors that autonomous vehicles can eliminate. However, not all accidents are avoidable through automation. Complying with the law is not always enough: environmental conditions (bad weather, poor road surface, etc.) can cause accidents, and other actors (human drivers, pedestrians) can make mistakes. These are unexpected situations, and the real-time sensors of vehicles are currently limited in their ability to detect them (a slippery road surface, for example) in time and deliver a programmed response to the danger. This paper presents a method based on the analysis of historical accident records to find the danger zones of public road networks. A further statistical approach is used to identify the significant risk factors of these zones; this data can be built into the control algorithms of autonomous vehicles to prepare for such situations and avoid, or at least reduce the severity of, potential incidents. It is concluded that the proposed method can find the black spots of a given road section and suggest the main local risk factors.
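The black-spot idea above can be illustrated with a minimal sketch: bin historical accident coordinates into a spatial grid and flag cells whose accident count exceeds a threshold. The grid size, the threshold, and the function name `find_black_spots` are illustrative assumptions, not the paper's calibrated method.

```python
from collections import Counter

def find_black_spots(records, cell_deg=0.01, min_count=3):
    # Bin each accident's (lat, lon) into a square grid cell and
    # count accidents per cell; cells reaching `min_count` are
    # flagged as potential black spots. Cell size and threshold
    # here are placeholder values for illustration only.
    counts = Counter((round(lat / cell_deg), round(lon / cell_deg))
                     for lat, lon in records)
    return {cell for cell, n in counts.items() if n >= min_count}

# Three accidents cluster in one cell; a fourth lies far away.
records = [(47.500, 19.040), (47.501, 19.041),
           (47.502, 19.042), (48.000, 20.000)]
print(find_black_spots(records))  # only the dense cell is flagged
```

A real analysis would follow this with per-zone statistics (weather, time of day, surface condition) to estimate the local risk factors the abstract mentions.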
Image-based instance recognition is a difficult problem, in some cases even for the human eye. While the latest developments in computer vision, mostly driven by deep learning, have shown that high-performance models for classification or categorization can be engineered, discriminating between similar objects from a low number of samples remains challenging. Advances from multi-class classification carry over to object matching problems, since the feature extraction techniques are the same: nature-inspired multi-layered convolutional networks learn the representations, and the output of such a model maps them to a multidimensional embedding space. A metric-based loss brings embeddings of the same instance close to each other. While these solutions achieve high classification performance, their efficiency is limited by the memory cost of the large number of parameters, which grows with the input image size. Shrinking the input reduces the number of trainable parameters, but performance decreases as well. This drawback can be tackled with compressed feature extraction, e.g., projections. In this paper, a multi-directional image projection transformation with fixed vector lengths (MDIPFL) is applied to one-shot recognition tasks, trained with Siamese and Triplet architectures. Results show that the MDIPFL-based approach achieves decent performance despite its significantly lower number of parameters.
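The metric-based loss mentioned above can be sketched as the standard triplet loss: pull an anchor embedding toward a same-instance (positive) embedding while pushing it at least a margin away from a different-instance (negative) one. This is a generic numpy sketch, not the paper's implementation; the margin value is an assumption.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Squared Euclidean distances in the embedding space.
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    # Loss is zero once the negative is `margin` farther than the
    # positive; otherwise it penalizes the violation linearly.
    return max(d_pos - d_neg + margin, 0.0)

a = np.array([0.0, 0.0])
p = np.array([0.0, 0.1])   # same instance, nearby embedding
n = np.array([1.0, 0.0])   # different instance, far away
print(triplet_loss(a, p, n))  # 0.0: constraint already satisfied
```

A Siamese architecture uses the same shared encoder with a pairwise (contrastive) loss instead of triplets; both drive same-instance embeddings together, as the abstract describes.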