Vision-based human action and activity recognition is of growing importance in the computer vision community, with applications in visual surveillance, video retrieval and human-computer interaction. In recent years, more and more datasets dedicated to human action and activity recognition have been created. The use of these datasets allows different recognition systems to be compared on the same input data. The survey presented in this paper aims to fill the gap left by the lack of a complete description of the most important public datasets for video-based human activity and action recognition, and to guide researchers in selecting the most suitable dataset for benchmarking their algorithms.
This article introduces a new, unobtrusive wearable monitoring device based on electrodermal activity (EDA) for use in health-related computing systems. The device acquires a subject's EDA in order to detect his/her calm/distress condition from the acquired physiological signals. The lightweight wearable device is placed on the wrist of the subject to allow continuous physiological measurement. To validate the correct operation of the wearable EDA device, pictures from the International Affective Picture System were used in a controlled experiment involving fifty participants. The collected signals are processed, features are extracted and a statistical analysis is performed on the calm/distress classification. The results show that the wearable device, relying solely on EDA signal processing, achieves around 89% accuracy when distinguishing the calm condition from the distress condition.
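The processing pipeline described above — acquire an EDA trace, extract features, then classify — can be sketched as follows. This is a minimal illustrative example, not the paper's actual feature set: the feature names, the peak-detection rule and the threshold value are assumptions made for the sketch.

```python
# Hypothetical sketch of EDA feature extraction for calm/distress
# classification. Feature choices and the peak threshold are
# illustrative assumptions, not values from the paper.

def extract_eda_features(signal, peak_threshold=0.05):
    """Compute simple statistical features from an EDA trace (microsiemens)."""
    n = len(signal)
    mean = sum(signal) / n
    variance = sum((x - mean) ** 2 for x in signal) / n
    # Count skin-conductance-response (SCR) peaks: local maxima whose
    # rise above the preceding sample exceeds the threshold.
    scr_peaks = sum(
        1
        for i in range(1, n - 1)
        if signal[i] > signal[i - 1] + peak_threshold and signal[i] >= signal[i + 1]
    )
    return {"mean": mean, "variance": variance, "scr_peaks": scr_peaks}

# Example: a short synthetic trace containing two abrupt conductance rises.
trace = [0.40, 0.41, 0.55, 0.42, 0.41, 0.60, 0.43, 0.42]
features = extract_eda_features(trace)
```

In a real system, features like these would be computed over sliding windows of the continuous wrist measurement and fed to a classifier trained on the calm/distress labels.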
This study presents an approach to measuring the levels of acute stress in humans by analysing their behavioural patterns when interacting with technological devices. We study the effects of stress on eight behavioural, physical and cognitive features. The data was collected with the participation of 19 users in different phases, with different levels of stress induced. A non-parametric statistical hypothesis test is used to determine which features show statistically significant differences, for each user, when under stress. It is shown that the features most related to stress are the acceleration and the mean and maximum intensity of the touch. It is also shown that each user is affected by stress in a specific way. Moreover, the entire process of estimating stress is carried out in a non-invasive way. This work constitutes the foundation of a context layer for a virtual environment for conflict resolution. The main objective is to overcome some of the main drawbacks of communicating online, namely the lack of contextual information such as body language or gestures.
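The per-user comparison described above — testing whether a behavioural feature differs significantly between phases — can be illustrated with a non-parametric statistic. The abstract does not name the exact test used, so the Mann-Whitney U statistic is shown here as a representative example, with made-up touch-intensity samples.

```python
# Hedged illustration of a non-parametric two-sample comparison, as
# used in the study to test per-user feature differences under stress.
# The specific test and the sample values are assumptions for this sketch.

def mann_whitney_u(sample_a, sample_b):
    """Return the Mann-Whitney U statistic for sample_a vs sample_b."""
    u = 0.0
    for a in sample_a:
        for b in sample_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5  # ties contribute half
    return u

# Mean touch intensity, baseline phase vs stressed phase (illustrative).
baseline = [0.31, 0.28, 0.35, 0.30]
stressed = [0.44, 0.41, 0.47, 0.39]
u = mann_whitney_u(stressed, baseline)
# U equal to len(stressed) * len(baseline) means every stressed value
# exceeds every baseline value -- the strongest possible separation.
```

In practice one would convert U to a p-value (e.g. via `scipy.stats.mannwhitneyu`) and apply the test independently per user and per feature, matching the per-user analysis the study reports.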
Perceiving the environment is crucial in any application related to mobile robotics research. In this paper, a new approach to real-time human detection is introduced, based on processing video captured by a thermal infrared camera mounted on the autonomous mobile platform mSecurit™. The approach starts with a static analysis phase that detects human candidates through classical image processing techniques such as image normalization and thresholding. Then, the proposal continues with a dynamic image analysis phase based on optical flow or image difference. Optical flow is used when the robot is moving, whilst image difference is the preferred method when the mobile platform is still. The results of both phases are combined to enhance human segmentation in the infrared imagery. Indeed, optical flow or image difference will emphasize the foreground hot-spot areas obtained in the initial human candidate detection.
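The two-stage idea above — a static hot-spot detection refined by a dynamic motion cue — can be sketched on synthetic data. This is a minimal sketch only: the array sizes, the 0.8 relative threshold and the image-difference threshold are illustrative assumptions, and the paper's optical-flow branch (used when the robot moves) is omitted here in favour of the stationary-platform image-difference branch.

```python
# Minimal sketch of the static + dynamic detection pipeline on
# synthetic "thermal" frames. Thresholds and data are illustrative
# assumptions, not values from the paper.
import numpy as np

def hot_spot_mask(frame, rel_threshold=0.8):
    """Static phase: normalize a thermal frame to [0, 1], keep hottest pixels."""
    lo, hi = frame.min(), frame.max()
    norm = (frame - lo) / (hi - lo)
    return norm >= rel_threshold

def motion_mask(prev_frame, frame, diff_threshold=10):
    """Dynamic phase (still platform): image difference between frames."""
    return np.abs(frame.astype(int) - prev_frame.astype(int)) >= diff_threshold

# Two synthetic 4x4 thermal frames; a hot blob moves one pixel right.
prev_frame = np.array([[20, 20, 20, 20],
                       [20, 90, 20, 20],
                       [20, 20, 20, 20],
                       [20, 20, 20, 20]], dtype=np.uint8)
frame = np.array([[20, 20, 20, 20],
                  [20, 20, 90, 20],
                  [20, 20, 20, 20],
                  [20, 20, 20, 20]], dtype=np.uint8)

# Combine both phases: a human candidate must be hot AND moving.
candidates = hot_spot_mask(frame) & motion_mask(prev_frame, frame)
```

When the platform is moving, the image-difference mask would be replaced by an optical-flow magnitude mask (e.g. OpenCV's dense optical flow), as the paper describes.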
Model-driven engineering (MDE), implicitly based upon meta-model principles, is gaining more and more attention in software systems due to its inherent benefits. Its use normally improves the quality of the developed systems in terms of productivity, portability, inter-operability and maintenance. Therefore, its exploitation for the development of multi-agent systems (MAS) emerges in a natural way. In this paper, the agent-oriented software development (AOSD) and MDE paradigms are fully integrated for the development of MAS. Meta-modeling techniques are explicitly used to speed up several phases of the process. The Prometheus methodology is used to validate the proposal. The metaobject facility (MOF) architecture is used as a guideline for developing a MAS editor according to the language provided by the Prometheus methodology. Firstly, an Ecore meta-model for the Prometheus language is developed; Ecore is a powerful tool for designing model-driven architectures (MDA). Next, facilities provided by the Graphical Modeling Framework (GMF) are used to generate the graphical editor, which offers support for developing agent models conforming to the specified meta-model. Afterwards, it is also described how an agent code generator can be developed; in this way, code is automatically generated using as input the model specified with the graphical editor. A case study validates the method, put into practice for the development of a multi-agent surveillance system.