We quantitatively measured the smiles of a child with autism spectrum disorder (ASD-C) using a wearable interface device during animal-assisted activities (AAA) over 7 months and compared the results with an age-matched control. The participant was a 10-year-old boy with ASD; a typically developing boy of the same age served as the control. Both voluntarily participated in this study. Neither child had difficulty putting on the wearable device, and both wore it comfortably throughout the experiment (each session lasted about 30-40 min). This study was approved by the Ethical Committee under the rules established by the Institute for Developmental Research, Aichi Human Service Center. The behavior of the participants during AAA was video-recorded and coded by a medical examiner (ME). In both children, the smiles recognized by the ME corresponded with the computer-detected smiles, and positive social behaviors increased as smiles increased. In the ASD-C, negative social behaviors also decreased as smiles increased. These results suggest that guiding the ASD-C into a social environment likely to elicit smiling may facilitate the child's positive social behaviors and reduce his negative social behaviors.
In this paper we present the design of a wearable device that reads positive facial expressions using physiological signals. We first analyze facial morphology in three dimensions and facial electromyographic signals at different facial locations, and show that we can detect high-amplitude electromyographic signals on areas of low facial mobility on the side of the face that are correlated with signals obtained from electrodes at traditional surface electromyographic capturing positions on top of the facial muscles on the front of the face. We use a multi-attribute decision-making method to find adequate electrode positions on the side of the face to capture these signals. Based on this analysis, we design and implement an ergonomic, highly reliable wearable device. Because the signals are recorded distally, the proposed device uses independent component analysis and an artificial neural network to analyze them and achieve a high facial expression recognition rate from the side of the face. The facial expressions recognized through the wearable interface device can be recorded during therapeutic interventions and used for long-term facial expression recognition to quantify and infer the user's affective state in order to support medical professionals.

Index Terms—Electromyography, face and gesture recognition, pattern recognition, wearable interface
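The processing pipeline described above (windowed EMG features fed into a feed-forward artificial neural network) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the window length, RMS features, network sizes and weights are all hypothetical placeholders, and the independent-component-analysis step is omitted for brevity.

```python
import numpy as np

def rms_features(emg, win=200):
    """Root-mean-square amplitude per channel over consecutive windows,
    a common EMG feature. `win` (samples per window) is hypothetical."""
    n_windows = emg.shape[1] // win
    return np.array([[np.sqrt(np.mean(ch[i * win:(i + 1) * win] ** 2))
                      for ch in emg]
                     for i in range(n_windows)])

class TinyMLP:
    """One-hidden-layer feed-forward network (forward pass only).
    Weights are random stand-ins; a real system would train them
    on labelled facial-expression data."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))

    def predict(self, x):
        h = np.tanh(x @ self.W1)          # hidden activations
        logits = h @ self.W2              # one score per expression class
        return int(np.argmax(logits))     # index of predicted expression
```

In use, each window of multi-channel EMG is reduced to one feature vector, and the network maps that vector to an expression label (e.g. smile vs. neutral in a two-class setup).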
Vehicular ad hoc networks (VANETs) have become a significant technology in recent years because of the emerging generation of self-driving cars such as Google's driverless cars. VANETs have more vulnerabilities than other networks, such as wired networks: they are autonomous collections of mobile vehicles with no fixed security infrastructure, their topology is highly dynamic, and the open wireless medium makes them more vulnerable to attacks. It is therefore important to design new approaches and mechanisms to raise the security of these networks and protect them from attacks. In this paper, we design an intrusion detection mechanism for VANETs using Artificial Neural Networks (ANNs) to detect Denial of Service (DoS) attacks. The main role of an IDS is to detect an attack using data generated from network behavior, such as a trace file; the IDS uses features extracted from the trace file as auditable data. We propose both anomaly and misuse detection to detect malicious attacks.

Keywords—security; vehicular ad hoc networks; intrusion detection system; driverless car.

We propose a new approach to secure external communication in self-driving and semi self-driving vehicles.
Vehicular ad hoc networks (VANETs) play a vital role in the success of self-driving and semi-self-driving vehicles, where they improve safety and comfort. Such vehicles depend heavily on external communication with the surrounding environment via data control and Cooperative Awareness Message (CAM) exchanges. VANETs are potentially exposed to a number of attacks, such as grey hole, black hole, wormhole and rushing attacks. This work presents an intelligent Intrusion Detection System (IDS) that relies on anomaly detection to protect the external communication system from grey hole and rushing attacks, which aim to disrupt transmission between vehicles and roadside units. The IDS uses features obtained from a trace file generated in a network simulator and consists of a feed-forward neural network and a support vector machine. Additionally, the paper studies a novel systematic response, employed to protect the vehicle when it encounters malicious behaviour. Our simulations show that the proposed schemes achieve outstanding detection rates with a reduction in false alarms. This safe-mode response system has been evaluated using four performance metrics, namely received packets, packet delivery ratio, dropped packets and average end-to-end delay, under both normal and abnormal conditions.
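The anomaly-detection idea above (extract features such as packet delivery ratio from a simulator trace file, then flag behaviour that deviates from a normal profile) can be sketched as follows. This is a simplified illustration under assumed conventions: the `(node, action)` event format, the two features, and the k-sigma threshold are hypothetical stand-ins for the paper's trace-file features and trained neural-network/SVM classifiers.

```python
import numpy as np

def trace_features(events):
    """events: list of (node, action) tuples from a simulated trace file,
    with action in {'s', 'r', 'd'} for sent/received/dropped
    (hypothetical format). Returns [packet delivery ratio, drop ratio]."""
    sent = sum(1 for _, a in events if a == 's')
    recv = sum(1 for _, a in events if a == 'r')
    drop = sum(1 for _, a in events if a == 'd')
    pdr = recv / sent if sent else 0.0
    return np.array([pdr, drop / max(sent, 1)])

def anomaly_score(x, mu, sigma):
    """Deviation of a feature vector from the normal-behaviour profile
    (mu, sigma), measured in standard deviations; a score above a chosen
    threshold (e.g. 3) would trigger the safe-mode response."""
    return float(np.max(np.abs((x - mu) / sigma)))
```

A grey hole attack, for example, drops a fraction of forwarded packets, so its trace yields a depressed delivery ratio and an elevated drop ratio, pushing the anomaly score well above that of normal traffic.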
In this paper we present a quantitative analysis of electrode positions on the side of the face for facial expression recognition using facial bioelectrical signals. We show that distal electrode locations on areas of low facial mobility yield signals of strong amplitude that are correlated with signals captured at the traditional positions on top of the facial muscles. We report on electrode position choice as well as successful facial expression identification using computational methods. We also propose a wearable interface device that can detect facial bioelectrical signals distally in a continuous manner while remaining unobtrusive to the user. The proposed device can be worn on the side of the face and captures signals that are considered a mixture of facial electromyographic signals and other bioelectrical signals. Finally, we show the design of an interface that can be comfortably worn by the user and makes facial expression recognition possible.
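The electrode position choice reported above, and the multi-attribute decision-making step mentioned in the companion abstract, can be illustrated with a basic scoring scheme. This sketch uses simple additive weighting (SAW), one elementary multi-attribute method; the criteria (e.g. signal amplitude, wearing comfort), weights, and candidate scores are all hypothetical, and the papers' actual method may differ.

```python
import numpy as np

def saw_rank(scores, weights):
    """Simple additive weighting: normalise each criterion column to
    [0, 1], then rank candidate electrode positions by weighted sum.
    `scores` is (candidates x criteria); `weights` sum to 1."""
    s = np.asarray(scores, dtype=float)
    span = np.ptp(s, axis=0) + 1e-12          # avoid division by zero
    norm = (s - s.min(axis=0)) / span          # per-criterion normalisation
    totals = norm @ np.asarray(weights, dtype=float)
    return int(np.argmax(totals)), totals      # best candidate index, scores
```

For instance, with two candidate positions scored on amplitude and comfort and weights of 0.7/0.3, the position with the higher weighted total would be selected for the wearable device.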