Abstract: The ability of a computer to detect and appropriately respond to changes in a user's affective state has significant implications for Human-Computer Interaction (HCI). In this paper, we present our efforts toward audio-visual affect recognition on 11 affective states customized for HCI applications (four cognitive/motivational and seven basic affective states) of 20 non-actor subjects. A smoothing method is proposed to reduce the detrimental influence of speech on facial expression recognition. The feature selection analysis shows that subjects are prone to use brow movements in the face, together with pitch and energy in prosody, to express their affect while speaking. For person-dependent recognition, we apply a voting method to combine the frame-based classification results from both the audio and visual channels. The result shows a 7.5% improvement over the best unimodal performance. For the person-independent test, we apply a multistream HMM to combine the information from multiple component streams. This test shows a 6.1% improvement over the best component performance.
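The voting-based fusion mentioned above can be illustrated with a minimal sketch: per-frame labels from the audio and visual classifiers are pooled, and the affect label receiving the most frame-level votes is assigned to the utterance. The function name and the example labels below are hypothetical, not taken from the paper.

```python
from collections import Counter

def fuse_by_voting(audio_frame_labels, visual_frame_labels):
    """Combine per-frame affect predictions from the audio and visual
    channels by a simple majority vote over all frames of an utterance.

    This is an illustrative sketch of frame-level voting fusion, not the
    paper's exact implementation.
    """
    # Counter addition pools the vote counts from both channels.
    votes = Counter(audio_frame_labels) + Counter(visual_frame_labels)
    # The label with the most combined frame-level votes wins.
    return votes.most_common(1)[0][0]

# Hypothetical per-frame predictions for one utterance:
label = fuse_by_voting(
    ["joy", "joy", "joy"],          # audio channel, 3 frames
    ["joy", "neutral", "neutral"],  # visual channel, 3 frames
)
print(label)  # "joy" (4 votes vs. 2 for "neutral")
```

In practice each channel would contribute one prediction per video or audio frame, and ties could be broken by classifier confidence rather than insertion order.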
The COVID-19 pandemic disrupted the world in 2020 by spreading at unprecedented rates and causing tens of thousands of fatalities within a few months. The number of deaths dramatically increased in regions where the number of patients in need of hospital care exceeded the availability of care. Many COVID-19 patients experience Acute Respiratory Distress Syndrome (ARDS), a condition that can be treated with mechanical ventilation. In response to the need for mechanical ventilators, we designed and tested an emergency ventilator (EV) that can control a patient's peak inspiratory pressure (PIP) and breathing rate while maintaining a positive end-expiratory pressure (PEEP). This article describes the rapid design, prototyping, and testing of the EV. The development process was enabled by rapid design iterations using additive manufacturing (AM). In the initial design phase, iterations between design, AM, and testing produced a working prototype within one week. The designs of the 16 different components of the ventilator were locked by additively manufacturing and testing a total of 283 parts with parametrically varied dimensions. In the second stage, AM was used to produce 75 functional prototypes to support engineering evaluation and animal testing. The devices were tested over more than two million cycles. We also developed an electronic monitoring system with automatic alarms to provide for safe operation, along with training materials and user guides. The final designs are available online under a free license. The designs have been transferred to more than 70 organizations in 15 countries. This project demonstrates the potential for ultra-fast product design, engineering, and testing of medical devices needed for COVID-19 emergency response.
The ability of a computer to detect and appropriately respond to changes in a user's affective state has significant implications for Human-Computer Interaction (HCI). To more accurately simulate the human ability to assess affect through multi-sensory data, automatic affect recognition should also make use of multimodal data. In this paper, we present our efforts toward audio-visual affect recognition. Based on psychological research, we have chosen affect categories based on an activation-evaluation space, which is robust in capturing significant aspects of emotion. We apply the Fisher boosting learning algorithm, which can build a strong classifier by combining a small set of weak classification functions. Our experimental results show that with 30 Fisher features, the testing error rates of our bimodal affect recognition are about 16% on the evaluation axis and 13% on the activation axis.
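The idea of building a strong classifier from weak classification functions can be sketched with a generic weighted vote, in the style of boosting. This is a minimal illustration of the general technique, not the paper's Fisher boosting algorithm; the stump functions and weights below are assumed for the example.

```python
def strong_classifier(weak_learners, alphas, x):
    """Combine weak classifiers h_i(x) in {-1, +1} into a strong
    classifier via a weighted vote, as in generic boosting.

    weak_learners: list of callables mapping a feature x to -1 or +1
    alphas:        per-learner weights (e.g., learned from training error)
    """
    score = sum(alpha * h(x) for alpha, h in zip(alphas, weak_learners))
    return 1 if score >= 0 else -1

# Two hypothetical decision stumps on a scalar feature, with assumed weights:
weak = [
    lambda x: 1 if x > 0 else -1,
    lambda x: 1 if x > 2 else -1,
]
alphas = [0.8, 0.3]

print(strong_classifier(weak, alphas, 1))   # 0.8 - 0.3 = 0.5  -> +1
print(strong_classifier(weak, alphas, -1))  # -0.8 - 0.3       -> -1
```

Boosting algorithms choose the weak learners and their weights iteratively so that each new learner focuses on examples the current combination misclassifies; the sketch above shows only the final weighted-vote form of the resulting strong classifier.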