By aiding the diagnosis of cardiovascular diseases (CVDs) such as arrhythmia, electrocardiograms (ECGs) have progressively improved the prospects for automated diagnosis systems in modern healthcare. Recent years have seen promising applications of deep neural networks (DNNs) to ECG analysis, even outperforming cardiovascular experts in identifying certain rhythm irregularities. However, DNNs have been shown to be susceptible to adversarial attacks, which intentionally compromise models by adding perturbations to their inputs. This concern extends to DNN-based ECG classifiers, yet prior works generate such adversarial attacks in a white-box setting, where model details are exposed to the attacker. The black-box condition, in which the classification model's architecture and parameters are unknown to the attacker, remains mostly unexplored. We therefore aim to fool ECG classifiers in the black-box, hard-label setting, where for a given input only the final predicted category is visible to the attacker. Our attack on the DNN classification model for the PhysioNet Computing in Cardiology Challenge 2017 [12] database produced ECG data mostly indistinguishable from that of a white-box adversarial attack on the same database. Our results demonstrate that adversarial ECG inputs can be generated effectively in this black-box setting, raising significant concerns about deploying DNN-based ECG classifiers in security-critical systems.
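The abstract above does not specify the attack algorithm, but hard-label attacks are commonly decision-based: starting from an already-misclassified input, the attacker walks toward the original signal while querying only the predicted label. The sketch below illustrates that general idea; the names `hard_label_attack` and `predict`, the step sizes, and the boundary-attack-style procedure are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def hard_label_attack(predict, x_orig, x_adv_init, orig_label, steps=200, seed=0):
    """Decision-based (hard-label) attack sketch. `predict` returns only a
    class label. Starting from a misclassified signal, repeatedly move
    toward the original input, keeping only candidates that remain
    misclassified, so the perturbation shrinks toward the decision boundary."""
    rng = np.random.default_rng(seed)
    x_adv = x_adv_init.copy()
    for _ in range(steps):
        # Contract toward the original signal to reduce perturbation size.
        candidate = x_adv + 0.1 * (x_orig - x_adv)
        # Small random exploration, as the gradient is unavailable.
        candidate = candidate + 0.01 * rng.standard_normal(x_orig.shape)
        # Accept the candidate only if it is still misclassified.
        if predict(candidate) != orig_label:
            x_adv = candidate
    return x_adv
```

With a toy classifier (e.g., label 1 when the signal mean is positive), the returned signal stays misclassified while ending much closer to the original than the starting point.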
Modern smartphones and smartwatches are equipped with inertial sensors (accelerometer, gyroscope, and magnetometer) that can be used for Human Activity Recognition (HAR) to infer daily activities, transportation modes, and gestures. HAR requires collecting raw inertial sensor values and training a machine learning model on the collected data. The challenge with this approach is that models are trained for specific devices and device configurations, whereas in reality the set of devices carried by a person may vary over time. Ideally, activity inference should be robust to this variation and provide accurate predictions by making opportunistic use of information from the available devices. Moreover, the devices may be located at different parts of the body (e.g., pocket, left and right wrist), may have different sets of sensors (e.g., a smartwatch may lack a gyroscope while a smartphone has one), and may differ in sampling frequencies. In this paper, we provide a solution that makes use of the information from available devices while being robust to their variations. Instead of training an end-to-end model for every permutation of device combinations and configurations, we propose a scalable deep-learning-based solution in which each device learns its own sensor fusion model that maps raw sensor values to a shared low-dimensional latent space, which we call 'SenseHAR', a virtual activity sensor. The virtual sensor has the same format and similar behavior regardless of the subset of devices, sensor availability, sampling rate, or device location. This helps machine learning engineers develop their application-specific models (e.g., from gesture recognition to activities of daily life) in a hardware-agnostic manner on top of this virtual activity sensor.
Our evaluations show that an application model trained on SenseHAR achieves state-of-the-art accuracies of 95.32%, 74.22%, and 93.13% on the PAMAP2, Opportunity (gestures), and our collected datasets, respectively.
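The core interface idea above, per-device encoders that all emit a fixed-format latent vector, can be sketched in a few lines. The class name `VirtualSensor`, the random linear encoders, and the mean-pooling over time are illustrative stand-ins: in SenseHAR the per-device encoders are learned sensor-fusion neural networks, not random projections.

```python
import numpy as np

class VirtualSensor:
    """Sketch of a shared latent space for heterogeneous devices: each
    device registers its own encoder mapping its raw sensor channels to a
    fixed K-dimensional latent vector, so downstream application models
    see one format regardless of which device produced the reading."""

    def __init__(self, latent_dim=8, seed=0):
        self.latent_dim = latent_dim
        self.encoders = {}
        self.rng = np.random.default_rng(seed)

    def register_device(self, name, n_channels):
        # One encoder per device handles differing sensor sets/channel counts.
        self.encoders[name] = self.rng.standard_normal(
            (n_channels, self.latent_dim))

    def encode(self, name, raw_window):
        # raw_window: (n_samples, n_channels) window; windows of different
        # lengths (i.e., sampling rates) still yield one latent vector.
        z = raw_window @ self.encoders[name]
        return z.mean(axis=0)  # pool over time to a fixed-size vector
```

A 9-channel phone window and a 6-channel watch window, even at different sampling rates, both encode to the same 8-dimensional latent format, which is what lets the application model stay hardware-agnostic.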
End-to-end deep learning models are increasingly applied to safety-critical human activity recognition (HAR) applications, e.g., healthcare monitoring and smart home control, to reduce developer burden and increase the performance and robustness of prediction models. However, integrating HAR models into safety-critical applications requires trust, and recent approaches have aimed to balance the performance of deep learning models with explainable decision-making for complex activity recognition. Prior works have exploited the compositionality of complex HAR (i.e., higher-level activities composed of lower-level activities) to form models with symbolic interfaces, such as concept-bottleneck architectures, that are inherently interpretable. However, feature engineering for symbolic concepts, as well as for the relationships between the concepts, requires precise annotation of lower-level activities by domain experts, usually with fixed time windows, all of which induces a heavy and error-prone workload on the domain expert. In this paper, we introduce X-CHAR, an eXplainable Complex Human Activity Recognition model that does not require precise annotation of low-level activities and offers explanations in the form of human-understandable, high-level concepts, while maintaining the robust performance of end-to-end deep learning models for time-series data. X-CHAR learns to model complex activity recognition as a sequence of concepts. For each classification, X-CHAR outputs a sequence of concepts and a counterfactual example as the explanation. We show that the sequence information of the concepts can be modeled using Connectionist Temporal Classification (CTC) loss without accurate start and end times for low-level annotations in the training dataset, significantly reducing developer burden.
We evaluate our model on several complex activity datasets and demonstrate that it offers explanations without compromising prediction accuracy relative to baseline models. Finally, we conducted a Mechanical Turk study showing that the explanations provided by our model are more understandable than those from existing methods for complex activity recognition.
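The reason CTC removes the need for exact start and end times lies in its collapsing rule: any frame-level labeling that collapses (repeats merged, blanks dropped) to the target concept sequence counts as a valid alignment, so the loss marginalizes over all such alignments. The function below sketches only that decoding rule with hypothetical concept labels; it is not the X-CHAR implementation.

```python
def ctc_collapse(frame_labels, blank="-"):
    """CTC collapsing rule: merge consecutive repeated labels, then drop
    blanks. Because many frame-level alignments collapse to the same
    concept sequence, training needs only the sequence, not per-frame
    (start/end time) annotations."""
    collapsed = []
    prev = None
    for lab in frame_labels:
        if lab != prev and lab != blank:
            collapsed.append(lab)
        prev = lab
    return collapsed
```

For example, the frame labeling `["-", "sit", "sit", "-", "stand", "stand", "-", "walk"]` collapses to the concept sequence `["sit", "stand", "walk"]`; the blank also lets a genuinely repeated concept survive, as in `["sit", "-", "sit"]` collapsing to `["sit", "sit"]`.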