Progress in making neural networks more robust against adversarial attacks is mostly marginal, despite the great efforts of the research community. Moreover, the robustness evaluation is often imprecise, making it difficult to identify promising approaches. We analyze the classification decisions of 19 different state-of-the-art neural networks trained to be robust against adversarial attacks. Our findings suggest that current untargeted adversarial attacks induce misclassification towards only a limited number of different classes. Additionally, we observe that both over- and under-confidence in model predictions result in an inaccurate assessment of model robustness. Based on these observations, we propose a novel loss function for adversarial attacks that consistently improves the attack success rate compared to prior loss functions for 19 out of 19 analyzed models.
Predictive business process monitoring (PBPM) aims to predict future process behavior during ongoing process executions based on event log data. In particular, techniques for next activity and timestamp prediction can help to improve the performance of operational business processes. Recently, many PBPM solutions based on deep learning were proposed by researchers. Due to the sequential nature of event log data, a common choice is to apply recurrent neural networks with long short-term memory (LSTM) cells. We argue that the elapsed time between events is informative. However, current PBPM techniques mainly use "vanilla" LSTM cells and hand-crafted time-related control flow features. To better model the time dependencies between events, we propose a new PBPM technique based on time-aware LSTM (T-LSTM) cells. T-LSTM cells inherently incorporate the elapsed time between consecutive events to adjust the cell memory. Furthermore, we introduce cost-sensitive learning to account for the common class imbalance in event logs. Our experiments on publicly available benchmark event logs indicate the effectiveness of the introduced techniques.
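The core idea of a T-LSTM cell is to decompose the previous cell memory into a short-term and a long-term component and to discount only the short-term part by the elapsed time before applying the usual LSTM gates. The sketch below illustrates this in NumPy; the weight names, the single-gate weight layout, and the 1/log(e + Δt) decay function are illustrative assumptions, not necessarily the exact parameterization used in the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def time_decay(dt):
    # Monotonically decreasing discount for elapsed time, e.g. g(dt) = 1 / log(e + dt).
    return 1.0 / np.log(np.e + dt)

def t_lstm_step(x, h_prev, c_prev, dt, params):
    """One time-aware LSTM step: decay the short-term part of the memory by dt."""
    W, U, b, W_d, b_d = params  # W: (4H, D), U: (4H, H), b: (4H,), W_d: (H, H), b_d: (H,)
    H = h_prev.shape[0]
    # Decompose the previous memory into short- and long-term components.
    c_short = np.tanh(W_d @ c_prev + b_d)
    c_short_hat = c_short * time_decay(dt)      # discount the short-term memory
    c_adj = (c_prev - c_short) + c_short_hat    # long-term part is kept intact
    # Standard LSTM gates, applied to the time-adjusted memory.
    z = W @ x + U @ h_prev + b
    i, f = sigmoid(z[:H]), sigmoid(z[H:2 * H])
    o, g = sigmoid(z[2 * H:3 * H]), np.tanh(z[3 * H:])
    c = f * c_adj + i * g
    h = o * np.tanh(c)
    return h, c
```

With dt = 0 the decay equals 1 and the step reduces to a vanilla LSTM update; larger gaps between events progressively suppress the short-term memory component.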
The vulnerability of deep neural networks to small and even imperceptible perturbations has become a central topic in deep learning research. The evaluation of new defense mechanisms against these so-called adversarial attacks has proven to be challenging. Although several sophisticated defense mechanisms were introduced, most of them were later shown to be ineffective. However, a reliable evaluation of model robustness is mandatory for deployment in safety-critical real-world scenarios. We propose a simple yet effective modification to the gradient calculation of state-of-the-art first-order adversarial attacks, which increases their success rate and thus leads to more accurate robustness estimates. Normally, the gradient update of an attack is calculated directly at the given data point. In general, this approach is sensitive to noise and small local optima of the loss function. Inspired by gradient sampling techniques from non-convex optimization, we propose to calculate the gradient direction of the adversarial attack as the weighted average over multiple points in the local vicinity. We empirically show that by incorporating this additional gradient information, we are able to give a more accurate estimation of the global descent direction on noisy and non-convex loss surfaces. Additionally, we show that the proposed method achieves higher success rates than a variety of state-of-the-art attacks on the benchmark datasets MNIST, Fashion-MNIST, and CIFAR10.
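The averaged-gradient idea can be sketched as follows. Here `grad_fn` stands in for the model-specific loss gradient, and the uniform sampling with equal weights is an illustrative assumption (the paper's exact sampling distribution and weighting are not specified in the abstract):

```python
import numpy as np

def sampled_gradient(grad_fn, x, radius=0.1, n_samples=8, rng=None):
    """Estimate the attack direction as the average of gradients at points
    sampled in the local vicinity of x, smoothing out noise and small
    local optima of the loss surface.

    grad_fn: callable returning the loss gradient at a point
             (in practice computed by backpropagation through the model).
    """
    rng = np.random.default_rng() if rng is None else rng
    grads = []
    for _ in range(n_samples):
        delta = rng.uniform(-radius, radius, size=x.shape)
        grads.append(grad_fn(x + delta))
    return np.mean(grads, axis=0)  # uniform weights; other weightings possible
```

With `radius=0` every sample collapses onto `x` and the estimate equals the ordinary single-point gradient, so the method degrades gracefully to the standard attack update.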
Unravelling the interplay between a human’s microbiome and physiology is a relevant task for understanding the principles underlying human health and disease. With regard to human chemical communication, it is of interest to elucidate the role of the microbiome in shaping or generating volatiles emitted from the human body. In this study, we characterized the microbiome and volatile organic compounds (VOCs) sampled from the neck and axilla of ten participants (five male, five female) on two sampling days, applying different methodological approaches. Volatiles emitted from the respective skin site were collected for 20 min using textile sampling material and analyzed on two analytical columns with varying polarity of the stationary phase. Microbiome samples were analyzed by a culture approach coupled with MALDI-TOF-MS analysis and by 16S ribosomal RNA gene (16S rRNA) sequencing. Statistical and advanced data analysis methods revealed that classification of body sites was possible using the VOC and microbiome data sets. Higher classification accuracy was achieved by combining both data pools. Cutibacterium, Staphylococcus, Micrococcus, Streptococcus, Lawsonella, Anaerococcus, and Corynebacterium species were found to contribute to classification of the body sites by the microbiome. Alkanes, esters, ethers, ketones, aldehydes, and cyclic structures were used by the classifier when VOC data were considered. The interdisciplinary methodological platform developed here will enable further investigations of skin microbiome and skin VOC alterations in physiological and pathological conditions.
Progress in making neural networks more robust against adversarial attacks is mostly marginal, despite the great efforts of the research community. Moreover, the robustness evaluation is often imprecise, making it challenging to identify promising approaches. We conduct an observational study on the classification decisions of 19 different state-of-the-art neural networks trained to be robust against adversarial attacks. This analysis gives a new indication of the limits of the robustness of current models on a common benchmark. In addition, our findings suggest that current untargeted adversarial attacks induce misclassification toward only a limited number of different classes. Similarly, we find that previous attacks under-explore the perturbation space during optimization. This leads to unsuccessful attacks for samples where the initial gradient direction is not a good approximation of the final adversarial perturbation direction. Additionally, we observe that both over- and under-confidence in model predictions result in an inaccurate assessment of model robustness. Based on these observations, we propose a novel loss function for adversarial attacks that consistently improves their efficiency and success rate compared to prior attacks for all 30 analyzed models.
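The abstract does not state the proposed loss itself. As background for the over- and under-confidence issue, the sketch below contrasts two standard untargeted attack objectives: the cross-entropy loss, which saturates for very confident (or very unconfident) predictions, and a difference-of-logits margin, which does not. Both are textbook objectives shown for illustration only, not the paper's novel loss:

```python
import numpy as np

def cross_entropy_loss(logits, label):
    """Standard untargeted objective: maximize the CE of the true class.
    Saturates when the model is extremely over- or under-confident,
    yielding vanishing gradients for the attack."""
    z = logits - logits.max()               # numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def margin_loss(logits, label):
    """Difference-of-logits ("CW-style") objective: push the best wrong
    class above the true class. Works directly on logits, so it is not
    affected by softmax saturation."""
    wrong = np.delete(logits, label)
    return wrong.max() - logits[label]
```

The margin loss is positive exactly when the sample is misclassified, which makes attack progress easy to read off during optimization.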
The most important goal of customer service is to keep the customer satisfied. However, service resources are always limited and must be prioritized for specific customers. Therefore, it is essential to identify customers who may become dissatisfied and whose issues might lead to escalations. Data science on IoT data (especially log data) for machine health monitoring and analytics on enterprise data for customer relationship management (CRM) have mainly been researched and applied independently. This paper presents a data-driven decision support system framework that combines IoT and enterprise data to model customer sentiment and predict escalations. The proposed framework includes a fully automated and interpretable machine learning pipeline using state-of-the-art methods. The framework is applied in a real-world case study with a major medical device manufacturer providing data from a fleet of thousands of high-end medical devices. An anonymized version of this industrial benchmark, which has interesting and challenging properties, is released for the research community based on the presented case study. In our extensive experiments, we achieve a Recall@50 of 50.0% for the task of predicting customer escalations. In addition, we show that combining IoT and enterprise data can improve prediction results and ease troubleshooting. Additionally, we propose a practical workflow for end-users when applying the proposed framework.
INDEX TERMS customer service, decision support system, IoT data, explainable AI, machine learning, big data, industrial AI.
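The Recall@50 metric reported above measures the fraction of actual escalations captured among the 50 customers ranked highest by the model, matching the limited-resources setting where only the top-scored customers can be contacted. A minimal sketch, assuming per-customer risk scores and binary escalation labels:

```python
import numpy as np

def recall_at_k(scores, labels, k=50):
    """Fraction of true escalations captured in the top-k scored customers.

    scores: predicted escalation risk per customer (higher = riskier).
    labels: 1 if the customer actually escalated, else 0.
    """
    scores, labels = np.asarray(scores), np.asarray(labels)
    top_k = np.argsort(-scores)[:k]  # indices of the k highest-scored customers
    return labels[top_k].sum() / labels.sum()
```

So a Recall@50 of 50.0% means half of all escalating customers appear among the 50 customers the model flags as riskiest.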