While deep learning models have achieved unprecedented success in various domains, there is also growing concern about adversarial attacks on the applications built upon them. Recent results show that by adding a small, human-imperceptible perturbation to an image, the resulting adversarial example can force a classifier to make targeted mistakes. So far, most existing work focuses on crafting adversarial examples in the digital domain, while limited effort has been devoted to understanding attacks in the physical domain. In this work, we explore the feasibility of generating robust adversarial examples that remain effective in the physical domain. Our core idea is to use an image-to-image translation network to simulate the digital-to-physical transformation process for generating robust adversarial examples. To validate our method, we conduct a large-scale physical-domain experiment, which involves manually taking more than 3,000 physical-domain photos. The results show that our method outperforms existing ones by a large margin and demonstrates a high level of robustness and transferability.
Business processes underpin a large number of enterprise operations, including processing loan applications, managing invoices, and handling insurance claims. The business process management (BPM) industry is expected to grow to approximately $16 billion by 2023. There is a large opportunity for infusing AI to reduce cost or provide a better customer experience, with a $15.7 trillion “potential contribution to the global economy by 2030”. To this end, the BPM literature is rich in machine learning solutions, including unsupervised learning to gain insights into clusters of process traces, classification models to predict the outcomes, duration, or paths of partial process traces, methods for extracting business processes from documents, and models to recommend how to optimize a business process or navigate decision points. More recently, deep learning models, including those from the NLP domain, have been applied to process predictions. Unfortunately, very few of these innovations have been applied and adopted by enterprise companies. We assert that a large reason for the lack of adoption of AI models in BPM is that business users are risk-averse and do not implicitly trust AI models. There has, unfortunately, been little attention paid to explaining model predictions to business users with process context. We challenge the BPM community to build on the AI interpretability literature, and the AI Trust community to understand what it means to take advantage of business process artifacts in order to provide business-level explanations.
Today’s online question and answer (Q&A) services are receiving a large volume of questions. It becomes increasingly challenging to motivate domain experts to provide quick and high-quality answers. Recent systems seek to engage real-world experts by allowing them to set a price on their answers. This leads to a “targeted” Q&A model where users ask questions to a target expert by paying the corresponding price. In this article, we perform a case study on two emerging targeted Q&A systems, Fenda (China) and Whale (U.S.), to understand how monetary incentives affect user behavior. By analyzing a large dataset of 220K questions (worth 1 million USD), we find that payments indeed enable quick answers from experts, but also drive certain users to game the system for profits. In addition, this model requires users (experts) to proactively adjust their prices to make profits. People who are unwilling to lower their prices are likely to hurt their income and engagement over time.