Classification techniques are widely used in security settings in which data can be deliberately manipulated by an adversary trying to evade detection and achieve some benefit. However, traditional classification systems are not robust to such data modifications. Most attempts to enhance classification algorithms in adversarial environments have focused on game-theoretic ideas under strong common knowledge assumptions, which are unrealistic in security domains. We provide an alternative framework for such problems based on adversarial risk analysis, which we illustrate with examples. Computational, implementation and robustness issues are discussed.
In several reinforcement learning (RL) scenarios, mainly in security settings, there may be adversaries trying to interfere with the reward generating process. In this paper, we introduce Threatened Markov Decision Processes (TMDPs), which provide a framework to support a decision maker against a potential adversary in RL. Furthermore, we propose a level-k thinking scheme resulting in a new learning framework to deal with TMDPs. After introducing our framework and deriving theoretical results, relevant empirical evidence is given via extensive experiments, showing the benefits of accounting for adversaries while the agent learns.
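The level-k idea above can be illustrated with a minimal sketch. The following is not the paper's TMDP algorithm or experimental setup; it is a toy, stateless analogue in which a level-1 agent keeps a level-0 model of the adversary (the empirical frequency of his actions) and Q-learns over joint actions, best-responding to the modeled opponent distribution. The game, payoffs, and opponent policy are illustrative assumptions.

```python
import random
from collections import defaultdict

random.seed(1)

# Toy repeated game: an adversary gains by matching the agent's action
# (a matching-pennies-style threat to the reward process). Payoffs are
# illustrative assumptions, not taken from the paper's experiments.
ACTIONS = [0, 1]
ADV_ACTIONS = [0, 1]

def reward(a, b):
    return 1.0 if a != b else -1.0

def adversary():
    # Unknown to the agent: here, a biased stationary policy.
    return 0 if random.random() < 0.7 else 1

def level1_qlearning(n_steps=20000, alpha=0.05, eps=0.1):
    """Level-1 agent: (i) a level-0 opponent model as empirical action
    frequencies, and (ii) a Q-table over joint actions; it best-responds
    to the modeled opponent distribution, with eps-greedy exploration."""
    q = defaultdict(float)                     # q[(a, b)]
    opp_counts = {b: 1 for b in ADV_ACTIONS}   # add-one smoothing
    for _ in range(n_steps):
        total = sum(opp_counts.values())
        p_opp = {b: c / total for b, c in opp_counts.items()}
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: sum(p_opp[b] * q[(x, b)]
                                               for b in ADV_ACTIONS))
        b = adversary()
        q[(a, b)] += alpha * (reward(a, b) - q[(a, b)])  # stateless TD(0)
        opp_counts[b] += 1
    total = sum(opp_counts.values())
    p_opp = {b: c / total for b, c in opp_counts.items()}
    return max(ACTIONS, key=lambda x: sum(p_opp[b] * q[(x, b)]
                                          for b in ADV_ACTIONS))
```

Against the biased opponent above, the learned best response is to mismatch his more frequent action; accounting for the adversary while learning is exactly what distinguishes this from plain Q-learning on marginal rewards.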
The introduction of a new drug to the commercial market follows a complex and long process that typically spans several years and entails large monetary costs due to a high attrition rate. Because of this, there is an urgent need to improve this process using innovative technologies such as artificial intelligence (AI). Different AI tools are being applied to support all four steps of the drug development process (basic research for drug discovery; pre-clinical phase; clinical phase; and postmarketing). Some of the main tasks where AI has proven useful include identifying molecular targets, searching for hit and lead compounds, synthesising drug-like compounds and predicting ADME-Tox. This review, on the one hand, brings a mathematical vision of some of the key AI methods used in drug development closer to medicinal chemists and, on the other, brings the drug development process and the use of the different models closer to mathematicians. Emphasis is placed on two aspects not covered in similar surveys: Bayesian approaches and their applications to molecular modelling, and the eventual use of the methods to actually support decisions. Graphical abstract: Promoting a perfect synergy.
Adversarial risk analysis (ARA) is a relatively new area of research that informs decision-making when facing intelligent opponents and uncertain outcomes. It is a decision-theoretic alternative to game theory. ARA enables an analyst to express her Bayesian beliefs about an opponent's utilities, capabilities, probabilities, and the type of strategic calculations that the opponent is using to make his decision. Within that framework, the analyst then solves the problem from the perspective of the opponent. This calculation produces a distribution over the actions of the opponent that permits the analyst to maximize her expected utility. This review covers conceptual, modeling, computational, and applied issues in ARA as well as interesting open research issues.

This article is categorized under: Statistical and Graphical Methods of Data Analysis > Bayesian Methods and Theory; Applications of Computational Statistics > Defense and National Security.

Keywords: auctions, Bayes Nash equilibrium, decision theory, game theory, level-k thinking

INTRODUCTION. Adversarial risk analysis (ARA) guides decision-making when there are intelligent opponents who reason strategically about each other in the context of uncertain outcomes. It is a decision-theoretic alternative to classical game theory that uses Bayesian subjective distributions to model the goals, resources, beliefs, and reasoning of the opponent. Within this framework, the analyst solves the problem from the perspective of her opponent while placing subjective probability distributions on all unknown quantities. This structure provides a distribution over the actions of the opponent that enables her to maximize her expected utility, accounting for the uncertainty she has about the opponent.
ARA applications include convoy routing through an insurgent city with improvised explosive devices (Banks, Petralia, & Wang, 2011), managing Somali piracy (Sevillano, Insua, & Rios, 2012), dealing with crime in a public transportation system (Banks, Aliaga, & Insua, 2015), Emile Borel's game La Relance (Banks et al., 2011), and cybersecurity (Rios Insua et al., 2019). It is relevant whenever one party is trying to model the decision-making process of one or more other parties, in order to achieve an outcome sought by the first party. The mathematics behind ARA can be quite complicated, but the essential idea is very natural. When asking the boss for a raise, one has a mental model for what the boss values (e.g., performance, flattery, punctual paperwork) and his likely response to various pitches. If the model is correct, one has a good chance of obtaining a raise; if not, then success is unlikely.
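The core ARA calculation described above can be sketched in a few lines. This is a toy defend-attack example with made-up actions, payoffs, and belief distributions, not any of the cited applications: the analyst simulates the attacker's optimization under draws from her beliefs about his utility parameters, obtains a distribution over his actions, and then maximizes her own expected utility against it.

```python
import random

random.seed(0)

# Hypothetical toy problem: the defender (she) picks a defense level,
# the attacker (he) decides whether to attack. All names and payoffs
# below are illustrative assumptions.
DEFENSES = [0, 1, 2]     # e.g., none / moderate / heavy protection
ATTACKS = [0, 1]         # no attack / attack

def defender_utility(d, a):
    # She dislikes successful attacks and pays for defense.
    success_prob = 0.6 / (1 + d)          # defense reduces success
    return -10 * a * success_prob - 2 * d

def sampled_attacker_utility(a, d, gain, cost):
    # His gain and cost are unknown to the analyst, so she places
    # distributions over them (sampled in attack_distribution).
    success_prob = 0.6 / (1 + d)
    return gain * a * success_prob - cost * a

def attack_distribution(d, n_samples=5000):
    """Monte Carlo estimate of p(a | d): for each draw of the attacker's
    random utility parameters, record his best response."""
    counts = {a: 0 for a in ATTACKS}
    for _ in range(n_samples):
        gain = random.uniform(5, 15)      # her belief about his gain
        cost = random.uniform(1, 6)       # ... and about his attack cost
        best = max(ATTACKS,
                   key=lambda a: sampled_attacker_utility(a, d, gain, cost))
        counts[best] += 1
    return {a: c / n_samples for a, c in counts.items()}

def ara_decision():
    """Choose the defense maximizing her expected utility under p(a | d)."""
    def expected_utility(d):
        p = attack_distribution(d)
        return sum(p[a] * defender_utility(d, a) for a in ATTACKS)
    return max(DEFENSES, key=expected_utility)
```

With these particular numbers, moderate defense wins: heavy defense deters almost all attacks but costs more than it saves, while no defense leaves attacks too likely.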
Current technology is unable to produce massively deployable, fully automated vehicles that do not require human intervention. Given that such limitations are projected to persist for decades, scenarios requiring a driver to assume control of a semiautomated vehicle, and vice versa, will remain a feature of modern roadways for the foreseeable future. Herein, we adopt a comprehensive perspective of this problem by simultaneously considering operational design domain supervision, driver and environment monitoring, trajectory planning, and driver-intervention performance assessment. More specifically, we develop a modeling framework for each of the aforementioned functions by leveraging decision analysis and Bayesian forecasting. Utilizing this framework, a suite of algorithms is subsequently proposed for driving-mode management and early warning emission, according to a management by exception principle. The efficacy of the developed methods is illustrated and examined via a simulated case study.
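The driver-monitoring and management-by-exception ideas above can be illustrated with a minimal sketch. This is not the paper's model; it is a hypothetical two-state Bayesian filter over driver readiness, with invented transition and observation probabilities, that emits an early warning only when the posterior probability of "not ready" crosses a threshold.

```python
# Hypothetical management-by-exception sketch: a two-state hidden Markov
# filter over driver readiness. All probabilities are illustrative
# assumptions, not calibrated monitoring parameters.
P_STAY_READY = 0.95        # transition: ready -> ready
P_STAY_UNREADY = 0.9       # transition: not ready -> not ready
P_GAZE = {                 # p(gaze on road | state)
    "ready": 0.9,
    "not_ready": 0.3,
}

def update(p_ready, gaze_on_road):
    """One predict-correct step of the readiness filter."""
    # Predict with the transition model.
    p_ready = p_ready * P_STAY_READY + (1 - p_ready) * (1 - P_STAY_UNREADY)
    # Correct with the monitoring observation.
    l_ready = P_GAZE["ready"] if gaze_on_road else 1 - P_GAZE["ready"]
    l_unready = P_GAZE["not_ready"] if gaze_on_road else 1 - P_GAZE["not_ready"]
    num = l_ready * p_ready
    return num / (num + l_unready * (1 - p_ready))

def warn(observations, threshold=0.5, p_ready=0.99):
    """Return the index at which p(not ready) first exceeds the threshold
    (management by exception), or None if no warning is needed."""
    for t, gaze in enumerate(observations):
        p_ready = update(p_ready, gaze)
        if 1 - p_ready > threshold:
            return t
    return None
```

For example, after a run of on-road gaze observations followed by off-road ones, the warning fires on the second consecutive off-road observation rather than the first: single anomalies are absorbed, sustained ones trigger the exception.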
Whereas automated driving technology has made tremendous gains in the last decade, significant questions remain regarding its integration into society. Given its revolutionary nature, the use of automated driving systems (ADSs) is accompanied by myriad novel quandaries relating to both operational and ethical concerns that are relevant to numerous stakeholders (e.g., governments, manufacturers, and passengers). When considering any such problem, the ADS’s decision-making calculus is always a central component. This is true for concerns ranging from public perception and trust to explainability and legal certainty. Therefore, in this manuscript, we set forth a general decision-analytic framework tailorable to multitudinous stakeholders. More specifically, we develop and validate a generic tree of ADS management objectives, explore potential attributes for their measurement, and provide multiattribute utility functions for implementation. Given the contention surrounding numerous ethical concerns in ADS operations, we explore how each of the aforementioned components can be tailored in accordance with the stakeholder’s desired ethical perspective. A simulation environment is developed upon which our framework is tested. Within this environment we illustrate how our approach can be leveraged by stakeholders to make strategic trade-offs regarding ADS behavior and to inform policymaking efforts. In so doing, our framework is demonstrated as a practical, tractable, and transparent means of modeling ADS decision making.
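A multiattribute utility function of the kind mentioned above can be sketched as follows. The attributes, single-attribute value functions, and weights are illustrative assumptions, not the paper's validated objective tree; the point is only that a stakeholder's ethical perspective enters through the weights.

```python
# Hypothetical additive multiattribute utility for ADS behavior
# trade-offs; attributes and weights are illustrative assumptions.
def u_safety(crash_risk):    # crash risk in [0, 1]; lower is better
    return 1 - crash_risk

def u_comfort(jerk):         # normalized jerk in [0, 1]
    return 1 - jerk

def u_efficiency(delay):     # normalized trip delay in [0, 1]
    return 1 - delay

WEIGHTS = {"safety": 0.6, "comfort": 0.1, "efficiency": 0.3}

def utility(crash_risk, jerk, delay):
    """Additive multiattribute utility; the weights encode the
    stakeholder's (e.g., a regulator's vs. a manufacturer's) trade-offs."""
    return (WEIGHTS["safety"] * u_safety(crash_risk)
            + WEIGHTS["comfort"] * u_comfort(jerk)
            + WEIGHTS["efficiency"] * u_efficiency(delay))

# Comparing two candidate trajectories under the same weights:
cautious = utility(crash_risk=0.01, jerk=0.2, delay=0.5)
assertive = utility(crash_risk=0.05, jerk=0.5, delay=0.1)
```

An additive form is the simplest choice and presumes mutual utility independence of the attributes; richer forms (multiplicative, multilinear) relax that, at the cost of more elicitation effort.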
Adversarial classification (AC) is a major subfield within the increasingly important domain of adversarial machine learning (AML). So far, most approaches to AC have followed a classical game-theoretic framework. This requires unrealistic common knowledge conditions untenable in the security settings typical of the AML realm. After reviewing such approaches, we present alternative perspectives on AC based on adversarial risk analysis.
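The contrast with non-adversarial classification can be made concrete with a minimal sketch. This is not an approach from the review; it is a hypothetical one-feature naive Bayes spam filter, with made-up numbers, in which the analyst's belief about the attacker's evasion behavior is folded into the likelihood rather than assumed common knowledge.

```python
# Hypothetical ARA-flavored adversarial classification sketch: the
# analyst believes spammers suppress a suspicious word with probability
# p_flip to evade detection. All numbers are illustrative assumptions.
P_SPAM = 0.4                          # prior p(spam)
P_WORD = {True: 0.9, False: 0.1}      # p(word | spam), p(word | ham)

def posterior_spam(word_present, p_flip):
    """Posterior p(spam | observation) under the analyst's evasion model;
    legitimate senders are assumed not to modify their messages."""
    p_w_spam = P_WORD[True] * (1 - p_flip)   # evasion-adjusted likelihood
    p_w_ham = P_WORD[False]
    like_spam = p_w_spam if word_present else 1 - p_w_spam
    like_ham = p_w_ham if word_present else 1 - p_w_ham
    num = like_spam * P_SPAM
    return num / (num + like_ham * (1 - P_SPAM))
```

Setting p_flip above zero raises the spam posterior for clean-looking messages: once evasion is modeled, the absence of the suspicious word is weaker evidence of innocence than a naive classifier would assume.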