Background: Planning for a possible influenza pandemic is an extremely high priority, as the social and economic effects of an unmitigated pandemic would be devastating. Mathematical models can be used to explore different scenarios and provide insight into the potential costs, benefits, and effectiveness of prevention and control strategies under consideration.

Methods and Findings: A stochastic, equation-based epidemic model is used to study global transmission of pandemic flu, including the effects of travel restrictions and vaccination. Economic costs of intervention are also considered. The distribution of first passage times (FPT) to the United States and the numbers of infected persons in metropolitan areas worldwide are studied assuming various times and locations of the initial outbreak. International air travel restrictions alone provide a small delay in the FPT to the U.S. When other containment measures are applied at the source in conjunction with travel restrictions, delays could be much longer. If, in addition, control measures are instituted worldwide, there is a significant reduction in cases worldwide and specifically in the U.S. However, if travel restrictions are not combined with other measures, local epidemic severity may increase, because restriction-induced delays can push local outbreaks into high epidemic season. The per annum cost to the U.S. economy of international and major domestic air passenger travel restrictions is minimal: on the order of 0.8% of Gross National Product.

Conclusions: International air travel restrictions may provide a small but important delay in the spread of a pandemic, especially if other disease control measures are implemented during the afforded time. However, if other measures are not instituted, delays may worsen regional epidemics by pushing the outbreak into high epidemic season. This important interaction between policy and seasonality is only evident with a global-scale model.
Since the benefit of travel restrictions can be substantial while their costs are minimal, dismissal of travel restrictions as an aid in dealing with a global pandemic seems premature.
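The core dynamic the abstract describes, with restrictions delaying but not preventing spread, can be illustrated with a toy chain-binomial SIR simulation. This is a minimal sketch, not the paper's multi-city stochastic model; all parameter values and the `stochastic_sir` helper are hypothetical:

```python
import random

def stochastic_sir(n, i0, beta, gamma, steps, seed=0):
    """Minimal discrete-time stochastic SIR sketch (chain-binomial style).

    Each step, every susceptible is infected with probability
    1 - (1 - beta/n)**I, and every infected recovers with probability gamma.
    Returns the (S, I, R) trajectory.
    """
    rng = random.Random(seed)
    s, i, r = n - i0, i0, 0
    history = [(s, i, r)]
    for _ in range(steps):
        p_inf = 1.0 - (1.0 - beta / n) ** i
        new_inf = sum(rng.random() < p_inf for _ in range(s))
        new_rec = sum(rng.random() < gamma for _ in range(i))
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        history.append((s, i, r))
    return history

# A lower effective contact rate (a crude stand-in for travel restrictions)
# typically delays and flattens the epidemic peak.
base = stochastic_sir(1000, 5, beta=0.5, gamma=0.2, steps=60)
restricted = stochastic_sir(1000, 5, beta=0.25, gamma=0.2, steps=60)
```

In a model like the paper's, such a delay interacts with seasonality: pushing the outbreak into high epidemic season can make it worse, which is why restrictions need to be paired with other control measures.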
We present Apposcopy, a new semantics-based approach for identifying a prevalent class of Android malware that steals private user information. Apposcopy incorporates (i) a high-level language for specifying signatures that describe semantic characteristics of malware families and (ii) a static analysis for deciding if a given application matches a malware signature. The signature matching algorithm of Apposcopy uses a combination of static taint analysis and a new form of program representation called Inter-Component Call Graph to efficiently detect Android applications that have certain control- and data-flow properties. We have evaluated Apposcopy on a corpus of real-world Android applications and show that it can effectively and reliably pinpoint malicious applications that belong to certain malware families.
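The flavor of signature matching via taint reachability can be sketched in a few lines. This is a deliberately simplified illustration with hypothetical API names and a plain edge relation, not Apposcopy's actual Inter-Component Call Graph analysis:

```python
# Hypothetical flow edges from a taint-tracking pass: an edge a -> b means
# data from a can reach b.
EDGES = {
    "getDeviceId": ["buildPayload"],  # source API reading private data
    "buildPayload": ["sendSms"],      # sink API exfiltrating data
    "showAd": [],                     # benign, no outgoing flows
}

def matches_signature(source, sink, edges):
    """DFS: does tainted data from `source` reach `sink`?"""
    stack, seen = [source], set()
    while stack:
        node = stack.pop()
        if node == sink:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(edges.get(node, []))
    return False

# A signature here is just a (source, sink) pair; the real system's
# signatures also constrain control flow and component structure.
assert matches_signature("getDeviceId", "sendSms", EDGES)
```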
This paper presents an example-driven synthesis technique for automating a large class of data preparation tasks that arise in data science. Given a set of input tables and an output table, our approach synthesizes a table transformation program that performs the desired task. Our approach is not restricted to a fixed set of DSL constructs and can synthesize programs from an arbitrary set of components, including higher-order combinators. At a high level, our approach performs type-directed enumerative search over partial programs but incorporates two key innovations that allow it to scale: First, our technique can utilize any first-order specification of the components and uses SMT-based deduction to reject partial programs. Second, our algorithm uses partial evaluation to increase the power of deduction and drive enumerative search. We have evaluated our synthesis algorithm on dozens of data preparation tasks obtained from on-line forums, and we show that our approach can automatically solve a large class of problems encountered by R users.
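The interplay between enumeration and specification-based deduction can be sketched with a toy component library, using list length as the first-order abstraction. The components and the `synthesize` helper are hypothetical, and the length check stands in (very loosely) for SMT-based rejection of partial programs:

```python
from itertools import product

# Toy component library: each entry pairs an implementation with a
# specification over list length.
COMPONENTS = {
    "reverse": (lambda xs: xs[::-1], lambda n: n),           # preserves length
    "tail":    (lambda xs: xs[1:],   lambda n: max(n - 1, 0)),
    "double":  (lambda xs: xs + xs,  lambda n: 2 * n),
}

def synthesize(inp, out, max_depth=3):
    """Enumerate component sequences; prune with the length abstraction."""
    for depth in range(1, max_depth + 1):
        for seq in product(COMPONENTS, repeat=depth):
            n = len(inp)
            for name in seq:
                n = COMPONENTS[name][1](n)
            if n != len(out):
                continue  # rejected by deduction, candidate never executed
            xs = inp
            for name in seq:
                xs = COMPONENTS[name][0](xs)
            if xs == out:
                return list(seq)
    return None

print(synthesize([1, 2, 3], [3, 2]))  # prints ['tail', 'reverse']
```

Most candidates are rejected by the length abstraction alone, before any component is executed; the real system additionally uses partial evaluation to strengthen this deduction step.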
We propose a new conflict-driven program synthesis technique that is capable of learning from past mistakes. Given a spurious program that violates the desired specification, our synthesis algorithm identifies the root cause of the conflict and learns new lemmas that can prevent similar mistakes in the future. Specifically, we introduce the notion of equivalence modulo conflict and show how this idea can be used to learn useful lemmas that allow the synthesizer to prune large parts of the search space. We have implemented a general-purpose CDCL-style program synthesizer called Neo and evaluate it in two different application domains, namely data wrangling in R and functional programming over lists. Our experiments demonstrate the substantial benefits of conflict-driven learning and show that Neo outperforms two state-of-the-art synthesis tools, Morpheus and DeepCoder, that target these respective domains.
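A minimal sketch of conflict-driven learning in this spirit: when a candidate fails, the root cause (here, an abstract length signature) becomes a lemma that prunes every program equivalent modulo that conflict. The components and abstraction are toy stand-ins, not Neo's actual algorithm:

```python
from itertools import product

# Toy library: (implementation, length abstraction); names are hypothetical.
LIB = {
    "rev":  (lambda xs: xs[::-1],   "len"),    # length-preserving
    "sort": (lambda xs: sorted(xs), "len"),    # length-preserving
    "tail": (lambda xs: xs[1:],     "len-1"),
}

def synthesize_cdcl(inp, out, depth=2):
    """Enumerate sequences, learning blocking lemmas from failed candidates."""
    blocked = set()  # learned lemmas: abstract signatures known to fail
    tried = 0
    for seq in product(LIB, repeat=depth):
        sig = tuple(LIB[c][1] for c in seq)
        if sig in blocked:
            continue  # pruned by a learned lemma, never executed
        tried += 1
        xs = inp
        for c in seq:
            xs = LIB[c][0](xs)
        if xs == out:
            return list(seq), tried
        if len(xs) != len(out):
            # Root cause is the length signature: every program with the same
            # signature must also fail (equivalence modulo conflict).
            blocked.add(sig)
    return None, tried

program, tried = synthesize_cdcl([3, 1, 2], [2, 1])
```

After the first length-preserving candidate fails, all other programs with the same length signature are skipped without being run, which is the (much simplified) analogue of CDCL clause learning.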
Component-based approaches to program synthesis assemble programs from a database of existing components, such as methods provided by an API. In this paper, we present a novel type-directed algorithm for component-based synthesis. The key novelty of our approach is the use of a compact Petri-net representation to model relationships between methods in an API. Given a target method signature S, our approach performs reachability analysis on the underlying Petri-net model to identify sequences of method calls that could be used to synthesize an implementation of S. The programs synthesized by our algorithm are guaranteed to type check and pass all test cases provided by the user. We have implemented this approach in a tool called SYPET, and used it to successfully synthesize real-world programming tasks extracted from on-line forums and existing code repositories. We also compare SYPET with two state-of-the-art synthesis tools, namely INSYNTH and CODEHINT, and demonstrate that SYPET can synthesize more programs in less time. Finally, we compare our approach with an alternative solution based on hypergraphs and demonstrate its advantages.
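The Petri-net idea can be sketched as breadth-first reachability over markings, where places are types and transitions are API methods. The methods below are hypothetical, and this is a much-simplified version of the encoding the abstract describes:

```python
from collections import deque

# Toy API as a Petri net: each transition consumes argument tokens (by type)
# and produces a result token.
TRANSITIONS = {
    "File.open":    ({"Path": 1},   {"File": 1}),
    "File.read":    ({"File": 1},   {"String": 1}),
    "String.split": ({"String": 1}, {"List": 1}),
}

def reachable_sequence(initial, goal_type):
    """BFS over markings; returns a firing sequence producing goal_type."""
    start = tuple(sorted(initial.items()))
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        marking, path = queue.popleft()
        m = dict(marking)
        if m.get(goal_type, 0) >= 1:
            return path
        for name, (pre, post) in TRANSITIONS.items():
            if all(m.get(t, 0) >= k for t, k in pre.items()):
                nxt = dict(m)
                for t, k in pre.items():
                    nxt[t] -= k
                for t, k in post.items():
                    nxt[t] = nxt.get(t, 0) + k
                key = tuple(sorted(nxt.items()))
                if key not in seen:
                    seen.add(key)
                    queue.append((key, path + [name]))
    return None

seq = reachable_sequence({"Path": 1}, "List")
```

Each firing sequence found this way corresponds to a candidate chain of method calls, which the real tool then completes with arguments and validates against the user's test cases.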
This paper presents five studies on the development and validation of a scale of intellectual humility. This scale captures cognitive, affective, behavioral, and motivational components of the construct that have been identified by various philosophers in their conceptual analyses of intellectual humility. We find that intellectual humility has four core dimensions: Open-mindedness (versus Arrogance), Intellectual Modesty (versus Vanity), Corrigibility (versus Fragility), and Engagement (versus Boredom). These dimensions display adequate self-informant agreement, and adequate convergent, divergent, and discriminant validity. In particular, Open-mindedness adds predictive power beyond the Big Six for an objective behavioral measure of intellectual humility, and Intellectual Modesty is uniquely related to Narcissism. We find that a similar factor structure emerges in Germanophone participants, giving initial evidence for the model’s cross-cultural generalizability.