Static analysis tools report software defects that may or may not be detected by other verification methods. Two challenges complicating the adoption of these tools are spurious false positive warnings and legitimate warnings that are not acted on. This paper reports automated support to help address these challenges using logistic regression models that predict the foregoing types of warnings from signals in the warnings and implicated code. Because examining many potential signaling factors in large software development settings can be expensive, we use a screening methodology to quickly discard factors with low predictive power and cost-effectively build predictive models. Our empirical evaluation indicates that these models can achieve high accuracy in predicting accurate and actionable static analysis warnings, and suggests that the models are competitive with alternative models built without screening.
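The screen-then-fit pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's actual procedure: the univariate score (standardized difference of class-conditional means), the toy factors, and the plain gradient-descent fit are all assumptions made for the example.

```python
import numpy as np

def screen_factors(X, y, keep=1):
    """Screening step: rank candidate factors by a cheap univariate signal
    (absolute difference of class-conditional means, in std-dev units) and
    keep only the strongest before fitting the full model."""
    mu1 = X[y == 1].mean(axis=0)
    mu0 = X[y == 0].mean(axis=0)
    score = np.abs(mu1 - mu0) / (X.std(axis=0) + 1e-12)
    return np.argsort(score)[::-1][:keep]

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression (weights + bias)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        g = p - y                               # gradient of log-loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

# Toy data: factor 0 is predictive of actionable warnings, factor 1 is noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = np.column_stack([y + 0.3 * rng.normal(size=200),  # informative factor
                     rng.normal(size=200)])           # noise factor
kept = screen_factors(X, y, keep=1)                   # noise factor discarded
w, b = fit_logistic(X[:, kept], y)
acc = (((1 / (1 + np.exp(-(X[:, kept] @ w + b)))) > 0.5) == y).mean()
```

The point of the sketch is that the cheap univariate score discards the uninformative factor before the (more expensive) model fit, which is the cost-saving idea behind screening.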
Although researchers have begun to explicitly support end-user programmers' debugging by providing information to help them find bugs, there is little research addressing the proper mechanism to alert the user to this information. The choice of alerting mechanism can be important because, as previous research has shown, different interruption styles have different potential advantages and disadvantages. To explore the impacts of interruptions in the end-user debugging domain, this paper describes an empirical comparison of two interruption styles that have been used to alert end-user programmers to debugging information. Our results show that negotiated-style interruptions were superior to immediate-style interruptions on several issues of importance to end-user debugging, and further suggest that a reason for this superiority may be that immediate-style interruptions encourage different debugging strategies.
End users develop more software than any other group of programmers, using software authoring devices such as e-mail filtering editors, by-demonstration macro builders, and spreadsheet environments. Despite this, there has been little research on finding ways to help these programmers with the dependability of their software. We have been addressing this problem in several ways, one of which includes supporting end-user debugging activities through fault localization techniques. This paper presents the results of an empirical study conducted in an end-user programming environment to examine the impact of two separate factors in fault localization techniques that affect technique effectiveness. Our results provide new insights into fault localization techniques for end-user programmers and the factors that affect them, with significant implications for the evaluation of those techniques.
End-user programmers are writing an unprecedented number of programs, due in large part to the significant effort put forth to bring programming power to end users. Unfortunately, this effort has not been supplemented by a comparable effort to increase the correctness of these often faulty programs. To address this need, we have been working towards bringing fault localization techniques to end users. In order to understand how end users are affected by and interact with such techniques, we conducted a think-aloud study, examining the interactive, human-centric ties between end-user debugging and a fault localization technique. Our results provide insights into the contributions such techniques can make to an interactive end-user debugging process.
We present a rigorous mathematical framework for analyzing the dynamics of a broad class of Boolean network models. We use this framework to provide the first formal proof of many of the standard critical transition results in Boolean network analysis, and offer analogous characterizations for novel classes of random Boolean networks. We precisely connect the short-run dynamic behavior of a Boolean network to the average influence of the transfer functions. We show that some of the assumptions traditionally made in the more common mean-field analysis of Boolean networks do not hold in general. For example, we offer some evidence that imbalance, or expected internal inhomogeneity, of transfer functions is a crucial feature that tends to drive quiescent behavior far more strongly than previously observed.

Introduction. Complex systems can usually be represented as a network of interdependent functional units. Boolean networks were proposed by Kauffman as models of genetic regulatory networks [1, 2] and have received considerable attention across several scientific disciplines. They model a variety of complex phenomena, particularly in theoretical biology and physics [3-8]. A Boolean network N with n nodes can be described by a directed graph G = (V, E) and a set of transfer functions. We use V and E to denote the sets of nodes and edges respectively, and denote the in-degree of node i by K_i. Each node i is assigned a K_i-ary Boolean function f_i : {−1, +1}^{K_i} → {−1, +1}, termed its transfer function. If the state of node i at time t is x_i(t), its state at time t + 1 is described by

x_i(t + 1) = f_i(x_{j_1}(t), …, x_{j_{K_i}}(t)),

where j_1, …, j_{K_i} are the nodes with edges into node i.
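The synchronous update rule just described can be sketched in code. The construction below is an illustrative assumption: each node gets K random inputs and a random transfer function stored as a lookup table over the 2^K possible ±1 input patterns.

```python
import random

def random_boolean_network(n, k, seed=0):
    """Toy Boolean network: node i has k random input nodes and a random
    transfer function f_i: {-1,+1}^k -> {-1,+1} stored as a lookup table."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    # One output value per each of the 2**k possible input patterns.
    tables = [[rng.choice((-1, +1)) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: x_i(t+1) = f_i(states of i's inputs at time t)."""
    nxt = []
    for ins, table in zip(inputs, tables):
        idx = 0
        for j in ins:  # encode the +/-1 input pattern as a table index
            idx = (idx << 1) | (state[j] == +1)
        nxt.append(table[idx])
    return nxt

inputs, tables = random_boolean_network(n=8, k=2, seed=42)
state = [random.Random(1).choice((-1, +1)) for _ in range(8)]
out = step(state, inputs, tables)
```

Iterating `step` from a random initial state is how short-run dynamics (e.g. whether perturbations spread or die out) are typically probed in simulation studies of such networks.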