Proceedings of the 1st ACM SIGPLAN International Workshop on Machine Learning and Programming Languages 2017
DOI: 10.1145/3088525.3088563
Combining the logical and the probabilistic in program analysis

Abstract: Conventional program analyses have made great strides by leveraging logical reasoning. However, they cannot handle uncertain knowledge, and they lack the ability to learn and adapt. This in turn hinders the accuracy, scalability, and usability of program analysis tools in practice. We seek to address these limitations by proposing a methodology and framework for incorporating probabilistic reasoning directly into existing program analyses that are based on logical reasoning. We demonstrate that the combined ap…

Cited by 8 publications (5 citation statements) · References 53 publications
“…Thus, it would be interesting to study how ideas and techniques from the program analysis literature can be carried over to our framework, and vice versa. For example, Zhang et al [73] consider statistical properties of program behavior to advise abstractions, Sharma et al [65] relate verification to the learnability of concepts, Holtzen et al [36] study abstract predicates for loop-free probabilistic programs, and Monniaux [51] defines abstract representations for probabilistic program path analysis. On the semantical front, Cousot and Monerau [15] present a detailed and careful analysis for reasoning about (probabilistic) nondeterminism in programs, but as argued by Holtzen et al [36], they do consider the abstractions themselves to be probabilistic structures.…”
Section: Related Work and Discussion
confidence: 99%
“…Static analysis is known to produce a lot of false positives. To suppress them, several machine learning based approaches [25], [26], [27], [28], [10], [29], [30], [31], [49], [32], [33] have been proposed. Because they either target different languages or different static analyzers, they are not directly applicable.…”
Section: Table V
confidence: 99%
“…To suppress them, various methods have been proposed (as summarized in [24]). Among them, machine learning based approaches [25], [26], [27], [28], [10], [29], [30], [31], [32], [33] focus on learning the patterns of false positives from examples. However, training such models requires good labeled datasets.…”
Section: Introduction
confidence: 99%