2020
DOI: 10.1007/978-3-030-45439-5_21
Learning Based Methods for Code Runtime Complexity Prediction

Abstract: Predicting the runtime complexity of program code is an arduous task. Even for humans, it requires subtle analysis and comprehensive knowledge of algorithms to predict time complexity with high fidelity for arbitrary code. By Turing's proof of the Halting problem, exact estimation of code complexity is mathematically impossible in general. Nevertheless, an approximate solution to this task can give developers real-time feedback on the efficiency of their code. In this work, we model this problem as a machine …

Cited by 12 publications (5 citation statements) | References 12 publications
“…No significant difference in percent error was observed between the total number of base pairs computed by all RNA folding algorithms, with the largest discernible difference between Vsfold5 and Vienna (Adjusted P value = 0.9989). This agrees with current studies, as the challenge biotechnicians face instead lies within predictions in O(N³) and O(N⁴) time and space, where N is the sequence length (using big-O notation) [50]. While certain algorithms can reduce higher-ordered structures to O(N³), thereby reducing computational complexity, a growing minimal N value correlates with more possible pseudoknots, making the algorithm less accurate [21].…”
Section: Assessment Of Percent Error Of In Total Base Pairs and Knott... (supporting)
confidence: 86%
“…Sikka et al. [25] investigate the use of machine learning to automatically predict the code runtime complexity class (e.g., O(n²)) of short programs. To this end, they collected 933 Java implementations of various algorithms from a competitive programming platform and annotated each with the corresponding complexity class (i.e., one of O(1), O(log n), O(n), O(n log n), O(n²)).…”
Section: Code Runtime Complexity Classification (mentioning)
confidence: 99%
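The classification task described above — mapping a short program to one of a handful of complexity classes — can be illustrated with a deliberately naive sketch. Everything below is an illustrative assumption, not the feature set or code of Sikka et al.: it uses a single hand-crafted feature (maximum loop-nesting depth in Java-like, brace-delimited code) and a hypothetical depth-to-class rule, so it cannot distinguish, for example, O(log n) from O(n).

```python
import re

def max_loop_depth(code: str) -> int:
    """Approximate loop-nesting depth in brace-delimited (Java-like) code.

    Crude heuristic: a line starting with for/while opens a nesting level,
    a line starting with '}' closes one. Ignores recursion and loop bounds.
    """
    depth = max_depth = 0
    for line in code.splitlines():
        stripped = line.strip()
        if re.match(r"(for|while)\b", stripped):
            depth += 1
            max_depth = max(max_depth, depth)
        elif stripped.startswith("}"):
            depth = max(depth - 1, 0)
    return max_depth

def guess_class(code: str) -> str:
    """Map nesting depth straight to a complexity label (hypothetical rule)."""
    labels = {0: "O(1)", 1: "O(n)", 2: "O(n^2)"}
    return labels.get(max_loop_depth(code), "O(n^2) or worse")

nested = """\
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        sum += a[i][j];
    }
}"""
print(guess_class(nested))        # depth 2 -> "O(n^2)"
print(guess_class("int x = 1;"))  # depth 0 -> "O(1)"
```

A learned model replaces the hard-coded `labels` rule with a classifier trained on many such features extracted from annotated programs; the paper's point is that even this mapping is best learned rather than hand-written.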
“…Sentiment Classification [3], [18], [19]; Informative App Review Detection [20]; App Review Classification [21], [27]; Self-Admitted Technical Debt Detection [22]; Comment Classification [23]; Code-Comment Coherence Prediction [24]; Linguistic Smell Detection [2]; Code Runtime Complexity Classification [25]; Code Readability Prediction [26]…”
Section: Fine-tuning and Testing (mentioning)
confidence: 99%