2022
DOI: 10.48550/arxiv.2208.09189
Preprint
Cross-Domain Evaluation of a Deep Learning-Based Type Inference System

Cited by 1 publication (7 citation statements)
References 0 publications
“…There are potential approaches where new types can be recognized through additional static analysis, as demonstrated in HiTyper [15]. However, there is still the issue that data types that are rare or absent from the training dataset cannot be predicted [8]. Nevertheless, novelty detection has not been actively pursued as a solution to this problem.…” (1: https://gitlab.com/dlr-dw/type-inference)
Section: Related Work (mentioning)
confidence: 99%
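To illustrate the novelty-detection idea the excerpt alludes to, here is a minimal confidence-threshold sketch: a classifier's prediction is rejected as a potential unseen type when its softmax confidence is low. All names and the threshold value are hypothetical illustrations, not taken from the cited systems.

```python
import math

def softmax(logits):
    """Convert raw model scores to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_type(logits, known_types, threshold=0.5):
    """Return the predicted type name, or 'Unknown' when the model's
    top confidence falls below the threshold (novelty detection)."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        # Likely a data type that was rare or absent in the training set.
        return "Unknown"
    return known_types[best]

# A peaked distribution maps confidently to a known type,
# while a flat one is flagged as a potential novel type.
print(predict_type([4.0, 0.1, 0.2], ["int", "str", "List[int]"]))  # int
print(predict_type([1.0, 1.1, 0.9], ["int", "str", "List[int]"]))  # Unknown
```

In practice the rejected slots could then be handed to a static analyzer (as HiTyper does) instead of being forced into one of the known classes.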
“…Following the pipeline of TIPICAL described in Section III-B, we present our experiments on the ManyTypes4Py and the CrossDomainTypes4Py datasets. Moreover, we expanded the scope of the experiments of the original papers [6], [8] to further study the effects of different software domains and the unknown data type issue.…”
Section: Experiments and Evaluation (mentioning)
confidence: 99%