Proceedings of the 2018 Conference on Research in Adaptive and Convergent Systems
DOI: 10.1145/3264746.3264807
Deep-learning based web UI automatic programming

Cited by 6 publications (6 citation statements)
References 4 publications
“…For the detection of GUI elements, most approaches use CNNs, including two-stage detectors, such as Region-based CNN (R-CNN) and its variants (such as Faster R-CNN [Kim et al. 2018]), as well as one-stage detectors, such as SSD, RetinaNet [Jain et al. 2019] [Pandian and Suleri 2020], YOLO [Yun et al. 2018] and their variants. Several of the approaches that divide the processing into several stages use a CNN for mapping the raw input image to a learned representation and then an RNN for performing language modeling on the textual description associated with the input picture.…”
Section: Which Datasets Have Been Used For Building the Machine Learning Model?
confidence: 99%
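As an illustration of the two-stage detector family mentioned in this statement, the following is a minimal sketch, not the cited authors' implementation, of a GUI element detector built on torchvision's Faster R-CNN. The UI class list, image size, and score threshold are illustrative assumptions.

```python
# Hypothetical sketch of a two-stage GUI element detector (Faster R-CNN).
# Class names, image size, and threshold are assumptions, not from the paper.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

UI_CLASSES = ["background", "button", "text_field", "image", "label", "checkbox"]

# Faster R-CNN with a ResNet-50 FPN backbone, re-headed for UI element classes.
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=len(UI_CLASSES))
model.eval()

# A screenshot or wireframe rendered as a 3xHxW float tensor in [0, 1].
screenshot = torch.rand(3, 320, 320)

with torch.no_grad():
    detections = model([screenshot])[0]  # dict with "boxes", "labels", "scores"

for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.5:
        print(UI_CLASSES[label], box.tolist(), float(score))
```

In practice the model would first be fine-tuned on annotated UI screenshots; the untrained network above only demonstrates the input/output shape of this detector family.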
“…Concerning the evaluation, the studies vary widely in terms of scientific rigor. While some stand out due to their systematic and broad evaluation (including, e.g., [Robinson 2019] [Moran et al. 2018] [Chen et al. 2018]), others either present a very superficial evaluation that just cites some results without presenting the research design (i.e., [Jain et al. 2019] [Kim et al. 2018] [Liu et al. 2018b] [Halbe and Joshi 2015] [Ge 2019]) or do not present any information on the evaluation of the presented approach.…”
Section: How Have the Approaches Been Evaluated?
confidence: 99%
“…The main core of the model is based on self-attention layers. DL is used for generating source code from an image of a sketch design [14,15]. Both models use a Convolutional Neural Network (CNN) as the main unit of the model design, but differ in overall architecture and method.…”
Section: Introduction
confidence: 99%
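The pattern this statement describes, a CNN that encodes the sketch image plus an attention-based decoder that emits code tokens, can be sketched as follows. This is a minimal illustrative model, not any of the cited systems; the vocabulary size, layer sizes, and layer counts are assumptions, and positional encodings are omitted for brevity.

```python
# Minimal sketch: CNN image encoder + self-attention (Transformer) decoder
# that generates UI code tokens from a sketch image. All sizes are illustrative.
import torch
import torch.nn as nn

class SketchToCode(nn.Module):
    def __init__(self, vocab_size=200, d_model=128):
        super().__init__()
        # CNN maps the raw sketch image to a grid of d_model-dimensional features.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.token_emb = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, image, tokens):
        # image: (B, 3, H, W); tokens: (B, T) previously generated code tokens.
        feats = self.cnn(image)                    # (B, d_model, H', W')
        memory = feats.flatten(2).transpose(1, 2)  # (B, H'*W', d_model)
        tgt = self.token_emb(tokens)               # (B, T, d_model)
        # Causal mask so each position only attends to earlier tokens.
        T = tokens.size(1)
        mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        hidden = self.decoder(tgt, memory, tgt_mask=mask)
        return self.out(hidden)                    # (B, T, vocab_size) logits

model = SketchToCode()
logits = model(torch.rand(2, 3, 64, 64), torch.randint(0, 200, (2, 16)))
print(logits.shape)  # torch.Size([2, 16, 200])
```

Replacing the Transformer decoder with an RNN over the same CNN features gives the CNN+RNN variant mentioned in the first citation statement above.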
“…A similar system was also built by Chen et al. [28], whose work was able to generate GUI skeleton code from a mockup design. While most works aiding GUI development start from a mockup, there are works by Robinson [29] and Kim et al. [30] that made systems capable of generating web UI from the sketch or wireframe stage.…”
confidence: 99%