2020
DOI: 10.21609/jiki.v13i1.845

Visual Recognition Of Graphical User Interface Components Using Deep Learning Technique

Abstract: Graphical User Interface (GUI) building in software development is a process that ideally goes through several steps. The process starts from an idea or rough sketch of the GUI, which is then refined into a visual design, implemented as code or a prototype, and finally evaluated for function and usability to discover design problems and to get feedback from users. These steps are repeated until the GUI is considered satisfactory or acceptable by the users. Computer vision technique has been researched and de…
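As context for the truncated abstract, the snippet below is a minimal sketch of how a deep learning model could visually classify cropped images of GUI components. It uses a small convolutional network in PyTorch; the architecture, the 64x64 input size, and the class list are illustrative assumptions and not the model described in the paper.

```python
# Minimal sketch: a small CNN that classifies cropped screenshots of GUI
# components (e.g. button, checkbox, text field). The architecture, input
# size, and class list are assumptions for illustration, not the paper's model.
import torch
import torch.nn as nn

GUI_CLASSES = ["button", "checkbox", "radio", "text_field", "dropdown"]  # assumed labels

class GuiComponentNet(nn.Module):
    def __init__(self, num_classes=len(GUI_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):  # x: (N, 3, 64, 64) normalized component crops
        return self.classifier(self.features(x))

model = GuiComponentNet()
logits = model(torch.randn(1, 3, 64, 64))        # dummy crop of a UI element
print(GUI_CLASSES[logits.argmax(dim=1).item()])  # predicted component class
```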

Cited by 6 publications (7 citation statements)
References 18 publications (24 reference statements)
“…GUI leverages the fact that human cognitive abilities are closely tied to visual perception, so it allows the user to manipulate the program using buttons and toolbars. In designing a GUI, one of the most important aspects is usability, which covers the accessibility, efficiency, and aesthetics of the UI [17]. The GUI design is divided into four steps: an idea or rough sketch in the form of a wireframe, a refined visual design, implementation in MATLAB code, and finally software testing for evaluation.…”
Section: Methods
confidence: 99%
“…Image processing techniques and related methods have been widely applied in previous research, including pattern recognition to interpret an image (Ronando and Sudaryanto 2018), deep learning to recognize certain images (Rahmadi and Sudaryanto 2020), hole filling to improve 3D images (Sudaryanto, Purnama, and Yuniarno 2019), inpainting to improve 2D images, car tire damage detection using the gray level co-occurrence matrix with a neural network (Febriyanto, Rahmad, and Bella Vista 2021), automatic cancer detection in ultrasound (USG) images using the active-contour Chan-Vese (CV) simplification model (Nugroho et al. 2022), and selective encryption of medical images using a linear congruential generator (Nanda and Gelar 2022). In this research we perform simple image processing by integrating an ESPCam into an automatic clothesline.…”
Section: Literature Review
confidence: 99%
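The excerpt above ends by stating that the citing work applies simple image processing with an ESPCam on an automatic clothesline. The sketch below illustrates one way such a step could look: fetch a JPEG frame from an ESP32-CAM snapshot endpoint and use mean brightness as a crude sky cue. The capture URL, the brightness threshold, and the retract/keep decision are assumptions for illustration only, not the cited system's actual logic.

```python
# Minimal sketch: grab one frame from an ESP32-CAM over HTTP and apply a very
# simple brightness check. URL and threshold are assumed, not from the paper.
import cv2
import numpy as np
import requests

ESPCAM_URL = "http://192.168.1.50/capture"   # assumed ESP32-CAM snapshot endpoint

resp = requests.get(ESPCAM_URL, timeout=5)
frame = cv2.imdecode(np.frombuffer(resp.content, dtype=np.uint8), cv2.IMREAD_COLOR)

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
mean_brightness = float(gray.mean())

# Assumed rule: darker frames are treated as overcast / likely rain.
print("retract clothesline" if mean_brightness < 90 else "keep clothes out")
```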
“…The main function of this automatic clothesline system is to protect the clothes from rain, so once the ESPCam has been added it becomes possible to use it to detect the weather visually. Methods that could be used include HCL histogram analysis of cloud images with the K-Nearest Neighbor (KNN) algorithm (Hariani 2020), a pattern recognition method (Ronando and Sudaryanto 2018), a color sorter method (Sanjoto 2019), or a deep learning method (Rahmadi and Sudaryanto 2020) to distinguish between cloudy, sunny, and rainy sky images. If the ESPCam image is still not suitable for processing with these methods, the images can first be preprocessed with an inpainting method or other preprocessing methods.…”
Section: Conclusion and Suggestions
confidence: 99%
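One suggestion in the excerpt above is histogram analysis of sky images with a K-Nearest Neighbor (KNN) classifier. The sketch below shows that general approach with OpenCV and scikit-learn, using an HSV histogram as a stand-in for the cited HCL features; the dataset layout (sky/<label>/*.jpg), the class names, and k=3 are assumptions.

```python
# Minimal sketch: KNN classification of sky images from color histograms.
# HSV replaces the cited HCL space, and the folder layout is assumed.
import glob, os
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def hsv_histogram(path, bins=(8, 8, 8)):
    """Flattened, normalized 3D HSV histogram used as the feature vector."""
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, bins, [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

X, y = [], []
for label in ("cloudy", "sunny", "rainy"):              # assumed class folders
    for path in glob.glob(os.path.join("sky", label, "*.jpg")):
        X.append(hsv_histogram(path))
        y.append(label)

knn = KNeighborsClassifier(n_neighbors=3)                # assumed k
knn.fit(np.array(X), y)
print(knn.predict([hsv_histogram("latest_espcam_frame.jpg")]))  # assumed test frame
```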
“…Future research could develop the system toward image processing automation, with the hope that it will eventually be able to automatically detect motion, detect faces, or recognize the faces of users or other permitted parties. Methods that may be used include the weighted neighbor method [11], the pattern recognition method [12], the matrix mode method [13], the color sorter method [14], or the deep learning method [15], all of which could potentially be used for motion detection or detection of the user's face.…”
Section: Conclusion and Recommendations
confidence: 99%
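The excerpt above proposes motion detection and face detection as future extensions. The sketch below shows a generic frame-differencing motion detector with OpenCV, one common way such detection is done; the camera source, the per-pixel difference threshold, and the 2% changed-pixel cutoff are assumptions, not any of the cited methods [11]-[15].

```python
# Minimal sketch: motion detection by frame differencing on a camera stream.
# Camera index and thresholds are assumed values for illustration.
import cv2

cap = cv2.VideoCapture(0)                    # assumed local camera / stream index
ok, prev = cap.read()
if not ok:
    raise RuntimeError("could not read from camera")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, gray)                   # per-pixel change
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 0.02 * mask.size:         # assumed 2% cutoff
        print("motion detected")
    prev_gray = gray

cap.release()
```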