Visual aesthetics are vital in determining the usability of a graphical user interface (GUI) and can strengthen the competitiveness of interactive online applications. Human aesthetic preferences for GUIs are implicit and linked to various aspects of perception. In this study, an aesthetic GUI image database was constructed from 38,423 design works collected from Huaban.com, a popular social network website in China for sharing, collecting, and exhibiting art and design. The numbers of collections and likes for each design work were used as annotations representing user preference levels. Deep convolutional neural networks were then applied to evaluate the aesthetic preferences of GUIs, based on this large dataset of user interface design images with ground-truth annotations. The experimental results indicated the feasibility of the proposed method: the best-performing model, a Squeeze-and-Excitation VGG19 network (SE-VGG19), achieved a mean squared error (MSE) of 0.0222 for user collection prediction and an MSE of 0.0644 for user likes prediction. This study aims to build a large aesthetic image database and to explore a practical, objective evaluation model of GUI aesthetics.
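The SE-VGG19 model named above augments VGG19 with squeeze-and-excitation (SE) blocks, which recalibrate channel responses. A minimal NumPy sketch of the SE recalibration step is shown below; the random bottleneck weights and the reduction ratio of 16 are illustrative assumptions (in the trained model these weights are learned), not details taken from the abstract.

```python
import numpy as np

def se_block(feature_map, reduction=16, rng=None):
    """Squeeze-and-Excitation channel recalibration (NumPy sketch).

    feature_map: array of shape (H, W, C).
    The two dense layers use random weights here for illustration;
    in SE-VGG19 they would be learned during training.
    """
    h, w, c = feature_map.shape
    rng = rng or np.random.default_rng(0)
    # Squeeze: global average pooling per channel -> vector of shape (C,)
    z = feature_map.mean(axis=(0, 1))
    # Excitation: bottleneck MLP, ReLU then sigmoid gate in (0, 1)
    w1 = rng.standard_normal((c, c // reduction)) * 0.1
    w2 = rng.standard_normal((c // reduction, c)) * 0.1
    s = np.maximum(z @ w1, 0.0)             # ReLU
    gate = 1.0 / (1.0 + np.exp(-(s @ w2)))  # sigmoid
    # Scale: reweight each channel of the original feature map
    return feature_map * gate.reshape(1, 1, c)
```

The gating vector lies in (0, 1), so the block only attenuates channels; which channels are attenuated is what training would decide.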
Visual aesthetics are a crucial determinant in product design evaluation. Through the analysis of image features, we can not only evaluate the aesthetic level but also reveal the overall quality of a design proposal. We hypothesize that visual aesthetics can serve as a cue for award classification modeling, and thus as a potential pattern for predicting the ultimate success of a proposal in product design. To test this hypothesis, we investigated a dataset of 10,003 design submissions to an annual design competition held from 2008 to 2018. Given the remarkable performance of deep convolutional neural networks (DCNNs), we compared seven deep learning methods to find an optimal model for design award prediction based on product image analysis. The experiments indicate that the proposed method achieves competitive accuracy in design award classification, with an optimal classification accuracy of 70.79% using the SEFL-ResNet (Squeeze-and-Excitation Focal Loss ResNet) method.
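The "FL" in SEFL-ResNet refers to focal loss, which down-weights easy examples so training concentrates on hard, misclassified ones (useful when award winners are a small minority). A NumPy sketch of the binary focal loss term follows; the `gamma=2.0` and `alpha=0.25` values are common defaults from the focal loss literature, not values confirmed by this abstract.

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0, alpha=0.25):
    """Binary focal loss (NumPy sketch).

    probs:  predicted probability of the positive class, shape (N,)
    labels: 0/1 ground truth, shape (N,)
    The (1 - p_t) ** gamma factor shrinks the loss of confidently
    correct examples, focusing training on hard samples.
    """
    probs = np.clip(probs, 1e-7, 1 - 1e-7)
    # p_t is the probability assigned to the true class
    p_t = np.where(labels == 1, probs, 1 - probs)
    alpha_t = np.where(labels == 1, alpha, 1 - alpha)
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))
```

Setting `gamma=0` and `alpha=0.5` recovers (half of) the ordinary binary cross-entropy, which makes the focusing effect easy to ablate.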
Aesthetic perception is a human instinct responsive to multimedia stimuli. Giving computers the ability to assess the human sensory and perceptual experience of aesthetics is a well-recognized need in the intelligent design industry and in multimedia intelligence research. In this work, we constructed a novel database for the aesthetic evaluation of design, using 2,918 images collected from the archives of two major design awards, and we present a method of aesthetic evaluation based on machine learning algorithms. Reviewers' ratings of the design works serve as the ground-truth annotations for the dataset. Multiple image features are then extracted and fused. The experimental results demonstrate the validity of the proposed approach. Primary screening using aesthetic computing can serve as an intelligent assistant for various design evaluations and can reduce misjudgment in art and design review caused by visual aesthetic fatigue after long periods of viewing. Computational aesthetic evaluation can thus improve the efficiency of design review, and it is of great significance to the exploration of aesthetic recognition and the development of applications.
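The pipeline described here, hand-crafted features extracted, fused, and regressed onto reviewer ratings, can be sketched with simple illustrative features. The color histogram and brightness/contrast features and the closed-form ridge regressor below are assumptions chosen for brevity; the abstract does not specify which features or learner the authors used.

```python
import numpy as np

def extract_features(img):
    """Extract and fuse two simple hand-crafted features (illustrative).

    img: RGB image as array of shape (H, W, 3), values in [0, 1].
    Returns a concatenated (fused) feature vector of 26 dims.
    """
    # Feature 1: 8-bin histogram per color channel (24 dims)
    hists = [np.histogram(img[..., ch], bins=8, range=(0, 1), density=True)[0]
             for ch in range(3)]
    # Feature 2: global brightness (mean) and contrast (std)
    stats = np.array([img.mean(), img.std()])
    return np.concatenate(hists + [stats])

def fit_ridge(X, y, lam=1.0):
    """Closed-form ridge regression mapping fused features to ratings."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

With features this low-dimensional, a regularized linear model is a reasonable baseline before moving to the deep models used in the companion studies.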
Leveraging the power of computational methods, AI can execute effective strategies in intelligent design, and researchers are pushing its boundaries by developing computational systems to solve complex problems. The authors investigate the association between user preference for UIs and deep image features, aiming to predict user preference levels using deep convolutional neural networks (DCNNs) trained on a UI design image dataset. A total of 12,186 UI design images were collected from UI.cn and DOOOOR.com. Users' views and likes reflect implicit user preference levels and are set as the ground-truth annotations for the dataset. Six DCNNs, namely VGG-19, InceptionNet-V3, MobileNet, EfficientNet, ResNet-50, and NASNetLarge, were trained to learn the user preference of UI images. The experiments achieved an optimal result with a mean squared error of 0.000214 and a mean absolute error of 0.0103 using EfficientNet, indicating that the proposed method can learn the pattern of user aesthetic preference for UI design. On the basis of the prediction model, a mobile application named 'HotUI' was developed for UI design recommendations.
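Before raw view and like counts can serve as a regression target, they must be mapped to a bounded preference level, and the reported MSE/MAE metrics computed against it. The log-ratio plus min-max scheme below is one plausible mapping, stated here only as an assumption; the abstract does not specify how the annotation was derived.

```python
import numpy as np

def preference_targets(likes, views, eps=1e-8):
    """Map raw like/view counts to a [0, 1] preference level.

    Log-scaling damps heavy-tailed counts; min-max normalization
    brings the ratio into [0, 1]. This exact scheme is an assumption.
    """
    ratio = np.log1p(likes) / (np.log1p(views) + eps)
    lo, hi = ratio.min(), ratio.max()
    return (ratio - lo) / (hi - lo + eps)

def mse(pred, target):
    """Mean squared error, as reported in the abstract's results."""
    return float(np.mean((pred - target) ** 2))

def mae(pred, target):
    """Mean absolute error, the second reported metric."""
    return float(np.mean(np.abs(pred - target)))
```

A DCNN regression head with a sigmoid output would then be trained against these targets under the MSE objective.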