Abstract: Record-replay testing is widely used in mobile app testing as an automated testing method. However, current record-replay methods depend heavily on internal information of the device or app under test, and the diversity of mobile devices and system platforms limits their practical use. To overcome this limitation, this paper proposes an entirely black-box learning-replay testing approach that combines robotics and vision technology to achieve record-replay testing that supports cross-device…
“…White et al. [46] proposed a supervised deep learning approach that automatically identifies GUI components to improve the coverage of random testing. Xue et al. [47] proposed a supervised deep learning approach to assist record-and-replay GUI testing in mobile and web applications. Mozgovoy and Pyshkin [27] used template matching to recognize objects and GUI elements in screenshots of a mobile game, which allows test assertions to be made against the visual content of the game.…”
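The template-matching idea cited above (locating a known GUI element inside a screenshot so assertions can target it) can be illustrated with a minimal normalized cross-correlation sketch. This is an assumption-laden toy, not the cited authors' implementation: the 8×8 "screenshot" and 2×2 "icon" arrays are invented for the demo, and real tools typically use an optimized library routine rather than a Python loop.

```python
import numpy as np

def match_template(image: np.ndarray, template: np.ndarray):
    """Slide `template` over `image`; return the (row, col) of the best
    match and its normalized cross-correlation score (1.0 = exact)."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -1.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Tiny demo: a 2x2 checkerboard "icon" embedded in a blank "screenshot".
template = np.array([[0.0, 1.0], [1.0, 0.0]])
screen = np.zeros((8, 8))
screen[3:5, 5:7] = template     # place the icon at row 3, col 5
pos, score = match_template(screen, template)
print(pos)    # (3, 5)
print(score)  # 1.0
```

A test assertion against visual content then reduces to checking that the expected element is found at (or near) the expected position with a score above some threshold.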
HTML5 <canvas> is used to display high-quality graphics in web applications such as web games (i.e., <canvas> games). However, automatically testing <canvas> games is not possible with existing web testing techniques and tools, and manual testing is laborious. Many widely used web testing tools rely on the Document Object Model (DOM) to drive web test automation, but the contents of the <canvas> are not represented in the DOM. The main alternative approach, snapshot testing, involves comparing oracle snapshot images with test-time snapshot images using an image similarity metric to catch visual bugs, i.e., bugs in the graphics of the web application. However, creating and maintaining oracle snapshot images for <canvas> games is onerous, defeating the purpose of test automation. In this paper, we present a novel approach to automatically detect visual bugs in <canvas> games. By leveraging an internal representation of objects on the <canvas>, we decompose snapshot images into a set of object images, each of which is compared with a respective oracle asset (e.g., a sprite) using four similarity metrics: percentage overlap, mean squared error, structural similarity, and embedding similarity. We evaluate our approach by injecting 24 visual bugs into a custom <canvas> game, and find that our approach achieves an accuracy of 100%, compared to an accuracy of 44.6% with traditional snapshot testing.
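Two of the four similarity metrics named in the abstract, mean squared error and percentage overlap, can be sketched directly in numpy; structural similarity and embedding similarity typically rely on extra machinery (e.g., scikit-image and a learned encoder), so they are omitted here. The 4×4 "oracle sprite" and the single corrupted pixel are invented purely for illustration.

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared pixel error between two equally sized object images."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def percentage_overlap(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of pixels that are identical in both images."""
    return float(np.mean(a == b))

# An "oracle" sprite and a rendered object image with one corrupted pixel.
oracle = np.zeros((4, 4), dtype=np.uint8)
oracle[1:3, 1:3] = 255
rendered = oracle.copy()
rendered[0, 0] = 255            # injected visual bug

print(mse(oracle, rendered))                 # 255**2 / 16 = 4064.0625
print(percentage_overlap(oracle, rendered))  # 15/16 = 0.9375
```

Comparing per-object images against oracle assets this way is what lets the approach flag a single corrupted sprite, where whole-snapshot comparison would dilute the difference across the full frame.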
CCS CONCEPTS: • Software and its engineering → Software testing and debugging.
“…Zhang et al. [7] proposed a deep-learning model that identifies similarities between GUI states, avoiding redundant test cases and preventing an explosion in the number of test cases by merging redundant isomorphic nodes in the GUI model. Xue et al. [8] proposed a completely black-box learning-replay testing method that combines robotics and vision technologies to enable cross-device and cross-platform record-and-replay testing. At this stage, action recognition techniques are mostly used for recognizing human actions, identity authentication, and similar scenarios.…”
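The idea of merging redundant isomorphic nodes in a GUI model can be sketched with a canonical-form hash: two widget trees that contain the same components, regardless of child order, map to the same key and are treated as one state. This is a simplified illustration of the general technique, not the cited authors' deep-learning model; the dict-based widget trees and the `Frame`/`Button`/`TextView` names are invented for the demo.

```python
def canonical(node: dict) -> str:
    """Canonical string for a GUI widget tree: the node's type plus its
    children's canonical forms in sorted order, so isomorphic subtrees
    (same widgets, different order) map to the same key."""
    kids = sorted(canonical(c) for c in node.get("children", []))
    return node["type"] + "(" + ",".join(kids) + ")"

seen = set()

def is_redundant(state: dict) -> bool:
    """True if an isomorphic GUI state has already been explored."""
    key = canonical(state)
    if key in seen:
        return True
    seen.add(key)
    return False

# Two screens with the same widgets in a different order are isomorphic.
screen_a = {"type": "Frame",
            "children": [{"type": "Button"}, {"type": "TextView"}]}
screen_b = {"type": "Frame",
            "children": [{"type": "TextView"}, {"type": "Button"}]}
print(is_redundant(screen_a))  # False (first time this state is seen)
print(is_redundant(screen_b))  # True  (merged as a redundant duplicate)
```

Deduplicating states this way is what keeps the GUI model, and hence the generated test-case set, from growing combinatorially.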
The explosive growth and rapid version iteration of mobile applications have brought enormous workloads to mobile application testing. Robotic testing methods can handle repetitive testing tasks efficiently, compensating for the limited accuracy of manual testing and improving testing efficiency. Vision-based robotic testing identifies the types of test actions by analyzing expert test videos and generates expert-imitation test cases. The mobile application expert-imitation testing method uses machine learning algorithms to analyze the behavior shown in expert test videos, generates test cases with high reliability and reusability, and drives robots to execute them. However, estimating multi-dimensional gestures from 2D images is difficult, which leads to complex algorithm steps including tracking, detection, and recognition of dynamic gestures. Hence, this article focuses on the analysis and recognition of test actions in mobile application robot testing. Combining an improved YOLOv5 algorithm with the ResNet-152 algorithm, a machine-vision-based modeling method for mobile application test actions is proposed. Precise localization of the hand is accomplished by injecting dynamic anchors, an attention mechanism, and weighted boxes fusion into the YOLOv5 algorithm, raising recognition accuracy from 82.6% to 94.8%. By introducing a pyramid context awareness mechanism into the ResNet-152 algorithm, the accuracy of test action classification is improved from 72.57% to 76.84%. Experiments show that this method reduces the probability of multiple detections and missed detections of test actions and improves the accuracy of test action recognition.
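Of the YOLOv5 modifications mentioned, weighted boxes fusion is the most self-contained to illustrate: overlapping detections of the same object are clustered by IoU and fused into one box whose coordinates are the confidence-weighted average of the cluster. The sketch below is a simplified version of the general WBF idea under that assumption, not the paper's implementation; the example boxes and scores are invented.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def weighted_boxes_fusion(boxes, scores, iou_thr=0.5):
    """Cluster overlapping boxes (matched against each cluster's
    highest-scoring member) and fuse every cluster into one box whose
    coordinates are the confidence-weighted average of its members."""
    order = np.argsort(scores)[::-1]          # highest confidence first
    clusters = []                             # each: ([boxes], [scores])
    for i in order:
        for cl in clusters:
            if iou(cl[0][0], boxes[i]) > iou_thr:
                cl[0].append(boxes[i])
                cl[1].append(scores[i])
                break
        else:
            clusters.append(([boxes[i]], [scores[i]]))
    fused = []
    for bs, ss in clusters:
        w = np.array(ss)[:, None]
        fused.append((np.array(bs) * w).sum(axis=0) / w.sum())
    return fused

# Two detections of the same hand, slightly offset, plus one distant box.
boxes = [[10, 10, 50, 50], [12, 12, 52, 52], [200, 200, 240, 240]]
scores = [0.9, 0.6, 0.8]
fused = weighted_boxes_fusion(boxes, scores)
print(len(fused))  # 2 (the two overlapping hand boxes are merged)
```

Unlike non-maximum suppression, which discards lower-scoring duplicates outright, fusion lets every detection contribute to the final box, which is one way such a change can reduce both multiple detections and missed detections.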
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.