Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering 2017
DOI: 10.1145/3106237.3106298
Guided, stochastic model-based GUI testing of Android apps

Cited by 263 publications (190 citation statements)
References 46 publications
“…Dynodroid [14] extends the random selection using weights and frequencies of events. Model-based strategies such as PUMA [8], DroidBot [11], MobiGUITAR [2], and Stoat [29] apply model-based testing to apps. Systematic exploration strategies range from full-scale symbolic execution [18] to evolutionary algorithms [15,17].…”
Section: Related Work
confidence: 99%
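The excerpt above notes that Dynodroid biases random event selection using event weights and frequencies. As a minimal illustration of that idea (the function and event names below are hypothetical, not Dynodroid's actual API), each event can be weighted by the inverse of how often it has already fired, so rarely exercised events are chosen more often:

```python
import random

def pick_event(events, freq):
    """Choose a GUI event, biasing toward events observed less often.

    `events` and `freq` are illustrative names, not Dynodroid's API:
    an event's weight is the inverse of how often it has fired, so
    under-exercised events get a higher selection probability.
    """
    weights = [1.0 / (1 + freq.get(e, 0)) for e in events]
    return random.choices(events, weights=weights, k=1)[0]

# Simulate a short exploration loop over three abstract events.
random.seed(0)
freq = {}
for _ in range(100):
    e = pick_event(["tap_ok", "scroll", "back"], freq)
    freq[e] = freq.get(e, 0) + 1

# Every event gets exercised; none is starved.
assert set(freq) == {"tap_ok", "scroll", "back"}
assert sum(freq.values()) == 100
```

Compared with uniform random selection, this keeps the frequency distribution across events flatter, which is the intuition behind frequency-aware exploration strategies.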
“…Systematic exploration strategies range from full-scale symbolic execution [18] to evolutionary algorithms [15,17]. None of these approaches explicitly manages diversity, except for Stoat [29], which encodes diversity of sequences into the objective function. Diversity in SBST.…”
Section: Related Work
confidence: 99%
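The excerpt mentions that Stoat folds sequence diversity into its objective function. A sketch of one way such a diversity term could look (illustrative only; Stoat's actual objective also combines coverage and model-based terms) is the mean pairwise Jaccard distance over the event sets of a test suite:

```python
def jaccard_distance(a, b):
    """1 - |A∩B| / |A∪B| over the sets of events in two test sequences."""
    sa, sb = set(a), set(b)
    return 1.0 - len(sa & sb) / len(sa | sb)

def suite_diversity(suite):
    """Mean pairwise Jaccard distance across a test suite.

    Hypothetical diversity term: higher values mean the sequences
    exercise more distinct sets of events.
    """
    pairs = [(i, j) for i in range(len(suite))
             for j in range(i + 1, len(suite))]
    if not pairs:
        return 0.0
    return sum(jaccard_distance(suite[i], suite[j])
               for i, j in pairs) / len(pairs)

identical = [["tap", "back"], ["tap", "back"]]
distinct = [["tap", "back"], ["scroll", "swipe"]]
assert suite_diversity(identical) == 0.0  # same events: no diversity
assert suite_diversity(distinct) == 1.0   # disjoint events: maximal
```

A search-based generator could then maximize a weighted sum of coverage and such a diversity score, rewarding suites whose sequences do not all exercise the same events.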
“…STORYDROID extracts nearly 2 times more activity transitions than the state-of-the-art ATG extraction tool (i.e., IC3 [62]) on both open-source apps and closed-source apps. Besides, STORYDROID significantly outperforms the state-of-the-art dynamic testing tool (i.e., STOAT [67]) on activity coverage for both open-source apps (87% on average) and closed-source apps (74% on average). On average, our rendered images achieve 84% similarity compared with the ones that are dynamically obtained by STOAT.…”
Section: Introduction
confidence: 99%
“…First, ATGs are usually incomplete due to the limitations of static analysis tools [36,62]. Second, to identify all UI pages, a purely static approach may miss parts of UIs that are dynamically rendered (see Section III), whereas a purely dynamic approach [17,40,41,66,67] may not be able to reach all pages in the app, especially those requiring login. Third, the obfuscated activity names lack the semantics of corresponding functionalities, making the storyboard hard to understand.…”
Section: Introduction
confidence: 99%
“…Thus, to maximize app market success, developers aim at attaining high-quality software by revealing and fixing potential software bugs as early as possible [14]. As a natural consequence, in recent years both researchers and practitioners have developed techniques and tools to automate the testing of mobile applications [8], [12], [13], [21]. Such tools aim to reveal unhandled exceptions while exercising the app under test (AUT) with input and system events.…”
Section: Introduction
confidence: 99%