2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE)
DOI: 10.1109/ase.2015.89

Automated Test Input Generation for Android: Are We There Yet? (E)

Abstract: Mobile applications, often simply called "apps", are increasingly widespread, and we use them daily to perform a number of activities. Like all software, apps must be adequately tested to gain confidence that they behave correctly. Therefore, in recent years, researchers and practitioners alike have begun to investigate ways to automate app testing. In particular, because of Android's open-source nature and its large share of the market, a great deal of research has been performed on input generation techniqu…

Cited by 392 publications (310 citation statements). References 31 publications.
“…for an app. While a number of other approaches for automation have been proposed, a recent study [18] showed that Monkey achieved better code coverage and fault-detection capability than other automated tools. Completely random events would prevent apples-to-apples comparison among versions of the same app, so we specify the same random seed that generates the sequence of events for interaction with all of an app's versions.…”
Section: Interaction With Apps
confidence: 99%
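
The fixed-seed Monkey setup described in the excerpt above is straightforward to script. The sketch below (not from the cited papers) replays the same pseudo-random event sequence against multiple versions of one app; the package name, seed value, event budget, and device serials are placeholder assumptions, while the monkey flags (-p and -s) are standard Android tooling.

import subprocess

# Placeholder values (assumptions, not from the cited papers): any fixed seed
# works, as long as the SAME seed is reused for every version of the app.
PACKAGE = "com.example.app"   # app under test (hypothetical package name)
SEED = 42                     # fixed seed -> identical event sequence per run
NUM_EVENTS = 10000            # event budget (the figure suggested for Monkey)

def run_monkey(serial: str) -> None:
    """Inject the same pseudo-random UI event sequence on one device/emulator."""
    subprocess.run(
        ["adb", "-s", serial, "shell", "monkey",
         "-p", PACKAGE,        # restrict generated events to the app under test
         "-s", str(SEED),      # seed the event generator for reproducibility
         str(NUM_EVENTS)],     # number of events to inject
        check=True,
    )

if __name__ == "__main__":
    # Each device/emulator hosts a different version of the same app;
    # the serials are placeholders (assumption).
    for serial in ("emulator-5554", "emulator-5556"):
        run_monkey(serial)

Because Monkey derives its entire event stream from the seed, runs with the same seed and event budget issue the same sequence, which is what makes coverage numbers comparable across app versions.
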
“…The code coverage can be calculated by analyzing the basic information of the byte-code and collecting runtime information about the executed code. We pick two popular automated testing tools, Monkey [5] and Dynodroid [19], for comparison, since a recent study [15] shows that Monkey and Dynodroid achieve higher coverage than other existing testing tools for Android apps. The number of generated events is 10000 for Monkey and 2000 for Dynodroid (the same as Machiry et al. suggested in their work).…”
Section: Methods
confidence: 99%
“…Choudhary et al. [15] conduct an empirical study on existing testing tools for Android apps. An interesting finding is that Monkey and Dynodroid, which are based on a random exploration strategy, can reach higher coverage than other tools with more sophisticated strategies.…”
Section: Related Work
confidence: 99%
“…F-Droid offers direct access to app source code and to app developers through code repositories and issue trackers. It has been used in many other studies on Android testing proposed in the literature [21,22,23] and contains a growing number of applications belonging to different categories.…”
‡ https://f-droid.org/
Section: Objects Selection
confidence: 99%