Software logs are of great value in both industrial and open-source projects. Mobile analytics logging enables developers to collect logs remotely from their apps running on end-user devices, at the cost of recording and transmitting logs across the Internet to a centralised infrastructure. This paper makes a first step in characterising logging practices with a widely adopted mobile analytics logging library, namely Firebase Analytics. We provide an empirical evaluation of the use of Firebase Analytics in 57 open-source Android applications by studying the evolution of their code-bases to understand: a) the needs-in-common that push practitioners to adopt logging practices on mobile devices, and b) the differences in the ways developers use local and remote logging. Our results indicate that mobile analytics logs are less pervasive and less maintained than traditional logging code. Based on our analysis, we believe logging using mobile analytics is more user-centred than traditional logging, which is mainly used to record information for debugging purposes.
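The abstract contrasts local and remote logging on Android. As a hedged illustration only (the tag, event name, and parameters below are invented for this sketch and are not drawn from the study), the two styles typically look like this:

```kotlin
import android.content.Context
import android.os.Bundle
import android.util.Log
import com.google.firebase.analytics.FirebaseAnalytics

// Illustrative sketch: the tag, event name and parameters are assumptions
// made for this example, not prescribed by the paper.
fun recordCheckout(context: Context, cartSize: Int) {
    // Local (traditional) logging: stays on the device, mainly for debugging.
    Log.d("CheckoutFlow", "Checkout started with $cartSize items")

    // Remote (mobile analytics) logging: sent to a centralised backend,
    // typically to understand user behaviour.
    val params = Bundle().apply { putInt("cart_size", cartSize) }
    FirebaseAnalytics.getInstance(context).logEvent("checkout_started", params)
}
```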
Ideally, all software should be easy to use and accessible for a wide range of people; however, even software that appears to be modern and intuitive often falls short of the most basic usability and accessibility goals. Why does this happen? One reason is that sometimes our designs look appealing, so we skip the step of testing their usability and accessibility, all in the interest of speed, reducing costs, and competitive advantage. Even many large-scale applications from Internet companies present fundamental hurdles for some groups of users, and smaller sites are no better. We therefore need ways to help us discover these usability and accessibility problems efficiently and effectively.

Usability and accessibility are two ways of measuring software quality. This article covers several ways in which automated tests can help identify problems and limitations in Web-based applications, where fixing them makes the software more usable and/or accessible. The work complements, rather than replaces, other human usability testing. No matter how valuable in-person testing is, effective automation can increase the value of overall testing by extending its reach and range. Automated tests run with minimal human intervention across a vast set of Web pages would be impractical to conduct in person. Conversely, people are capable of spotting many issues that are hard to program a computer to detect.

Many organizations don't do any usability or accessibility testing at all; often it's seen as too expensive, too specialized, or something to address after testing all the "functionality" (which is seldom completed because of time and other resource constraints). For these organizations, good test automation can help in several ways. Automated tests can guide and inform the software development process by providing information about the software as it is being written. This testing helps the creators of the software fix problems quickly (because they have fast, visible feedback) and experiment with greater confidence. It can also help identify potential issues in the various internal releases by assessing each release quickly and consistently. Some usability experts find the idea of incorporating automated tests into their work alien, uncomfortable, or even unnecessary. Some may already be using static analysis tools such as Hera.
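The article does not prescribe a particular tool. As a minimal sketch of the kind of automated check it describes, the following Kotlin snippet (using the jsoup HTML parser) flags two common accessibility problems; the URL and the two rules are assumptions for illustration, and real tools such as HTML validators or Hera cover far more ground:

```kotlin
import org.jsoup.Jsoup

// Minimal, illustrative accessibility check: fetch a page and report images
// without alt text and form inputs without an associated label.
fun findBasicAccessibilityIssues(url: String): List<String> {
    val issues = mutableListOf<String>()
    val doc = Jsoup.connect(url).get()

    // Images should carry alternative text for assistive technologies.
    for (img in doc.select("img")) {
        if (img.attr("alt").isBlank()) {
            issues.add("<img> missing alt text: ${img.attr("src")}")
        }
    }

    // Form inputs should be associated with a <label> (or carry an aria-label).
    val labelledIds = doc.select("label[for]").map { it.attr("for") }.toSet()
    for (field in doc.select("input")) {
        val type = field.attr("type")
        if (type in setOf("hidden", "submit", "button")) continue
        if (field.id() !in labelledIds && field.attr("aria-label").isBlank()) {
            issues.add("<input> without label: ${field.attr("name")}")
        }
    }
    return issues
}

fun main(args: Array<String>) {
    findBasicAccessibilityIssues(args[0]).forEach(::println)
}
```

A check like this can run across a large set of pages on every build, which is exactly the reach that in-person testing cannot match.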
Google provides Android Vitals, a set of reports and tools for Android developers as part of the Google Play Console. Android Vitals can help developers improve their Android apps after launch by providing information on how an app is performing in key areas such as battery use, performance, and stability (freezes and crashes). Android Vitals also provides various comparisons, including against global bad behavior thresholds, against various peer groups of apps, and across releases of the same app. Developers confirm that Android Vitals notifies them of relevant problems, and they find it valuable even when they also use crash reporting and mobile analytics. The underlying data is used by Google to assess the relative quality of Android apps, and the perceived quality may materially affect the visibility of an app in the Google Play Store. Yet little is known about these tools. This paper outlines various experiences, from the developers' perspective, of using Android Vitals with several popular Android apps, to help open discussions and suggest further research areas. It introduces an open-source project, created as part of our work, that enables developers to download pertinent data, particularly crash reports. The data can be analysed both by the development team and by others. A particular benefit of this tool is that it makes the data available outside of the Google platform, which allows developers and (indirectly) researchers to develop additional analysis techniques not currently provided by the platform.

CCS CONCEPTS: • Software and its engineering → Application specific development environments; Software testing and debugging; Empirical software validation.
Automated usability tests can be valuable companions to in-person tests.