Smart phones comprise a large and rapidly growing market. These devices provide unprecedented opportunities for sensor mining, since they include a large variety of sensors: an acceleration sensor (accelerometer), location sensor (GPS), direction sensor (compass), audio sensor (microphone), image sensor (camera), proximity sensor, light sensor, and temperature sensor. Combined with the ubiquity and portability of these devices, these sensors provide us with an unprecedented view into people's lives, and an excellent opportunity for data mining. But there are obstacles to sensor mining applications, due to the severe resource limitations (e.g., power, memory, bandwidth) faced by mobile devices. In this paper we discuss these limitations and their impact, and propose a solution based on our WISDM (WIreless Sensor Data Mining) smart phone-based sensor mining architecture.
Activity Recognition (AR), which identifies the activity that a user performs, is attracting a tremendous amount of attention, especially with the recent explosion of smart mobile devices. These ubiquitous mobile devices, most notably but not exclusively smartphones, provide the sensors, processing, and communication capabilities that enable the development of diverse and innovative activity recognition-based applications. However, although there has been a great deal of research into activity recognition, surprisingly little practical work has been done on applications for mobile devices. In this paper we describe and categorize a variety of activity recognition-based applications. Our hope is that this work will encourage the development of such applications and also influence the direction of activity recognition research.
Image recognition systems offer the promise to learn from images at scale without requiring expert knowledge. However, past research suggests that machine learning systems often produce biased output. In this article, we evaluate potential gender biases of commercial image recognition platforms using photographs of U.S. members of Congress and a large number of Twitter images posted by these politicians. Our crowdsourced validation shows that commercial image recognition systems can produce labels that are correct and biased at the same time, as they selectively report a subset of many possible true labels. We find that images of women received three times more annotations related to physical appearance. Moreover, women in images are recognized at substantially lower rates than men. We discuss how encoded biases such as these affect the visibility of women, reinforce harmful gender stereotypes, and limit the validity of the insights that can be gathered from such data.
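The disparity described above ("three times more annotations related to physical appearance") can be measured by comparing the share of appearance-related labels each platform returns for images of women versus men. The sketch below illustrates that measurement; the records, label vocabulary, and `APPEARANCE` set are hypothetical stand-ins, not the study's data or categories.

```python
from collections import defaultdict

# Hypothetical annotation records: (subject_gender, labels returned by a
# commercial image recognition API). Values are illustrative only.
annotations = [
    ("woman", {"smile", "hairstyle", "person"}),
    ("woman", {"beauty", "person"}),
    ("man",   {"official", "person"}),
    ("man",   {"suit", "smile", "person"}),
]

# Labels we (hypothetically) classify as describing physical appearance.
APPEARANCE = {"smile", "hairstyle", "beauty", "suit"}

def appearance_rate(records):
    """Mean per-image share of appearance-related labels, grouped by gender."""
    shares = defaultdict(list)
    for gender, labels in records:
        shares[gender].append(len(labels & APPEARANCE) / len(labels))
    return {g: sum(v) / len(v) for g, v in shares.items()}

rates = appearance_rate(annotations)
```

In the actual study the label-to-category mapping was validated by crowd workers rather than fixed in advance, which is what makes the "correct yet biased" finding possible.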
Human activity recognition (AR) has begun to mature as a field, but for AR research to thrive, large, diverse, high quality AR data sets must be publicly available and AR methodology must be clearly documented and standardized. In the process of comparing our AR research to other efforts, however, we found that most AR data sets are sufficiently limited as to impact the reliability of existing research results, and that many AR research papers do not clearly document their experimental methodology and often make unrealistic assumptions. In this paper we outline problems and limitations with AR data sets and describe the methodology problems we noticed, in the hope that this will lead to the creation of improved and better documented data sets and improved AR experimental methodology. Although we cover a broad array of methodological issues, our primary focus is on an often overlooked factor, model type, which determines how AR training and test data are partitioned, and how AR models are evaluated. Our prior research indicates that personal, hybrid, and impersonal/universal models yield dramatically different performance [30], yet many research studies do not highlight or even identify this factor. We make concrete recommendations to address these issues and also describe our own publicly available AR data sets.
Activity recognition allows ubiquitous mobile devices like smartphones to be context-aware and also enables new applications, such as mobile health applications that track a user's activities over time. However, it is difficult for smartphone-based activity recognition models to perform well, since only a single body location is instrumented. Most research focuses on universal/impersonal activity recognition models, where the model is trained using data from a panel of representative users. In this paper we compare the performance of these impersonal models with those of personal models, which are trained using labeled data from the intended user, and hybrid models, which combine aspects of both types of models. Our analysis indicates that personal training data is required for high accuracy, but that only a very small amount of training data is necessary. This conclusion led us to implement a self-training capability into our Actitracker smartphone-based activity recognition system [1], and we believe personal models can also benefit other activity recognition systems.
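The personal/impersonal/hybrid distinction above comes down to how labeled data is partitioned into training and test sets with respect to the target user. A minimal sketch of the three splits, assuming each sample is a dict with a `"user"` field (the field name and data shape are illustrative, not from the paper):

```python
import random

def impersonal_split(data, test_user):
    """Impersonal/universal model: train on all other users, test on test_user."""
    train = [d for d in data if d["user"] != test_user]
    test = [d for d in data if d["user"] == test_user]
    return train, test

def personal_split(data, test_user, train_frac=0.5, seed=0):
    """Personal model: train and test on disjoint samples from the same user."""
    own = [d for d in data if d["user"] == test_user]
    random.Random(seed).shuffle(own)
    cut = int(len(own) * train_frac)
    return own[:cut], own[cut:]

def hybrid_split(data, test_user, train_frac=0.5, seed=0):
    """Hybrid model: other users' data plus some of the target user's data."""
    p_train, p_test = personal_split(data, test_user, train_frac, seed)
    others = [d for d in data if d["user"] != test_user]
    return others + p_train, p_test
```

The paper's finding is that even a small `train_frac` of the target user's own data (the personal or hybrid setting) yields much higher accuracy than the purely impersonal split.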
Smart phones are quite sophisticated and increasingly incorporate diverse and powerful sensors. One such sensor is the tri-axial accelerometer, which measures acceleration in all three spatial dimensions. The accelerometer was initially included for screen rotation and advanced game play, but can support other applications. In prior work we showed how the accelerometer could be used to identify and/or authenticate a smart phone user [11]. In this paper we extend that prior work to identify user traits such as sex, height, and weight, by building predictive models from labeled accelerometer data using supervised learning methods. The identification of such traits is often referred to as "soft biometrics" because these traits are not sufficiently distinctive or invariant to uniquely identify an individual, but they can be used in conjunction with other information for identification purposes. While our work can be used for biometric identification, our primary goal is to learn as much as possible about the smart phone user. This mined knowledge can then be used for a number of purposes, such as marketing or making an application more intelligent (e.g., a fitness app could consider a user's weight when calculating calories burned).
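The "predictive models from labeled accelerometer data" pipeline above can be sketched in miniature: summarize raw accelerometer windows into per-user feature vectors, then fit a supervised classifier on labeled examples. The features, numbers, and nearest-centroid classifier below are illustrative assumptions; the paper uses standard supervised learners on its own feature set, not this toy model.

```python
import math

# Hypothetical per-user feature vectors (e.g., mean and standard deviation of
# vertical acceleration in m/s^2) paired with a soft-biometric label.
# All values are synthetic, chosen only to illustrate the setup.
train = [
    ((9.6, 2.1), "male"),
    ((9.5, 2.3), "male"),
    ((9.9, 1.4), "female"),
    ((9.8, 1.5), "female"),
]

def centroids(samples):
    """Mean feature vector per class label."""
    sums, counts = {}, {}
    for x, y in samples:
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: tuple(v / counts[y] for v in s) for y, s in sums.items()}

def predict(model, x):
    """Classify x by its nearest class centroid (Euclidean distance)."""
    return min(model, key=lambda y: math.dist(model[y], x))

model = centroids(train)
```

Predicting height or weight would use the same feature pipeline with a regression model in place of the classifier.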
This is a brief outline of early work in progress. We will update this document after analyzing results from a crowdsourced validation procedure.