Abstract. Despite the availability of sensor and smartphone devices to fulfill the ubiquitous computing vision, the state of the art falls short of this vision. We argue that the reason for this gap is the lack of an infrastructure to task/utilize these devices for collaboration. We propose that Twitter can provide an "open" publish-subscribe infrastructure for sensors and smartphones, and pave the way for ubiquitous crowd-sourced sensing and collaboration applications. We design and implement a crowd-sourced sensing and collaboration system over Twitter, and showcase our system in the context of two applications: a crowd-sourced weather radar and a participatory noise-mapping application. Our results from real-world Twitter experiments give insights into the feasibility of this approach and outline the research challenges in integrating sensors/smartphones with Twitter.
Location-based queries are quickly becoming ubiquitous. However, traditional search engines perform poorly for a significant fraction of location-based queries, which are non-factual (i.e., subjective, relative, or multi-dimensional). As an alternative, we investigate the feasibility of answering location-based queries by crowdsourcing over Twitter. More specifically, we study the effectiveness of employing location-based services (such as Foursquare) for finding appropriate people to answer a given location-based query. Our findings give insights into the feasibility of this approach and highlight some research challenges in social search engines.
Abstract. This paper describes the design, implementation and deployment of LineKing (LK), a crowdsourced line wait-time monitoring service. LK consists of a smartphone component (that provides automatic, energy-efficient, and accurate wait-time detection), and a cloud backend (that uses the collected data to provide accurate wait-time estimation). LK is used on a daily basis by hundreds of users to monitor the wait-times of a coffee shop on our university campus. The novel wait-time estimation algorithms deployed at the cloud backend provide mean absolute errors of less than 2-3 minutes.
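The wait-time accuracy above is reported as mean absolute error (MAE). As a quick illustration (this is a generic sketch with made-up numbers, not LineKing's actual code), the metric compares estimated wait times against observed ones:

```python
def mean_absolute_error(estimates, actuals):
    """Mean absolute error between estimated and observed wait times."""
    assert len(estimates) == len(actuals) and estimates
    return sum(abs(e - a) for e, a in zip(estimates, actuals)) / len(estimates)

# Hypothetical wait times in minutes: estimates vs. ground truth.
print(mean_absolute_error([5, 12, 3], [7, 10, 3]))  # ≈ 1.33 minutes
```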
We leverage crowd wisdom for multiple-choice question answering, and employ lightweight machine learning techniques to improve the aggregation accuracy of crowdsourced answers to these questions. In order to develop more effective aggregation methods and evaluate them empirically, we developed and deployed a crowdsourced system for playing the “Who wants to be a millionaire?” quiz show. Analyzing our data (which consist of more than 200,000 answers), we find that by simply going with the most-selected answer, we can answer over 90% of the questions correctly, but the success rate of this technique plunges to 60% for the later/harder questions in the quiz show. To improve the success rates on these later/harder questions, we investigate novel weighted aggregation schemes for aggregating the answers obtained from the crowd. By using weights optimized for the reliability of participants (derived from the participants’ confidence), we show that we can raise the accuracy on the harder questions by 15%, reaching a 95% overall average accuracy. Our results make a good case for applying machine learning techniques to build more accurate crowdsourced question-answering systems.
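To illustrate the contrast between plain majority voting and the weighted aggregation described above, here is a minimal sketch (our own illustration with made-up weights, not the paper's implementation): per-participant reliability weights, e.g. derived from confidence, can flip the outcome relative to an unweighted vote.

```python
from collections import defaultdict

def aggregate(answers, weights=None):
    """Pick the winning choice for a multiple-choice question.

    answers: list of (participant_id, choice) pairs.
    weights: optional dict mapping participant_id to a reliability
             weight; when omitted, every vote counts 1.0 (plain
             majority voting).
    """
    scores = defaultdict(float)
    for pid, choice in answers:
        scores[choice] += 1.0 if weights is None else weights.get(pid, 1.0)
    return max(scores, key=scores.get)

votes = [("u1", "A"), ("u2", "B"), ("u3", "B")]
print(aggregate(votes))                                      # "B" (majority)
print(aggregate(votes, {"u1": 0.9, "u2": 0.3, "u3": 0.3}))   # "A" (weighted)
```

With the hypothetical weights, the single highly reliable participant outweighs two low-reliability ones, which is the intuition behind the confidence-derived weighting.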
This paper presents a study of the timing performance of Google Cloud Messaging (GCM). We evaluate GCM in real-world experiments at a reasonable scale, involving thousands of real users. Our findings reveal that GCM message delivery is unpredictable: a reliable connection to Google's GCM servers on the client device does not guarantee timely message arrival. Therefore, GCM is not suitable for time-sensitive and/or "must-deliver-to-all" app scenarios. On the other hand, GCM delivers push messages to a large portion of the subscribers (more than 40% in every experiment scenario) within a reasonable timeframe (10 seconds). Therefore, GCM may be a good fit for application scenarios where random multicasting is sufficient, such as crowdsourcing systems. Our results provide a thorough evaluation of GCM performance and will help developers and researchers decide whether GCM is suitable for their intended use cases.
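The "delivered within 10 seconds" figure above is a deadline-based delivery ratio. As a hypothetical illustration (the function and the latency data are our own, not from the paper's experiments), it can be computed by counting on-time arrivals over all subscribers, where undelivered messages simply contribute no latency sample:

```python
def delivery_ratio(latencies_s, total_subscribers, deadline_s=10.0):
    """Fraction of all subscribers whose push message arrived
    within the deadline (in seconds)."""
    on_time = sum(1 for t in latencies_s if t <= deadline_s)
    return on_time / total_subscribers

# Hypothetical arrival latencies for 6 of 10 subscribers;
# the other 4 never received the message.
lat = [0.8, 2.1, 4.0, 9.5, 30.0, 120.0]
print(delivery_ratio(lat, 10))  # 0.4 -> 40% reached within 10 s
```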