While research on collaborative tagging systems has largely been the purview of computer scientists, the behavior of these systems is driven by the psychology of their users. Here we explore how simple models of boundedly rational human decision making may partly account for the high-level properties of a collaborative tagging environment, in particular with respect to the distribution of tags used across the folksonomy. We discuss several plausible heuristics people might employ to decide on tags to use for a given item, and then describe methods for testing evidence of such strategies in real collaborative tagging data. Using a large dataset of annotations collected from users of the social music website Last.fm with a novel crawling methodology (approximately one million total users), we extract the parameters for our decision-making models from the data. We then describe a set of simple multi-agent simulations that test our heuristic models, and compare their results to the extracted parameters from the tagging dataset. Results indicate that simple social copying mechanisms can generate surprisingly good fits to the empirical data, with implications for the design and study of tagging systems.
In this paper, we examine the effects of three video game variables on training participants to mitigate three cognitive biases: camera perspective (1st-person versus 3rd-person), session duration, and repeated play. We developed a 70-minute, 3D immersive video game for use as an experimentation test bed. One hundred sixty-three participants either watched an instructional decision video or played one of four versions of the game. Each participant's learning was assessed by comparing his or her post-test and pre-test scores for knowledge of the biases and ability to mitigate them. Results indicated that repeated game play across two sessions produced the largest improvement in learning, and was more effective for mitigating biases than either the instructional decision video or a single game session. Surprisingly, session duration did not improve learning, and results were mixed on whether the third-person perspective improved learning. Overall, the video game did improve participants' ability to learn about and mitigate three cognitive biases. Implications for training using video games are discussed.
A folksonomy is ostensibly an information structure built up by the "wisdom of the crowd", but is the "crowd" really doing the work? Tagging is in fact a sharply skewed process in which a small minority of users generate an overwhelming majority of the annotations. Using data from the social music site Last.fm as a case study, this paper explores the implications of this tagging imbalance. Partitioning the folksonomy into two halves, one created by the prolific minority and the other by the non-prolific majority of taggers, we examine the large-scale differences between these two sub-folksonomies and the users generating them, and then explore several possible accounts of what might be driving these differences. We find that prolific taggers preferentially annotate content in the long tail of less popular items, use tags with higher information content, and show greater tagging expertise. These results indicate that "supertaggers" not only tag more than their counterparts, but in quantifiably different ways.
A folksonomy is ostensibly an information structure built up by the "wisdom of the crowd", but is the "crowd" really doing the work? Tagging is in fact a sharply skewed process in which a small minority of "supertagger" users generate an overwhelming majority of the annotations. Using data from three large-scale social tagging platforms, we explore (a) how best to quantify the imbalance in tagging behavior and formally define a supertagger, (b) how supertaggers differ from other users in their tagging patterns, and (c) whether effects of motivation and expertise inform our understanding of what makes a supertagger. Our results indicate that such prolific users not only tag more than their counterparts, but in quantifiably different ways. Specifically, we find that supertaggers are more likely to label content in the long tail of less popular items, that they show differences in the patterns of content tagged and terms utilized, and that they are measurably different with respect to tagging expertise and motivation. These findings suggest we should question the extent to which folksonomies achieve crowdsourced classification via the "wisdom of the crowd", especially for broad folksonomies like Last.fm as opposed to narrow folksonomies like Flickr.