Hundreds of millions of people play intellectually demanding video games every day. What does individual performance in these games tell us about cognition? Here, we describe two studies that examine the potential link between intelligence and performance in one of the most popular video game genres in the world (Multiplayer Online Battle Arenas: MOBAs). In the first study, we show that performance in the popular MOBA League of Legends correlates with fluid intelligence as measured under controlled laboratory conditions. In the second study, we show that the age profile of performance in the two most widely played MOBAs (League of Legends and Dota 2) matches that of raw fluid intelligence. We discuss and extend the previous literature on intelligence and video games, and suggest that commercial video games can be useful as 'proxy' tests of cognitive performance at a global population level.
It has been suggested that the brain pre-empts changes in the environment by generating predictions, although real-time electrophysiological evidence of prediction violations in the domain of visual perception remains elusive. In a series of experiments we showed participants sequences of images that followed a predictable implied sequence or whose final image violated the implied sequence. Through careful design we were able to use the same final image transitions across predictable and unpredictable conditions, ensuring that any differences in neural responses were due only to preceding context and not to the images themselves. EEG and MEG recordings showed that early (N170) and mid-latency (N300) visual evoked potentials were robustly modulated by images that violated the implied sequence across a range of types of image change (expression deformations, rigid rotations and visual field location). This modulation occurred irrespective of stimulus object category. Although the stimuli were static images, MEG source reconstruction of the early-latency signal (N/M170) localized expectancy violation signals to brain areas associated with motion perception. Our findings suggest that the N/M170 can index mismatches between predicted and actual visual inputs in a system that predicts trajectories based on ongoing context. More generally, we suggest that the N/M170 may reflect a "family" of brain signals generated across widespread regions of the visual brain, indexing the resolution of top-down influences against incoming sensory data. This has important implications for understanding the N/M170 and investigating how the brain represents context to generate perceptual predictions.
Esports (competitive video games) have grown into a global phenomenon with over 450 million viewers and a 1.5 billion USD market. Esports broadcasts follow a similar structure to traditional sports broadcasts. However, due to their virtual nature, a large and detailed amount of data is available about in-game actions that is not currently accessible in traditional sport. This provides an opportunity to incorporate novel insights about complex aspects of gameplay into the audience experience, enabling more in-depth coverage for experienced viewers and increased accessibility for newcomers. Previous research has explored only a limited range of ways in which data could be incorporated into esports viewing (e.g. post-match data visualizations), and only a few studies have investigated how the presentation of statistics affects spectators' experiences and viewing behaviors. We present Weavr, a companion app that allows audiences to consume data-driven insights during and around esports broadcasts. We report on deployments at two major tournaments that provide ecologically valid findings about how the app's features were experienced by audiences and their impact on viewing behavior. We discuss implications for the design of second-screen apps for live esports events, and for traditional sports as similar data becomes available to them via improved tracking technologies.
An unresolved goal in face perception is to identify the brain areas involved in face processing and simultaneously understand the timing of their involvement. Currently, high-spatial-resolution imaging techniques identify the fusiform gyrus as subserving processing of invariant face features relating to identity. High-temporal-resolution imaging techniques localize an early-latency evoked component, the N/M170, as having a major generator in the fusiform region; however, this evoked component is not believed to be associated with the processing of identity. To resolve this, we used novel magnetoencephalographic beamformer analyses to spatially localize cortical regions in humans whose trial-by-trial activity differentiated faces from objects, and to interrogate their functional sensitivity by analyzing the effects of stimulus repetition. This demonstrated a temporal sequence of processing that provides category-level and then item-level invariance. The right fusiform gyrus showed adaptation to faces (not objects) at ~150 ms after stimulus onset regardless of face identity; however, at the later latency of ~200-300 ms, this area showed greater adaptation to repeated-identity faces than to novel identities. This is consistent with an involvement of the fusiform region in both early and mid-latency face-processing operations, with only the latter showing sensitivity to invariant face features relating to identity.
Multiplayer strategy games are examples of imperfect-information games, in which information about the game state can be retrieved through in-game mechanics. One such mechanic is vision. Within esports titles of this genre, such as League of Legends (LoL) and Dota 2, players often gather map information through the use of friendly units called wards. In LoL, one of the most popular esports titles worldwide, warding has hitherto been evaluated only using a heuristic called vision score, provided by Riot, the game's developer. In this paper, we examine the accuracy of LoL's vision score at predicting the overall game winner within the context supported by the game. We ported LoL's vision score to Dota 2, a similarly popular esports title, and compared its performance against a novel warding model. We compared both models not only on predicting the overall winner, but also on reflecting the current state of the game and on predicting short-term game advantage and events. We found that our model significantly outperformed LoL's vision score. Additionally, we trained and evaluated a neural network model for predicting the value of wards in real time.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.