Proceedings of the International Symposium on Pervasive Displays 2014
DOI: 10.1145/2611009.2611032
Gestures Everywhere

Abstract: Gestures Everywhere is a dynamic framework for multimodal sensor fusion, pervasive analytics and gesture recognition. Our framework aggregates the real-time data from approximately 100 sensors that include RFID readers, depth cameras and RGB cameras distributed across 30 interactive displays that are located in key public areas of the MIT Media Lab. Gestures Everywhere fuses the multimodal sensor data using radial basis function particle filters and performs real-time analysis on the aggregated data. This incl…
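The abstract's fusion step can be illustrated with a minimal sketch of a particle filter whose observation likelihood uses a radial basis function kernel. This is not the authors' implementation; the one-dimensional state, the `gamma` bandwidth, and the example sensor readings are all illustrative assumptions.

```python
import math
import random

def rbf(x, center, gamma=2.0):
    # Radial basis function kernel: weight decays with squared distance
    # between a particle's state and a sensor observation.
    return math.exp(-gamma * (x - center) ** 2)

def particle_filter_step(particles, observations, motion_noise=0.05):
    # Predict: diffuse each particle with Gaussian motion noise.
    predicted = [p + random.gauss(0.0, motion_noise) for p in particles]
    # Update: weight each particle by the product of RBF likelihoods
    # across all sensor observations (simple multimodal fusion).
    weights = []
    for p in predicted:
        w = 1.0
        for obs in observations:
            w *= rbf(p, obs)
        weights.append(w)
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw particles with probability proportional to weight.
    return random.choices(predicted, weights=weights, k=len(predicted))

# Usage: fuse two noisy position readings of the same user, e.g. one
# from a depth camera and one from an RFID reader (values assumed).
random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
for _ in range(10):
    particles = particle_filter_step(particles, observations=[3.0, 3.4])
estimate = sum(particles) / len(particles)
# estimate converges near the fused sensor readings
```

The RBF kernel lets disagreeing sensors down-weight particles smoothly rather than discarding them outright, which is one reason such kernels suit fusion of heterogeneous, differently-noisy modalities.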

Cited by 12 publications (2 citation statements). References 16 publications (21 reference statements).
“…However, since it readily provides real-time tracking of users, devices, and activities, we expect it to become a part of a larger tracking ecosystem consisting of multiple cameras and sensors. For example, this could advance strategies such as those in the Gestures Everywhere framework [12] which integrates sensing across multiple spaces to provide both low- and high-level tracking information about the users and groups. A generic and ubiquitous infrastructure for ad-hoc interactions, by combining EagleSense and other space models (e.g., [32,52]), can enable visualizations of interactive spaces as well as opting-in and opting-out interaction techniques at scale.…”
Section: Tracking Space
confidence: 99%
“…To understand the interaction of viewers across several displays, Gillian et al [13] designed and deployed a framework that is capable of recognizing viewers across displays (using both depth-cameras and additional sensing functionality). While this system provides analytical insights into how viewers interact with content, it also provides additional features to the user and personalized content across a range of displays.…”
Section: Digital Signage Analytics
confidence: 99%