“…Using a combination of cloud-based data sharing and touch sensing, Mistry et al created SPARSH, software for transferring media from device to device by touching one and then the other [14]. Others have developed proxemic interfaces enabled by specialized sensors or radios, like RFID [1]. Increasingly, the near field communication (NFC) standard is enabling very short range proximal sensing and communication on equipped mobile devices.…”
Section: Proximal Sensing and Proxemic Interaction
Abstract: We present a wearable system that uses ambient electromagnetic interference (EMI) as a signature to identify electronic devices and support proxemic interaction. We designed a low-cost tool, called EMI Spy, and a software environment for rapid deployment and evaluation of ambient EMI-based interactive infrastructure. EMI Spy captures electromagnetic interference and delivers the signal to a user's mobile device or PC, either through the device's wired audio input or wirelessly over Bluetooth. The wireless version can be worn on the wrist, communicating with the user's mobile device in their pocket. Users can train the system in less than 1 second to uniquely identify displays in a 2-m radius around them, as well as to detect pointing at a distance and touching gestures on the displays in real time. The combination of a low-cost EMI logger and an open-source machine learning toolkit allows developers to quickly prototype proxemic, touch-to-connect, and gestural interaction. We demonstrate the feasibility of mobile, EMI-based device and gesture recognition with preliminary user studies in 3 scenarios, achieving 96% classification accuracy at close range for 6 digital signage displays distributed throughout a building, and 90% accuracy in classifying pointing gestures at neighboring desktop LCD displays. We were able to distinguish 1- and 2-finger touching with perfect accuracy and show indications of a way to determine power consumption of a device via touch. Our system is particularly well-suited to temporary use in a public space, where the sensors could be distributed to support a pop-up interactive environment anywhere with electronic devices. By designing for low-cost, mobile, flexible, and infrastructure-free deployment, we aim to enable a host of new proxemic interfaces to existing appliances and displays.
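The abstract describes classifying displays by their EMI "fingerprint" captured through an audio input and matched with a short training pass. A minimal sketch of that idea, with invented device frequencies, sample rate, and a simple nearest-centroid matcher standing in for the paper's machine learning toolkit:

```python
import numpy as np

RATE = 44100   # audio-input sample rate (assumption)
WINDOW = 2048  # samples per capture window (assumption)

def capture(freqs, noise=0.05, rng=None):
    """Simulate one EMI capture: a mix of interference tones plus noise."""
    rng = rng or np.random.default_rng(0)
    t = np.arange(WINDOW) / RATE
    sig = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    return sig + noise * rng.standard_normal(WINDOW)

def fingerprint(signal):
    """Feature vector: normalized magnitude spectrum of one window."""
    mag = np.abs(np.fft.rfft(signal * np.hanning(WINDOW)))
    return mag / np.linalg.norm(mag)

# Hypothetical EMI profiles for three displays (not from the paper).
DEVICES = {"display_a": [7800, 15600],
           "display_b": [9100, 18200],
           "display_c": [12400]}

def train(rng):
    """Average a few fingerprints per device into a centroid template."""
    return {name: np.mean([fingerprint(capture(f, rng=rng))
                           for _ in range(5)], axis=0)
            for name, f in DEVICES.items()}

def classify(signal, templates):
    """Nearest-centroid match by dot-product similarity."""
    fp = fingerprint(signal)
    return max(templates, key=lambda n: float(fp @ templates[n]))
```

Because each display's switching electronics emit tones at distinct frequencies, a handful of spectral snapshots is enough to separate them, which is consistent with the sub-second training time the abstract reports.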
“…Gestures Everywhere is built on top of an existing interactive information system [3], which has been running throughout the Media Lab since early 2010. This system, referred to as the "Glass Infrastructure", consists of over 30 digital information displays distributed at key locations throughout the building.…”
Section: System Architecture
“…These sensors include cameras [16], capacitive touchscreens [3], infrared proximity sensors and beyond [13]. As these displays become pervasive throughout our workplaces, retail venues and other public spaces, they will become an invaluable part of the rapidly growing sensor networks that observe every moment of our lives.…”
Gestures Everywhere is a dynamic framework for multimodal sensor fusion, pervasive analytics, and gesture recognition. Our framework aggregates real-time data from approximately 100 sensors, including RFID readers, depth cameras, and RGB cameras, distributed across 30 interactive displays located in key public areas of the MIT Media Lab. Gestures Everywhere fuses the multimodal sensor data using radial basis function particle filters and performs real-time analysis on the aggregated data. This includes key spatio-temporal properties such as presence, location, and identity, as well as higher-level analysis including social clustering and gesture recognition. We describe the algorithms and architecture of our system and discuss the lessons learned from the system's deployment.
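The abstract names particle filters as the fusion mechanism for multimodal sensor data. A minimal 1-D sketch of that idea, fusing two hypothetical noisy sensors (a depth camera and an RFID reader) into one location estimate; the noise levels, state model, and sensor names are assumptions, and the paper's radial-basis-function formulation is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 500                                  # number of particles
particles = rng.uniform(0.0, 10.0, N)    # 1-D positions along a corridor
weights = np.ones(N) / N

def step(particles, z_depth, z_rfid, sigma_d=0.3, sigma_r=1.0):
    """One predict-update-resample cycle fusing two noisy readings."""
    # Motion model: random walk.
    particles = particles + rng.normal(0.0, 0.1, particles.size)
    # Weight by the likelihood of each sensor reading (Gaussian noise models).
    w = np.exp(-0.5 * ((z_depth - particles) / sigma_d) ** 2)
    w *= np.exp(-0.5 * ((z_rfid - particles) / sigma_r) ** 2)
    w += 1e-12
    w /= w.sum()
    # Systematic resampling keeps the particle set focused on likely states.
    u = (rng.random() + np.arange(w.size)) / w.size
    idx = np.minimum(np.searchsorted(np.cumsum(w), u), w.size - 1)
    return particles[idx]

true_pos = 6.2
for _ in range(20):
    z_d = true_pos + rng.normal(0, 0.3)   # simulated depth-camera reading
    z_r = true_pos + rng.normal(0, 1.0)   # simulated RFID-strength reading
    particles = step(particles, z_d, z_r)
estimate = particles.mean()
```

The appeal for a deployment like this is that each sensor contributes according to its own noise model, so a precise-but-sparse sensor and a coarse-but-ubiquitous one reinforce each other in a single estimate.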
“…Looking at recent research trends related to electronic bulletin boards, technical work includes research on server and interface technologies [2, 7], research on sharing content with mobile devices [16], research on glass structures [14], and research on broadening application areas [1, 5, 6]. On the social-science side, there is cognitive-science research on users [8], research on users' subjective response times [18], and research on users' unconscious behavior [15] • Unlimited ideas and resources can be obtained from external developers.…”
Paper submitted: October 26, 2013; revision completed: December 18, 2013; accepted for publication: December 22, 2013. * Corporate Research Institute, Namsun Industry Co., Ltd., first author. ** School of Business Administration, Chonnam National University, corresponding author. The software used for electronic bulletin boards has the shortcoming that adding new functions and new information types to custom-built software is very difficult, and rapidly adopting newly introduced media types is impossible because the software is developed as a one-off custom solution. Consequently, additional cost and time are required to enhance the functionality or performance of DID (Digital Information Display) software. In this paper, we propose a scheme to package DID content and customize it using a plug-in method, and we conducted a case study of this scheme. We designed a platform that can install apps on a DID as a form of content. Apps can be inserted into the DID platform as plug-ins and run separately from the DID framework. As a result, we obtained the advantage that various…
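The abstract's core idea is that content "apps" plug into the DID platform and run separately from the framework, so new content types can be added without rebuilding the software. A minimal sketch of that plug-in pattern; the class and method names are hypothetical and do not reflect the paper's actual platform API:

```python
from typing import Callable, Dict, List

class DIDPlatform:
    """Hosts pluggable content apps; each app renders independently."""

    def __init__(self) -> None:
        self._apps: Dict[str, Callable[[], str]] = {}

    def plug_in(self, name: str, render: Callable[[], str]) -> None:
        # New content is registered at runtime, without rebuilding the framework.
        self._apps[name] = render

    def unplug(self, name: str) -> None:
        # Removing an app leaves the framework and the other apps untouched.
        self._apps.pop(name, None)

    def refresh(self) -> List[str]:
        # The framework only orchestrates; each app produces its own content.
        return [render() for render in self._apps.values()]

board = DIDPlatform()
board.plug_in("weather", lambda: "Sunny, 21°C")
board.plug_in("news", lambda: "Campus event at 3pm")
```

The framework never needs to know what a given app displays; it only calls the registered render function, which is what lets new media types be adopted without modifying the platform itself.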