2019
DOI: 10.3390/e21040414

Action Recognition Using Single-Pixel Time-of-Flight Detection

Abstract: Action recognition is a challenging task that plays an important role in many robotic systems, which highly depend on visual input feeds. However, due to privacy concerns, it is important to find a method that can recognise actions without using a visual feed. In this paper, we propose a concept for detecting actions while preserving the test subject’s privacy. Our proposed method relies only on recording the temporal evolution of light pulses scattered back from the scene. Such data trace to record one action …
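As context for the data format described in the abstract, the sketch below shows how such temporal traces could, in principle, be fed to a standard classifier. It is not the authors' implementation: the histogram length, frame count, number of action classes, and the MLP classifier are all illustrative assumptions, and the data are synthetic stand-ins.

```python
# Minimal sketch: training a classifier on single-pixel time-of-flight traces.
# Shapes, bin counts, and class counts are assumptions, not the paper's setup.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

n_samples, n_frames, n_bins = 200, 30, 64   # assumed: 30 temporal histograms per recorded action
n_classes = 4                               # assumed number of action classes

# Each sample mimics the temporal evolution of back-scattered light pulses:
# a sequence of per-frame photon-count histograms from a single detector.
X = rng.poisson(lam=5.0, size=(n_samples, n_frames, n_bins)).astype(float)
y = rng.integers(0, n_classes, size=n_samples)

# Flatten each temporal trace into one feature vector and fit a small classifier.
X_flat = X.reshape(n_samples, -1)
X_train, X_test, y_train, y_test = train_test_split(X_flat, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

With real recordings, the synthetic Poisson histograms would be replaced by the measured photon-count traces, while the rest of the pipeline stays the same.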

Cited by 9 publications (11 citation statements). References: 54 publications.
“…The most natural and intuitive method for body movement projection is based on the skeleton, which represents hierarchically arranged joint kinematics along with body segments [12]. In the past, research on body tracking was based on video data, which made it extremely challenging and usually amounted to single-frame analysis [13, 14, 15]. However, the definition of motion is a change in position over time; thus, it should be described as a set of consecutive frame sequences.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
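The skeleton representation mentioned in the excerpt above is typically stored as per-joint 3D coordinates over consecutive frames. The minimal sketch below illustrates that layout and the frame-to-frame differencing that captures motion; the joint count and frame rate are assumptions, not values taken from the cited works.

```python
# Minimal sketch of a skeleton-based motion representation:
# a sequence of frames, each holding 3D coordinates for a fixed set of joints.
import numpy as np

n_frames, n_joints = 90, 25                        # assumed: ~3 s at 30 fps, 25-joint skeleton
skeleton_seq = np.zeros((n_frames, n_joints, 3))   # (time, joint, x/y/z)

# Motion is a change of position over time, so per-joint displacement is
# obtained by differencing consecutive frames rather than analysing single frames.
velocities = np.diff(skeleton_seq, axis=0)     # shape: (n_frames - 1, n_joints, 3)
speeds = np.linalg.norm(velocities, axis=-1)   # per-joint speed for each frame transition
print(speeds.shape)                            # (89, 25)
```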
“…Proper identification of emotional state can significantly improve the quality of human-computer interfaces. It can be applied to monitoring the psycho-physiological state of individuals, e.g., to assess the level of stress or fatigue, as well as to forensic data analysis [2], advertisement [3], social robotics [4], video conferencing [5], violence detection [6], animation or synthesis of life-like agents [xue2018voice], and many others. Automatic emotion recognition methods utilize various input types, i.e., facial expressions [7, 8, 9], speech [10, 11, 12], gesture and body language [13, 14], and physical signals such as electrocardiogram (ECG), electromyography (EMG), electrodermal activity, skin temperature, galvanic resistance, blood volume pulse (BVP), and respiration [15].…”
Section: Introduction (citation type: mentioning; confidence: 99%)
“…In care homes with elderly patients, for example, interaction of the user with typical device-dependent hardware, or following specific instructions during a biometric scan (e.g., direct contact with a camera, placing a biometric into a specific position, etc.), may not be practical [7, 8]. In other words, the nature of such uncontrolled environments suggests that the biometric designer consider strictly natural and transparent systems that mitigate users’ non-cooperative behavior, providing enhanced performance.…”
Section: Introduction (citation type: mentioning; confidence: 99%)