2021
DOI: 10.48550/arxiv.2107.12744
Preprint

Real-Time Activity Recognition and Intention Recognition Using a Vision-based Embedded System

Sahar Darafsh,
Saeed Shiry Ghidary,
Morteza Saheb Zamani

Abstract: With the rapid growth of digital technologies, human activity recognition and intention recognition have become essential across many fields of study, particularly in smart environments. In this study, we equip an activity recognition system with the ability to recognize intentions by encoding each individual's pace of movement into the image representation. Applying this technology in environments such as elevators and automatic doors makes it possible to identify those who intend to pass the automatic door fr…
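The abstract describes inferring intent from a person's pace of movement toward an automatic door. The paper itself encodes this speed cue into the image representation for a learned model; as a much simpler illustration of the underlying idea, the sketch below thresholds a tracked pedestrian's average approach speed toward the door. The function name, coordinate convention, and threshold are illustrative assumptions, not details from the paper.

```python
import math

def intent_from_track(centroids, fps=30.0, speed_thresh=0.5):
    """Classify pass/no-pass intent from a pedestrian's centroid track.

    centroids: list of (x, y) positions per video frame, in metres,
    with the door at the origin. A track is labelled "intends to pass"
    when the average speed toward the door exceeds speed_thresh (m/s).
    All parameters here are hypothetical placeholders.
    """
    if len(centroids) < 2:
        return False
    # Distance to the door (origin) at each frame.
    dist = [math.hypot(x, y) for x, y in centroids]
    # Per-frame approach speed: positive when the distance shrinks,
    # i.e. the person is moving toward the door.
    approach = [(a - b) * fps for a, b in zip(dist, dist[1:])]
    return sum(approach) / len(approach) > speed_thresh
```

A track stepping steadily toward the origin is labelled as intending to pass, while one moving laterally at a constant distance is not; a learned model as in the paper would replace this hand-set threshold.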

Cited by 1 publication (1 citation statement). References 32 publications (41 reference statements).
“…For example, Goncalves et al. developed a deep-learning-based approach for the recognition of human motion (such as walking, stopping, and left/right turns) with an accuracy of 93%, but this method uses a walker-mounted RGB-D camera to obtain lower-body motion video and thus cannot be used for wearable robot control [41]. Darafsh et al. developed a vision system that uses videos from stationary-mounted cameras to distinguish people who intend to pass through an automatic door from those who do not [42]. Although this work also concerns human intent recognition, its camera placement, data processing method, and intended application all differ significantly from the approach presented in this paper.…”
Section: Discussion
confidence: 99%