2018
DOI: 10.1109/tsmc.2017.2660547

A Gait Recognition Method for Human Following in Service Robots

Cited by 62 publications (26 citation statements)
References 35 publications
“…Human action recognition is necessary for various computer vision applications that demand information about people's behavior, including surveillance for public safety, human-computer interaction applications, and robotics [1], [2] and [3]. However, action recognition in colored images is a challenging task due to several factors, such as complex background, illumination variation, and clothing color, which make it difficult to segment the human body in every scene.…”
Section: Introduction (mentioning)
confidence: 99%
“…Several works [15][16][17][18][19][20] use the first approach to approximate the distance information by triangulation methods applied on two or more RGB views of the same scene. However, the most commonly used visual sensors for person detection are RGB-D cameras [21][22][23][24][25][26][27][28][29][30][31][32], which can capture both RGB images and depth maps by exploiting infrared light. Several methods employ sensor fusion techniques to merge information from different kinds of sensing systems.…”
Section: Person Following (mentioning)
confidence: 99%
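The excerpt above refers to approximating a person's distance by triangulation between two RGB views. As a minimal illustrative sketch (not the cited papers' code), the snippet below applies the standard pinhole stereo relation Z = f·B/d, assuming a calibrated, rectified camera pair; the function name and example numbers are invented for illustration.

```python
import numpy as np

def stereo_depth(disparity_px, focal_length_px, baseline_m):
    """Depth from the pinhole stereo relation Z = f * B / d.

    disparity_px    : horizontal pixel shift of the same point between views
    focal_length_px : camera focal length expressed in pixels
    baseline_m      : distance between the two camera centres in metres
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    # Points with no measurable disparity are effectively at infinity.
    return np.where(disparity_px > 0,
                    focal_length_px * baseline_m / np.maximum(disparity_px, 1e-6),
                    np.inf)

# Example: a point shifted by 24 px between cameras with f = 600 px and a
# 10 cm baseline lies at roughly 2.5 m.
print(stereo_depth(24, 600.0, 0.10))
```

An RGB-D camera removes this computation entirely by returning the depth map directly, which is why the cited person-following systems favour it.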
“…Focusing on vision-based methods, different strategies can be adopted to detect the person in the environment. Mi et al [26], Ren et al [25], and Chi et al [29] all adopted the Microsoft Kinect SDK, which directly provides skeleton positions. Satake et al [16,17] used manually designed templates to extract relevant features and find the target location.…”
Section: Person Following (mentioning)
confidence: 99%
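The skeleton trackers mentioned above report joint positions in camera coordinates. Purely as a sketch under that assumption (it is neither the cited papers' implementation nor a real Kinect SDK call), the snippet below shows how a tracked torso position could be converted into range and bearing and then into a simple proportional follow command; all names and gains are hypothetical.

```python
import math

def follow_command(torso_xyz, desired_range=1.2, k_lin=0.8, k_ang=1.5):
    """Turn a torso joint position (x right, y up, z forward, in metres)
    into a (linear, angular) velocity command for a follower robot."""
    x, _, z = torso_xyz
    range_to_person = math.hypot(x, z)     # planar distance to the target
    bearing = math.atan2(x, z)             # angle off the camera's forward axis
    linear_v = k_lin * (range_to_person - desired_range)  # close the range gap
    angular_v = -k_ang * bearing           # rotate to keep the person centred
    return linear_v, angular_v

# Person 2 m ahead and 0.3 m to the right: move forward, turn slightly right.
print(follow_command((0.3, 0.0, 2.0)))
```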
“…On the other hand, depth-sensor technology, such as the Kinect, enables real-time 3D human skeleton tracking [10], [11]. Supported by recent advances in robot vision and artificial intelligence, it is possible to monitor a patient's health while they carry out daily activities [12]-[14].…”
Section: Introduction (mentioning)
confidence: 99%