2015 57th International Symposium ELMAR (ELMAR)
DOI: 10.1109/elmar.2015.7334534
Human action identification and search in video files

Cited by 3 publications (2 citation statements)
References 13 publications
“…However, aimed at a huge video database, it is a very lengthy procedure to allot meta-data manually to videos via watching every video. 7 With the incrementing power of computers, advanced methodologies depend on designing actions by recording the movement of diverse perceptions aimed at acquiring an action design that must be recognized. 8 The target of deploying machine learning (ML) aimed at HAR is to know the actions that have been executed in the video.…”
Section: Introduction
confidence: 99%
“…7 With the incrementing power of computers, advanced methodologies depend on designing actions by recording the movement of diverse perceptions aimed at acquiring an action design that must be recognized. 8 The target of deploying machine learning (ML) aimed at HAR is to know the actions that have been executed in the video. The deep learning (DL) methodologies have efficient performance analogized to the conventional ML methodologies.…”
Section: Introduction
confidence: 99%