2018 Fifth International Conference on Parallel, Distributed and Grid Computing (PDGC)
DOI: 10.1109/pdgc.2018.8745852
Human Detection and Motion Tracking Using Machine Learning Techniques: A Review

Cited by 5 publications (1 citation statement) | References 6 publications
“…The subtraction process cannot handle fast-moving objects: when a moving object stops or moves too quickly, background and foreground cannot be distinguished for a few frames. Moreover, thresholding the frame difference is a rigid approach that may cause activity of interest to the researcher to go unnoticed (Anandhalli & Baligar, 2015; Mahajan & Padha, 2018). The most widespread technique for separating foreground from background is Background Subtraction (BS), which extracts the low-entropy pixels of the moving object without any prior information about the scene; it has proven effective with stationary cameras and highly precise at the pixel, frame, and region levels (Kumar & Yadav, 2016b).…”
Section: Introduction
Confidence: 99%
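As a rough illustration of the limitation the citation statement describes, the sketch below implements naive frame differencing with a fixed binary threshold using OpenCV. The video file name and the threshold value of 25 are placeholder assumptions for the example, not values taken from the reviewed paper or the citing work.

```python
# Minimal frame-differencing sketch (assumes OpenCV is installed and a
# video file "traffic.mp4" exists; both are illustrative assumptions).
import cv2

cap = cv2.VideoCapture("traffic.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Absolute difference against the previous frame; a hard binary
    # threshold decides foreground vs. background, which is the rigid
    # step the quoted statement criticises.
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # Objects that stop moving (or move too fast between frames) leave
    # little usable difference, so they disappear from the mask.
    cv2.imshow("foreground mask", mask)
    prev_gray = gray
    if cv2.waitKey(30) & 0xFF == 27:  # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

For the adaptive background-subtraction approach the statement contrasts this with, OpenCV's built-in `cv2.createBackgroundSubtractorMOG2()` can replace the manual differencing and thresholding steps, maintaining a background model across frames instead of comparing only consecutive pairs.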