2018 Global Smart Industry Conference (GloSIC)
DOI: 10.1109/glosic.2018.8570061
Tracking of Moving Objects With Regeneration of Object Feature Points

Cited by 13 publications (4 citation statements)
References 12 publications
“…The system is tested against objects of two categories, namely humans and vehicles in motion. Lychkov et al. [4] proposed a novel object tracking system that addresses the issue of losing track of an object's feature points. The authors apply the concept of regeneration in living beings to the detection of objects in video.…”
Section: Related Work
confidence: 99%
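The regeneration idea referenced in this excerpt can be sketched as a simple loop: track feature points from frame to frame and re-detect new points whenever too many have been lost. The following is an illustrative Python simulation only, not the authors' implementation; the thresholds, survival probability, and function names are all assumptions:

```python
import random

# Assumed illustrative parameters, not values from the paper
REGEN_THRESHOLD = 10   # regenerate when fewer points survive
INITIAL_POINTS = 25    # points produced by the feature detector

def detect_features(n):
    """Stand-in for a corner detector (e.g. Shi-Tomasi):
    returns n fresh feature points."""
    return [object() for _ in range(n)]

def track_frame(points, survival_prob=0.8, rng=None):
    """Simulate one frame of tracking: each feature point is
    successfully matched with probability survival_prob."""
    rng = rng or random.Random(0)
    return [p for p in points if rng.random() < survival_prob]

def track_with_regeneration(num_frames=20):
    """Track over a sequence; when the surviving point count drops
    below REGEN_THRESHOLD, 'regenerate' by re-detecting features
    (in a real tracker, inside the object's last known region)."""
    points = detect_features(INITIAL_POINTS)
    rng = random.Random(42)
    history = []
    for _ in range(num_frames):
        points = track_frame(points, rng=rng)
        if len(points) < REGEN_THRESHOLD:
            points += detect_features(INITIAL_POINTS - len(points))
        history.append(len(points))
    return history
```

Because of the regeneration step, the number of tracked points never collapses toward zero, which is the failure mode the cited system addresses.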
“…Detecting events and tracking situations are performed via various methods depending on the type of data (Deviatkov and Lychkov, 2017; Lychkov, Alfimtsev and Sakulin, 2018). To detect events in a stream of text documents, the authors proposed a method based on incremental clustering (Andreev, Berezkin and Kozlov, 2017).…”
Section: Detecting Events In Data Streams
confidence: 99%
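Incremental clustering of a document stream, as mentioned in this excerpt, can be illustrated with a single-pass scheme: each incoming document joins the most similar existing cluster if its similarity to that cluster's centroid exceeds a threshold, and otherwise starts a new cluster (a candidate event). This is a minimal generic sketch, not the cited authors' method; the threshold value and bag-of-words representation are assumptions:

```python
import math
from collections import Counter

def cosine_sim(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def incremental_cluster(docs, threshold=0.3):
    """Single-pass incremental clustering of a document stream.
    A document joins the nearest cluster (by centroid similarity)
    above the threshold, else it opens a new cluster."""
    centroids, clusters = [], []
    for doc in docs:
        vec = Counter(doc.lower().split())
        best, best_sim = None, threshold
        for i, c in enumerate(centroids):
            s = cosine_sim(vec, c)
            if s > best_sim:
                best, best_sim = i, s
        if best is None:
            centroids.append(vec)
            clusters.append([doc])
        else:
            centroids[best].update(vec)  # fold the document into the centroid
            clusters[best].append(doc)
    return clusters
```

Because each document is processed exactly once, the scheme suits unbounded streams where re-clustering from scratch is infeasible.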
“…The ill-posed definition of visual tracking (i.e., model-free tracking, on-the-fly learning, single camera, 2D information) is more challenging in complicated real-world scenarios, which may include arbitrary classes of target appearance and motion (e.g., human, drone, animal, vehicle), different imaging characteristics (e.g., static/moving camera, smooth/abrupt movement, camera resolution), and changes in environmental conditions (e.g., illumination variation, background clutter, crowded scenes). Although traditional visual tracking methods utilize various frameworks, such as discriminative correlation filters (DCF) [21]-[24], silhouette tracking [25], [26], kernel tracking [27]-[29], point tracking [30], [31], and so forth, these methods cannot provide satisfactory results in unconstrained environments. The main reasons are target representation by handcrafted features (such as the histogram of oriented gradients (HOG) [32] and Color Names (CN) [33]) and inflexible target modeling.…”
Section: Introduction
confidence: 99%
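The HOG features named in this excerpt are built from magnitude-weighted histograms of gradient orientations computed over small cells. The following NumPy sketch shows that building block only; it is a simplified illustration (no block normalization, no overlapping blocks, assumed 9-bin unsigned orientation), not a full HOG descriptor:

```python
import numpy as np

def hog_cell_histogram(patch: np.ndarray, n_bins: int = 9) -> np.ndarray:
    """Unsigned gradient-orientation histogram for one cell, the basic
    building block of a HOG descriptor (simplified sketch)."""
    gy, gx = np.gradient(patch.astype(float))       # image gradients
    magnitude = np.hypot(gx, gy)
    # unsigned orientation folded into [0, 180) degrees
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    bins = (orientation / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    for b, m in zip(bins.ravel(), magnitude.ravel()):
        hist[b] += m                                 # magnitude-weighted vote
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

A patch with a purely horizontal intensity ramp produces gradients at 0 degrees, so all votes land in the first bin; it is exactly this hand-designed, fixed encoding that the excerpt contrasts with learned representations.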