2010 IEEE Safety, Security and Rescue Robotics (SSRR)
DOI: 10.1109/ssrr.2010.5981554
FPGA-based real-time moving object detection for walking robots

Cited by 7 publications (4 citation statements)
References 9 publications
“…Detection technique and application:
[6]–[11] Static camera — (i) background subtraction (ii) using GMM, ViBE, and so forth.
[23] Moving robot — (i) detecting moving objects using (ii) optical flow and frame differencing.
[24] UAV — (i) detecting and tracking object features […] to detect moving vehicles in aerial videos.
The research in [16] presented two different approaches to detect and track moving vehicles and persons using a Histogram of Oriented Gradients (HOG) based classifier.…”
Section: Camera Platform
Confidence: 99%
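The excerpt above lists background subtraction (GMM, ViBE, and so forth) as the standard static-camera technique. As a rough illustration only — the cited works use full GMM/ViBE models, not this simplification — a running-average background model captures the same idea: maintain an estimate of the static background and flag pixels that deviate from it.

```python
import numpy as np

def background_subtract(frames, alpha=0.1, thresh=30.0):
    """Flag foreground pixels against a running-average background.

    A deliberately simplified stand-in for GMM/ViBE: the background
    is an exponential moving average of past frames, and any pixel
    far from it is marked as foreground.
    """
    bg = frames[0].astype(np.float64)
    masks = []
    for f in frames[1:]:
        f = f.astype(np.float64)
        masks.append(np.abs(f - bg) > thresh)  # foreground test
        bg = (1.0 - alpha) * bg + alpha * f    # adapt the background
    return masks

# Synthetic check: a bright square drifting right over a static scene.
frames = [np.zeros((32, 32)) for _ in range(5)]
for i in range(1, 5):
    frames[i][5:10, 5 + i:10 + i] = 255.0
masks = background_subtract(frames)
```

Because the background adapts slowly (`alpha` small), a moving object stays ahead of the model and is segmented, while pixels it vacates are gradually reabsorbed into the background.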
“…They utilized background subtraction techniques such as the Gaussian Mixture Model (GMM) and ViBE (Visual Background Extractor) to perform foreground object segmentation in static-background video. The work in [23] proposed FPGA-based moving object detection for a walking robot. They implemented ego-motion estimation using an optical flow technique and frame differencing in a hardware/software co-design system.…”
Section: Camera Platform
Confidence: 99%
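The excerpt above describes the pipeline of [23]: estimate the camera's ego-motion, then apply frame differencing so that only independently moving objects remain. A minimal sketch of that idea, substituting phase correlation for optical flow (an assumption — [23] implements an optical-flow technique in a hardware/software co-design, not this method): estimate the global image shift caused by camera motion, undo it, then difference the aligned frames.

```python
import numpy as np

def estimate_shift(ref, cur):
    """Estimate the integer global translation between two frames via
    phase correlation (a stand-in for the optical-flow ego-motion
    estimate in [23]): the peak of the normalized cross-power spectrum
    sits at the negative of the camera-induced shift."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(cur))
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peak coordinates to signed shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def motion_mask(ref, cur, thresh=30.0):
    """Frame differencing after compensating camera ego-motion."""
    dy, dx = estimate_shift(ref, cur)
    aligned = np.roll(cur, (dy, dx), axis=(0, 1))  # undo camera shift
    return np.abs(aligned.astype(float) - ref.astype(float)) > thresh

# Synthetic check: the whole scene shifts by (2, 3) pixels (camera
# motion) and a small independent object appears in the new frame.
ref = np.zeros((64, 64))
ref[10:20, 10:20] = 200.0                    # static scene structure
ref[30:40, 15:25] = 120.0
cur = np.roll(ref, (2, 3), axis=(0, 1))      # simulated ego-motion
cur[40:44, 40:44] = 255.0                    # independently moving object
dy, dx = estimate_shift(ref, cur)
mask = motion_mask(ref, cur)
```

After compensation, the static scene cancels in the difference and only the independently moving object survives the threshold.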
“…Research in ego-motion estimation has mainly focused on ground robots [35], [36], [41], whose application requirements differ significantly from the case where the ego-motion estimation of a UAV is targeted. Namely, for the UAV case the problem can be simplified: because of the UAV's altitude, it can safely be assumed that the detected feature points lie on the same plane relative to the camera.…”
Section: E. Comparison to Existing Work
Confidence: 99%
“…The resulting system consumes around 80% of the available resources, achieving a maximum of 28 frames/s while using a camera with 320 × 256 pixel resolution. In [41], the ego-motion estimation of a walking robot is performed as part of a larger system. The employed motion model limits the degrees of freedom to four and therefore has difficulty differentiating between rotation and translation along the x and y axes.…”
Section: E. Comparison to Existing Work
Confidence: 99%
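A four-degree-of-freedom motion model of the kind described — assumed here to be a similarity transform (translation tx, ty, in-plane rotation θ, and scale s; the exact parameterization used in [41] is not quoted) — is linear in a = s·cosθ and b = s·sinθ, so it can be fit with a single least-squares solve over feature correspondences:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares fit of a 4-DOF similarity transform.

    Assumed parameterization ([41] is not quoted in detail):
        x' = a*x - b*y + tx,   y' = b*x + a*y + ty,
    with a = s*cos(theta), b = s*sin(theta). The model is linear in
    (a, b, tx, ty), so one least-squares solve over the point
    correspondences recovers all four degrees of freedom.
    """
    A, rhs = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, -y, 1.0, 0.0]); rhs.append(xp)
        A.append([y,  x, 0.0, 1.0]); rhs.append(yp)
    params, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(rhs), rcond=None)
    return params  # (a, b, tx, ty)

# Recover a known transform (s = 1.1, theta = 0.2 rad, t = (3, -1)).
s, th = 1.1, 0.2
a_true, b_true = s * np.cos(th), s * np.sin(th)
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 3.0)]
dst = [(a_true * x - b_true * y + 3.0, b_true * x + a_true * y - 1.0)
       for (x, y) in src]
a, b, tx, ty = fit_similarity(src, dst)
```

This also makes the limitation noted in the excerpt concrete: the model only represents in-plane motion, so a small out-of-plane camera rotation produces image motion nearly indistinguishable from an x or y translation, and the four parameters cannot separate the two.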