2019
DOI: 10.1109/jsen.2018.2884321
Robust Robot Pose Estimation for Challenging Scenes With an RGB-D Camera

Cited by 32 publications
(13 citation statements)
References 31 publications
“…The results of the image convolution shown in Figure 7 are adopted as the horizontal and vertical Haar wavelet responses of each pixel. The size of the Haar wavelet is 4S, as shown in Equation (12). The scale value S at which the feature point is located is calculated from the size of the current template.…”
Section: Determination of Foreground Feature Points
confidence: 99%
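The response computation described in this excerpt can be sketched with an integral image, the standard trick for evaluating box-filter (Haar) responses in constant time per pixel. The function below is a simplified illustration of a Haar wavelet of side 4S with -1/+1 halves, not the cited paper's implementation; the image size and scale value are placeholders.

```python
import numpy as np

def haar_responses(img, s):
    """Horizontal (dx) and vertical (dy) Haar wavelet responses at scale s.

    The wavelet side length is 4*s: the left/top half of the box is
    weighted -1 and the right/bottom half +1, so each response
    approximates a first-order image derivative at that scale.
    """
    size = 4 * s
    half = size // 2
    # Integral image with a leading zero row/column for easy box sums.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)

    def box(y, x, h, w):
        # Sum of img[y:y+h, x:x+w] in O(1) via the integral image.
        return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

    H, W = img.shape
    dx = np.zeros((H, W))
    dy = np.zeros((H, W))
    for y in range(0, H - size):
        for x in range(0, W - size):
            dx[y, x] = box(y, x + half, size, half) - box(y, x, size, half)
            dy[y, x] = box(y + half, x, half, size) - box(y, x, half, size)
    return dx, dy
```

On a vertical step edge, dx responds strongly while dy vanishes, which is exactly the directional behavior the scale-assignment step relies on.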
“…It is invariant to geometric transformations and noise, and it is also stable under viewpoint changes [11]. However, its feature vector is 128-dimensional, so it is computationally expensive and slow [12,13], which makes it unsuitable for navigation and other tasks with strict real-time requirements. Researchers have proposed many improved methods on this basis.…”
Section: Introduction
confidence: 99%
“…When considering factors in feature extraction algorithms (e.g., SIFT, SURF) such as speed, stability, rotation invariance, and so on [48], the ORB algorithm [15] is selected to extract and match point features using the OpenCV library. By combining the RANSAC [43] algorithm with a fundamental matrix constraint, we reject the mismatches and obtain an initial set of matching points.…”
Section: B. Extraction and Matching of Point and Line Features
confidence: 99%
“…In recent years, environmental perception technology for robotic systems has attracted a lot of attention from researchers, and most studies focus on the space above ground. With the development of computer science, more and more environmental perception sensors have been developed for the robotics and artificial intelligence fields, including radio frequency identification (RFID) and its sensor networks, 1,2 lidar sensors, 3–7 red-green-blue-depth (RGB-D) cameras, 8,9 millimeter-wave radar, 3,10 and so on. In addition, the related perception technologies can be classified into two main categories: one is vision based and the other is sensor-fusion based.…”
Section: Introduction
confidence: 99%
“…11 Recently, color-depth cameras have been widely used in robotic systems for environmental perception applications such as pose estimation, 9 detection and tracking of humans, 12 autonomous navigation, 13 and simultaneous localization and mapping (SLAM). 8 Compared with other cameras, a single color-depth camera is able to produce a transformation matrix with an absolute scale because of its additional depth information, 8,14,15 which helps robots achieve higher sensing accuracy at lower system cost. However, cameras are highly affected by lighting conditions, which makes these optical sensors unsuitable for use in dark environments.…”
Section: Introduction
confidence: 99%