2015
DOI: 10.1080/23080477.2015.11665643

An FPGA Implementation of a Robot Control System with an Integrated 3D Vision System

Abstract: Robot decision making and motion control are commonly based on visual information in various applications. Position-based visual servo is a technique for vision-based robot control which operates in the 3D workspace, uses real-time image processing to perform feature extraction, and returns the pose of the object for positioning control. To handle the computational burden of the vision-sensor feedback, we design an FPGA-based motion-vision integrated system that employs dedicated hardware circ…
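As background for the abstract's description of position-based visual servo (PBVS), below is a minimal Python sketch of the textbook PBVS control law, not the paper's FPGA implementation. The names t_err and R_err are assumptions standing in for the translation and rotation errors derived from the vision system's pose estimate; per the abstract, it is this pose-estimation feedback that the dedicated hardware is designed to accelerate.

```python
import numpy as np

def rotation_to_axis_angle(R):
    """Axis-angle vector (theta * u) of a rotation matrix; degenerate near theta = pi."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    u = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta * u

def pbvs_velocity(t_err, R_err, gain=0.5):
    """Classic PBVS law v = -lambda * e, with e = [t_err; theta*u].
    t_err: 3-vector translation error; R_err: 3x3 rotation error.
    Both are assumed to come from the vision pipeline's pose estimate."""
    e = np.concatenate([t_err, rotation_to_axis_angle(R_err)])
    return -gain * e  # 6-vector [linear; angular] camera velocity command
```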

Cited by 5 publications (2 citation statements)
References 13 publications
“…The motors are 24 V geared DC motors. A similar FPGA-based motion controller for DC motors is used in [15], with the addition of image-processing IP from stereo CMOS image sensors. A further example is [16], where nonlinear adaptive and 'computed torque' algorithms are partitioned between a DSP and an FPGA.…”
Section: FPGAs/SoCs for Accelerating DSP in Robotics
confidence: 99%
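For readers unfamiliar with the 'computed torque' method mentioned in the statement above, the following is a sketch of the standard computed-torque (inverse-dynamics) law in Python. It is a generic textbook formulation, not the partitioned DSP/FPGA implementation of [16]; the callables M, C, and g are placeholders for a manipulator dynamics model that the citation does not supply.

```python
import numpy as np

def computed_torque(q, dq, q_des, dq_des, ddq_des, M, C, g, Kp, Kd):
    """Standard computed-torque law:
    tau = M(q) @ (ddq_des + Kd @ de + Kp @ e) + C(q, dq) @ dq + g(q),
    with e = q_des - q and de = dq_des - dq.
    M, C, g are caller-supplied model callables (placeholders)."""
    e = q_des - q                    # joint position error
    de = dq_des - dq                 # joint velocity error
    a = ddq_des + Kd @ de + Kp @ e   # stabilized reference acceleration
    return M(q) @ a + C(q, dq) @ dq + g(q)  # feedback-linearizing torque
```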
“…In addition, state sensing is a key issue in control design. Recently, many studies of wheeled mobile robots have used a camera mounted directly on the robot (Klein and Murray, 2007; Zhang et al., 2015; Sun et al., 2017) to sense its location and orientation via the parallel tracking and mapping (PTAM) algorithm (Klein and Murray, 2007), visual simultaneous localization and mapping (vSLAM) (Durrant-Whyte and Bailey, 2006; Bailey and Durrant-Whyte, 2006; Havangi et al., 2014), visual odometry (Nister et al., 2006; Scaramuzza and Fraundorfer, 2011; Leutenegger et al., 2015), or stereo vision (Chen et al., 2015). The PTAM and SLAM algorithms not only estimate the location of a moving mobile robot but also build a map along its trajectory.…”
Section: Introduction
confidence: 99%