2020 17th International Conference on Ubiquitous Robots (UR)
DOI: 10.1109/ur49135.2020.9144932
Visual Perception Framework for an Intelligent Mobile Robot

Cited by 12 publications (5 citation statements) | References 10 publications
“…The study by [35] described the current representation of perception as one of the AMR applications drawbacks. Perception is essential when studying mobile robots [7].…”
Section: Perception
Confidence: 99%
“…Particularly, this idea is becoming key in the development of mobile robots. 632,633 The TEM cannot be considered a mobile robot, although the constantly changing interaction with the environment (i.e., changing magnification and sample position) is comparable and could surely benefit from perceptual preprocessing. Even the holder handling and loading could be directly tackled with the mobile robot perspective!…”
Section: Artificial Human-like Systems: the Path Towards Automation?
Confidence: 99%
“…In the field of robot recognition, Lee et al [2] proposed a visual perception framework integrating multiple deep neural networks, which made the distributed application development of intelligent mobile robots more accessible and successfully recognized people, objects and human posture. In order to quickly and accurately detect the captured object and overcome the influence of complex interference in the environment, Song et al [3] proposed a new three-mode image fusion strategy, namely the visible depth-heat significance object detection.…”
Section: Introduction
Confidence: 99%