2018
DOI: 10.1109/access.2018.2846554
Localization and Navigation for Autonomous Mobile Robots Using Petri Nets in Indoor Environments

Cited by 54 publications (26 citation statements). References 27 publications.
“…There are many ways to measure human and environmental information, such as a stereo camera, depth camera, laser range finder (LRF), and global positioning system (GPS). Moreover, there are many ways to understand scenes, such as simultaneous localization and mapping (SLAM) using Petri nets [52] and considering relocation performance [53].…”
Section: Pre-processing: Scene Classification
confidence: 99%
“…In (2) there was vertical support of the motor (3) and it was connected to a propeller (4). At the rear (stern), there was a servo motor (5) connected to the rudder (6). Further, from the stern comes the data cable (7) and a mini float (8), connecting the probes (9).…”
Section: Vector: The Boat
confidence: 99%
“…While in [5], a probabilistic model is used to identify scenes and objects, and a hierarchical model to process the robot's location. In [4], radio-frequency identification (RFID) is used to locate a mobile robot in an indoor environment, with incidence matrices as a map of the paths. Furthermore, [6] proposed a Recurrent Convolutional Neural Network (RCNN) model that uses the fusion of a 2D laser signal with an inertial sensor signal to train and improve prediction scenarios.…”
Section: Introduction
confidence: 99%
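The citation statements above refer to modeling an indoor path map with a Petri net, where the incidence matrix encodes how firing a transition (a move between locations) changes the marking (the robot's position). The following is a minimal illustrative sketch of that general idea only; the place/transition layout and names are assumptions, not the cited authors' implementation.

```python
import numpy as np

# Illustrative Petri-net path map (assumed layout, not the paper's model):
# places p0=Room A, p1=Corridor, p2=Room B; transitions t0=A->Corridor,
# t1=Corridor->B. Rows are places, columns are transitions.
pre = np.array([[1, 0],    # t0 consumes the token at p0 (Room A)
                [0, 1],    # t1 consumes the token at p1 (Corridor)
                [0, 0]])
post = np.array([[0, 0],
                 [1, 0],   # t0 deposits a token at p1 (Corridor)
                 [0, 1]])  # t1 deposits a token at p2 (Room B)
C = post - pre             # incidence matrix: net effect of each transition

marking = np.array([1, 0, 0])  # one token in Room A: robot starts there


def fire(marking, t):
    """Fire transition t if enabled; return the new marking."""
    if np.all(marking >= pre[:, t]):
        return marking + C[:, t]
    raise ValueError(f"transition t{t} is not enabled")


marking = fire(marking, 0)  # move Room A -> Corridor
marking = fire(marking, 1)  # move Corridor -> Room B
print(marking.tolist())     # -> [0, 0, 1]: robot is now in Room B
```

The state equation `marking' = marking + C·f` (with `f` a firing-count vector) is what makes the incidence matrix usable as a compact map: reachable locations correspond to reachable markings.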