The 6th International Conference on Soft Computing and Intelligent Systems, and the 13th International Symposium on Advanced Intelligent Systems (SCIS-ISIS), 2012
DOI: 10.1109/scis-isis.2012.6505325

Self-localization based on image features of omni-directional image

Abstract: An omni-vision system using an omni-mirror is a popular way to acquire environmental information around an autonomous mobile robot. In the RoboCup Soccer Middle Size League in particular, self-localization methods based on extracting the white lines of the soccer field are widely used. We have studied a self-localization method based on image features such as SIFT and SURF. Comparative studies with a conventional self-localization method based on white line extraction are conducted. Compared to the self-localizati…
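As a rough illustration of the kind of feature-based matching the abstract describes, the sketch below uses OpenCV's SIFT detector to match a query omni-directional image against a stored reference image. The file paths, ratio threshold, and overall flow are assumptions for illustration only and are not taken from the paper.

```python
# Hypothetical sketch: matching SIFT features from an omni-directional image
# against a stored reference image. Paths and threshold are illustrative.
import cv2

def match_features(query_path: str, reference_path: str, ratio: float = 0.75):
    """Return SIFT keypoints and the matches that pass Lowe's ratio test."""
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_q, des_q = sift.detectAndCompute(query, None)
    kp_r, des_r = sift.detectAndCompute(reference, None)

    # Brute-force matcher with the ratio test to keep only distinctive matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn_matches = matcher.knnMatch(des_q, des_r, k=2)
    good = []
    for pair in knn_matches:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return kp_q, kp_r, good
```

In a localization setting of this kind, the reference image (or pose) that yields the most good matches would then be taken as the robot's estimated location.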

Cited by 4 publications (3 citation statements)
References: 6 publications
“…Su used a global descriptor to perform localization. Self-localization based on image data of a robot soccer field was carried out by Hibino (Hibino, Yuta, Takahashi, & Maeda, 2012). Image features taken from the omni camera are compared with a reference image.…”
Section: Pendahuluan (Introduction), unclassified
“…Each robot must comprise a completely independent vision system as well as self-contained powering and motoring mechanisms, and autonomously accomplish certain behaviors; for example, navigating the game field and following the ball. Therefore, the omni-vision system is generally used to acquire environmental information around an autonomous mobile robot for it to accomplish self-localization [4][5][6]. Fig.…”
Section: Introduction, mentioning
confidence: 99%
“…CR_m = sin(θ_m1 + θ_m2) · sin(θ_m2 + θ_m3) / [sin θ_m2 · sin(θ_m1 + θ_m2 + θ_m3)]  (5); CR_c = sin(θ_c1 + θ_c2) · sin(θ_c2 + θ_c3) / [sin θ_c2 · sin(θ_c1 + θ_c2 + θ_c3)]  (6). To locate the robot (Point O) on the map, the CR principle was applied twice. First, A was used as a focus to determine CR(A) for both the camera and the map, indicated as CR_c(A) and CR_m(A), respectively, shown by the blue lines in Figs.…”
mentioning
confidence: 99%
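To make the cross-ratio relations quoted in Eqs. (5) and (6) concrete, here is a minimal sketch, assuming the three angles separating four rays are already measured in radians; the numeric angle values below are illustrative only and do not come from the cited paper.

```python
import math

def cross_ratio(theta1: float, theta2: float, theta3: float) -> float:
    """Cross-ratio of four rays separated by angles theta1, theta2, theta3,
    in the form of Eqs. (5) and (6) of the quoted statement."""
    return (math.sin(theta1 + theta2) * math.sin(theta2 + theta3)) / (
        math.sin(theta2) * math.sin(theta1 + theta2 + theta3)
    )

# Illustrative (assumed) angles for the map and camera views of four landmarks:
cr_map = cross_ratio(0.30, 0.45, 0.25)  # CR_m, from angles measured on the map
cr_cam = cross_ratio(0.30, 0.45, 0.25)  # CR_c, from angles measured in the image
print(cr_map, cr_cam)
```

Because the cross-ratio is a projective invariant, CR_c should equal CR_m for corresponding landmark configurations, which is what constrains the robot's position when the principle is applied twice as the quotation describes.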