2010
DOI: 10.1613/jair.3094

Best-First Heuristic Search for Multicore Machines

Abstract: To harness modern multicore processors, it is imperative to develop parallel versions of fundamental algorithms. In this paper, we compare different approaches to parallel best-first search in a shared-memory setting. We present a new method, PBNF, that uses abstraction to partition the state space and to detect duplicate states without requiring frequent locking. PBNF allows speculative expansions when necessary to keep threads busy. We identify and fix potential livelock conditions in our approach, proving i…
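
The core idea in the abstract can be pictured with a small sketch: an abstraction function partitions states into blocks, each block keeps its own open list and duplicate table, and a thread only takes a lock when it switches blocks rather than on every expansion. The code below is illustrative only, not the authors' PBNF implementation: `abstract_fn`, `successors`, `heuristic`, `is_goal`, the `WAIT` sentinel, and the batch `quota` are assumed placeholders, and child generation here still goes through a single global lock, which real PBNF avoids by acquiring a block together with its neighboring blocks.

```python
import heapq
import itertools
import threading
import time

WAIT = object()  # sentinel: no free block right now, but other threads still hold work


class NBlock:
    """One partition of the state space under the abstraction function."""

    def __init__(self, block_id):
        self.id = block_id
        self.open = []       # local priority queue of (f, g, tie-break, state)
        self.pending = []    # nodes routed here while another thread holds this block
        self.best_g = {}     # local duplicate table: state -> cheapest g seen
        self.in_use = False


class AbstractionPartitionedSearch:
    """Illustrative sketch only, not the paper's PBNF: threads grab whole blocks of
    the search space and expand them in batches, locking only to switch blocks."""

    def __init__(self, abstract_fn, successors, heuristic, is_goal):
        self.abstract_fn, self.successors = abstract_fn, successors
        self.heuristic, self.is_goal = heuristic, is_goal
        self.blocks, self.busy = {}, 0
        self.lock = threading.Lock()      # held briefly, never while expanding a batch
        self.incumbent = float("inf")     # cost of the best solution found so far
        self._tick = itertools.count()    # tie-breaker so states are never compared

    def _push(self, state, g):
        """Caller holds self.lock.  Duplicates are caught per block: the abstraction
        sends every copy of a state to the same block, so no global table is needed."""
        bid = self.abstract_fn(state)
        blk = self.blocks.setdefault(bid, NBlock(bid))
        if g >= blk.best_g.get(state, float("inf")):
            return
        blk.best_g[state] = g
        entry = (g + self.heuristic(state), g, next(self._tick), state)
        if blk.in_use:
            blk.pending.append(entry)     # folded into the owner's open list later
        else:
            heapq.heappush(blk.open, entry)

    def _acquire(self):
        """Pick the free block whose best open node is most promising."""
        with self.lock:
            free = [b for b in self.blocks.values() if not b.in_use and
                    (b.pending or (b.open and b.open[0][0] < self.incumbent))]
            if not free:
                return None if self.busy == 0 else WAIT
            blk = min(free, key=lambda b: b.open[0][0] if b.open else float("inf"))
            blk.in_use, self.busy = True, self.busy + 1
            for entry in blk.pending:     # fold in work that arrived in the meantime
                heapq.heappush(blk.open, entry)
            blk.pending.clear()
            return blk

    def _release(self, blk):
        with self.lock:
            blk.in_use, self.busy = False, self.busy - 1

    def _worker(self, quota=64):
        while True:
            blk = self._acquire()
            if blk is None:
                return                    # nothing left anywhere below the incumbent
            if blk is WAIT:
                time.sleep(0.001)         # another thread may free up work; retry
                continue
            for _ in range(quota):        # expand a bounded batch without locking
                if not blk.open or blk.open[0][0] >= self.incumbent:
                    break
                _f, g, _t, state = heapq.heappop(blk.open)
                if self.is_goal(state):
                    with self.lock:
                        self.incumbent = min(self.incumbent, g)
                    break
                for child, cost in self.successors(state):
                    with self.lock:       # simplification; PBNF itself avoids this lock
                        self._push(child, g + cost)
            self._release(blk)

    def search(self, start, n_threads=4):
        with self.lock:
            self._push(start, 0)
        workers = [threading.Thread(target=self._worker) for _ in range(n_threads)]
        for t in workers:
            t.start()
        for t in workers:
            t.join()
        return self.incumbent             # float("inf") means no solution was found
```

Because the abstraction maps every copy of a state to the same block, duplicate detection stays local to a block, which is what lets a thread expand a batch of nodes without touching any shared structure.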

Cited by 52 publications (130 citation statements)
References 26 publications (20 reference statements)

“…Thus, wrapper approaches are generally considered to produce better feature subsets. 22 In this study, a wrapper approach based on classifier subset evaluator and BestFirst search method 23 was used for feature reduction.…”
Section: B1 Feature Selection (mentioning)
confidence: 99%
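
As a loose illustration of the wrapper idea this excerpt describes, candidate feature subsets scored by the wrapped classifier and explored with best-first search, here is a small sketch. The dataset, the decision-tree classifier, the cross-validation scoring, and the `max_stale` stopping rule are placeholder choices for the sake of a runnable example, not the cited study's configuration.

```python
import heapq
import itertools

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier


def wrapper_best_first(X, y, estimator, max_stale=5, cv=5):
    """Best-first search over feature subsets, each scored by the wrapped classifier.
    Starts from the empty subset, expands by adding one feature at a time, and stops
    after `max_stale` expansions in a row fail to improve the best score found."""
    n_features = X.shape[1]
    tick = itertools.count()                # tie-breaker for the heap

    def score(subset):
        if not subset:
            return 0.0
        return cross_val_score(estimator, X[:, sorted(subset)], y, cv=cv).mean()

    start = frozenset()
    frontier = [(-score(start), next(tick), start)]   # max-heap via negated scores
    seen = {start}
    best_subset, best_score, stale = start, 0.0, 0
    while frontier and stale < max_stale:
        neg_score, _, subset = heapq.heappop(frontier)
        if -neg_score > best_score:
            best_subset, best_score, stale = subset, -neg_score, 0
        else:
            stale += 1
        for f in range(n_features):         # successors: add one unused feature
            child = subset | {f}
            if child not in seen:
                seen.add(child)
                heapq.heappush(frontier, (-score(child), next(tick), child))
    return sorted(best_subset), best_score


# Placeholder data and classifier, purely to make the sketch runnable end to end.
X, y = load_breast_cancer(return_X_y=True)
chosen, acc = wrapper_best_first(X, y, DecisionTreeClassifier(random_state=0))
print("selected features:", chosen, "cv accuracy: %.3f" % acc)
```
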
“…31 Some of these studies took a multi-thread approach on a multi-core machine. 32 Later, they extended the A* algorithm into a shared-memory parallel environment with a hash function which assigned each state to a unique process.…”
Section: Parallel Planning (mentioning)
confidence: 99%
“…This idea was later developed into the shared-memory parallel planning method. Some of these studies took a multi-thread approach on a multi-core machine. Later, they extended the A* algorithm into a shared-memory parallel environment with a hash function which assigned each state to a unique process. Some researchers put their emphasis on exploiting distributed memory computing clusters.…”
Section: Introduction (mentioning)
confidence: 99%
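
The hash-based scheme these excerpts describe can be sketched as follows: every generated state is hashed to exactly one worker, which keeps its own open and closed lists, so duplicate detection never needs a shared table. This is an illustrative sketch only; the per-worker mailboxes, the blake2b hash, and the stop-at-first-goal termination are assumptions (a faithful parallel A* would also prove the returned solution optimal and detect termination when no solution exists).

```python
import hashlib
import heapq
import queue
import threading

N_WORKERS = 4
inboxes = [queue.Queue() for _ in range(N_WORKERS)]   # one mailbox per worker
done = threading.Event()                              # simplistic stop-at-first-goal flag


def owner(state):
    """Hash each state to exactly one worker, so every duplicate of that state is
    checked against a single worker's local closed list (no shared open list)."""
    digest = hashlib.blake2b(repr(state).encode(), digest_size=8).digest()
    return int.from_bytes(digest, "big") % N_WORKERS


def worker(wid, successors, heuristic, is_goal, results):
    open_heap, closed, tick = [], {}, 0
    while not done.is_set():
        try:   # drain the mailbox; block briefly only if there is nothing to expand
            while True:
                g, state = inboxes[wid].get(block=not open_heap, timeout=0.05)
                if g < closed.get(state, float("inf")):
                    tick += 1
                    heapq.heappush(open_heap, (g + heuristic(state), g, tick, state))
        except queue.Empty:
            pass
        if not open_heap:
            continue
        _f, g, _t, state = heapq.heappop(open_heap)
        if g >= closed.get(state, float("inf")):
            continue                       # stale duplicate already expanded cheaper
        closed[state] = g
        if is_goal(state):
            results.append((g, state))     # first goal found; not proven optimal here
            done.set()
            return
        for child, cost in successors(state):
            inboxes[owner(child)].put((g + cost, child))   # route child to its owner


def search(start, successors, heuristic, is_goal):
    """Seed the owning worker and run all workers; assumes a solution exists,
    since full distributed termination detection is beyond this sketch."""
    results = []
    inboxes[owner(start)].put((0, start))
    threads = [threading.Thread(target=worker,
                                args=(i, successors, heuristic, is_goal, results))
               for i in range(N_WORKERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return min(results, key=lambda r: r[0]) if results else None
```

Routing by hash is what makes all duplicates of a state land at a single owner, so no global closed list or per-expansion lock is needed.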
“…Parallel planning methods (Vrakas, Refanidis, & Vlahavas, 2001; Kishimoto, Fukunaga, & Botea, 2009; Burns, Lemons, Ruml, & Zhou, 2010) aim to speed up the solution of centralized planning problems given access to a distributed computing environment, such as a large cluster. Parallel planning can be performed in MA-STRIPS by applying existing approaches to the underlying STRIPS problem (i.e., ignoring agent identities).…”
Section: Multi-agent Planning (mentioning)
confidence: 99%