2014
DOI: 10.1145/2589481
Learning Probabilistic Hierarchical Task Networks as Probabilistic Context-Free Grammars to Capture User Preferences

Abstract: We propose automatically learning probabilistic Hierarchical Task Networks (pHTNs) in order to capture a user's preferences on plans, by observing only the user's behavior. HTNs are a common choice of representation for a variety of purposes in planning, including work on learning in planning. Our contributions are (a) learning structure and (b) representing preferences. In contrast, prior work employing HTNs considers learning method preconditions (instead of structure) and representing domain physics or sea…

Cited by 24 publications (21 citation statements)
References 26 publications
“…We exploit the connection between representation learning and grammar induction by extending an existing pCFG algorithm [Li et al, 2009] to support feature learning and transfer. It shares some ideas with previous work on grammar induction (e.g., [Wolff, 1982; Langley and Stromsten, 2000; Stolcke, 1994; Vanlehn, 1987]), which searches for the target grammar by adding or merging non-terminal symbols.…”
Section: Discussion (mentioning, confidence: 99%)
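The merging step mentioned in this citation statement can be illustrated with a small sketch. The function name, grammar encoding (a dict mapping each non-terminal to a list of (right-hand side, probability) rules), and the toy grammar below are illustrative assumptions, not the cited algorithm itself; they only show what "merging non-terminal symbols" in a pCFG means mechanically.

```python
# Minimal sketch of the "merge non-terminals" operation from grammar
# induction: two non-terminals judged interchangeable are collapsed
# into one, generalizing the grammar. Names here are illustrative.
from collections import defaultdict

def merge_nonterminals(grammar, a, b, merged="M"):
    """Merge non-terminals `a` and `b` of a pCFG into `merged`.

    `grammar` maps a non-terminal to a list of (rhs, prob) rules,
    where rhs is a tuple of symbols. Occurrences of `a` and `b` on
    right-hand sides are renamed, duplicate rules are pooled, and
    rule probabilities are renormalized per left-hand side.
    """
    pooled = defaultdict(lambda: defaultdict(float))
    for lhs, rules in grammar.items():
        new_lhs = merged if lhs in (a, b) else lhs
        for rhs, prob in rules:
            new_rhs = tuple(merged if s in (a, b) else s for s in rhs)
            pooled[new_lhs][new_rhs] += prob
    out = {}
    for lhs, rules in pooled.items():
        total = sum(rules.values())
        out[lhs] = [(rhs, p / total) for rhs, p in rules.items()]
    return out

# Toy grammar: X and Y expand identically, so merging them loses nothing.
g = {
    "S": [(("X",), 0.5), (("Y",), 0.5)],
    "X": [(("a",), 1.0)],
    "Y": [(("a",), 1.0)],
}
merged = merge_nonterminals(g, "X", "Y")
# merged == {"S": [(("M",), 1.0)], "M": [(("a",), 1.0)]}
```

After the merge, the two S-rules collapse into one and their probabilities pool to 1.0; an induction search would score such candidate merges (e.g., by likelihood of the observed plans or parses) and keep the best one.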
“…In fact, a common strategic error students make in a problem like -3x = 12 is to divide both sides by 3 rather than -3 [Li et al, 2011a]. Based on these observations, we built a representation learner by extending an existing probabilistic context-free grammar (pCFG) learner [Li et al, 2009] to support feature learning and transfer learning. The representation learner is domain general.…”
Section: Chapter 3, Deep Feature Representation Learning (mentioning, confidence: 99%)