2010
DOI: 10.1007/s10462-010-9188-4
From symbolic to sub-symbolic information in question classification

Abstract: Question Answering (QA) is undoubtedly a growing field of current research in Artificial Intelligence. Question classification, a QA subtask, aims to associate a category to each question, typically representing the semantic class of its answer. This step is of major importance in the QA process, since it is the basis of several key decisions. For instance, classification helps reducing the number of possible answer candidates, as only answers matching the question category should be taken into account. This p…

Cited by 133 publications (88 citation statements); references 18 publications (32 reference statements).
“…In 2011, Silva et al [31] worked on a question answering system by using question classification from symbolic to sub-symbolic information. Authors also gave the information about last few year work done on supervised machine learning approaches to question classification.…”
Section: Related Work
confidence: 99%
“…In Ma et al (2015), convolutions are guided by dependencies linking question words, but it is not clear how the word vectors are initialized. In our case, we only use pre-trained word vectors and the output of a parser, avoiding intensive manual feature engineering, as in Silva et al (2010). The accuracy of these models are reported in Tab.…”
Section: Related Work
confidence: 99%
“…They can effectively encode lexical, syntactic and semantic information in learning algorithms. For this purpose, they count the number of substructures shared by two trees.…” [Table 1: QC accuracy (%) and description of SVM (Silva et al, 2010), DCNN (Kalchbrenner et al, 2014), CNNns (Kim, 2014), DepCNN (Ma et al, 2015) and SPTK (Croce et al, 2011) models.]
Section: Tree Kernels-based Lexical Similarity
confidence: 99%