2018 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2018.8489057
Learning Transferable Features For Open-Domain Question Answering

Cited by 12 publications (4 citation statements)
References 15 publications
“…• It is a domain-specific problem, i.e., a prediction model learned from a sub-population (or ICU domain) is likely to fail when tested against data from other populations [32]. Feature transferability is thus an appealing way to provide robustness to the prediction models [33], [34].…”
Section: Methods
confidence: 99%
“…To construct the adjacency matrix A, for each node pair (v i , v j ), we applied cosine similarity based on enriched TF-IDF features as the value A ij . Previous work has applied cosine similarity for vector space models (García-Pablos et al., 2018; Zuin et al., 2018; Bhatia et al., 2016), so we believe it is a suitable method in our case. This way we were able to generate concept-resource edge values (A cr ) and resource-resource edge values (A r ).…”
Section: Adjacency Matrix A
confidence: 98%
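The construction described in this excerpt — pairwise cosine similarity over TF-IDF feature vectors as adjacency-matrix entries — can be sketched as follows. This is a minimal illustration with hypothetical toy vectors, not the cited authors' code; the feature values and dimensions are invented.

```python
import numpy as np

def cosine_similarity_matrix(X):
    """Pairwise cosine similarity between the rows of feature matrix X."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    X_normed = X / np.clip(norms, 1e-12, None)  # guard against zero rows
    return X_normed @ X_normed.T

# Toy TF-IDF-like feature vectors for three nodes (hypothetical values).
X = np.array([
    [1.0, 0.0, 1.0],
    [1.0, 0.0, 1.0],
    [0.0, 1.0, 0.0],
])

# A[i, j] holds the cosine similarity between nodes v_i and v_j.
A = cosine_similarity_matrix(X)
```

Here identical rows yield A[i, j] = 1 and orthogonal rows yield 0, so the matrix directly serves as weighted edge values between node pairs.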
“…In all our experiments, we constrained the Rashomon sets to include only models subject to the same learning algorithms, such as the hypothesis space of all decision trees with depth up to some value. However, employing different methods enables a final model to capture nuances that are particular to all base methods, such as combining a convolutional neural network and an LSTM to capture both temporal and context-aware patterns [Zuin et al., 2018]. However, our method relies on computing feature importance to compare models.…”
Section: Future Work
confidence: 99%
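The CNN-plus-LSTM combination mentioned in this excerpt can be sketched roughly as below. This is a hypothetical PyTorch illustration of the general pattern (a 1-D convolution extracting local context features, followed by an LSTM modeling their order), not a reconstruction of the architecture in Zuin et al. (2018); all layer sizes are invented.

```python
import torch
import torch.nn as nn

class ConvLSTMEncoder(nn.Module):
    """Hypothetical sketch: Conv1d captures local (context-aware) patterns,
    and an LSTM on top captures temporal structure across the sequence."""

    def __init__(self, embed_dim=50, conv_channels=32, hidden_size=64):
        super().__init__()
        self.conv = nn.Conv1d(embed_dim, conv_channels,
                              kernel_size=3, padding=1)
        self.lstm = nn.LSTM(conv_channels, hidden_size, batch_first=True)

    def forward(self, x):
        # x: (batch, seq_len, embed_dim)
        h = torch.relu(self.conv(x.transpose(1, 2)))  # (batch, channels, seq_len)
        out, _ = self.lstm(h.transpose(1, 2))         # (batch, seq_len, hidden)
        return out[:, -1, :]                          # last hidden state

encoder = ConvLSTMEncoder()
features = encoder(torch.randn(4, 10, 50))  # 4 sequences of length 10
```

The resulting per-sequence feature vector could then feed a downstream classifier or ranking head.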