2009
DOI: 10.1007/978-3-642-04180-8_42

Leveraging Higher Order Dependencies between Features for Text Classification

Abstract: Traditional machine learning methods only consider relationships between feature values within individual data instances while disregarding the dependencies that link features across instances. In this work, we develop a general approach to supervised learning by leveraging higher-order dependencies between features. We introduce a novel Bayesian framework for classification named Higher Order Naive Bayes (HONB). Unlike approaches that assume data instances are independent, HONB leverages co-occurrence relatio…
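The core idea — features linked *across* instances through shared co-occurrences — can be illustrated with a toy sketch. The corpus, variable names, and second-order construction below are invented for illustration and are not taken from the paper; HONB's actual probability estimates are not reproduced here.

```python
import numpy as np

# Toy corpus: each document is a set of terms (invented example).
docs = [
    {"stock", "market"},
    {"market", "economy"},
    {"economy", "inflation"},
]
terms = sorted(set().union(*docs))
index = {t: i for i, t in enumerate(terms)}

# Binary term-document incidence matrix A (terms x docs).
A = np.zeros((len(terms), len(docs)), dtype=int)
for d, doc in enumerate(docs):
    for t in doc:
        A[index[t], d] = 1

# First-order co-occurrence: terms sharing at least one document.
C1 = (A @ A.T > 0).astype(int)
np.fill_diagonal(C1, 0)

# Second-order links: term pairs connected through a shared
# intermediate term even though they never appear in the same
# document, e.g. "stock" -- "market" -- "economy".
C2 = ((C1 @ C1 > 0).astype(int)) & (C1 == 0)
np.fill_diagonal(C2, 0)

print(C2[index["stock"], index["economy"]])  # prints 1: linked only at 2nd order
```

"stock" and "economy" never co-occur in a single document, yet the second-order path through "market" connects them — the kind of cross-instance dependency the paper's framework exploits.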

Cited by 21 publications (20 citation statements) · References 15 publications (33 reference statements)
“…A simplified model, the HOTK, uses higher-order paths between terms. In this sense, it is similar to the previously proposed term-based higher-order learning algorithms Higher-Order Naïve Bayes (HONB) (Ganiz et al., 2009) and Higher-Order Smoothing (HOS) (Poyraz et al., 2012, 2014).…”
Section: Introduction (supporting)
confidence: 66%
“…Building on this work, the authors in Ganiz et al. (2009, 2011) developed a new Bayesian classification framework called Higher-Order Naive Bayes (HONB), which shows that words in documents are strongly connected by such higher-order paths and that these paths can be exploited to achieve better classification performance. Both HONB (Ganiz et al., 2009) and HOS (Poyraz et al., 2012, 2014) are based on Naïve Bayes.…”
Section: Semantic Kernels for Text Classification (mentioning)
confidence: 99%
“…Identifying mislabeled or anomalous shipments through scrutiny of manifest data is one step in a multi-layer inspection process for containers arriving at ports, described in [27]. Other relevant work on risk scoring and anomaly detection from manifest data is found in [5] and [13].…”
Section: Introduction (mentioning)
confidence: 99%