2016
DOI: 10.1609/aaai.v30i1.10208

Learning Future Classifiers without Additional Data

Abstract: We propose probabilistic models for predicting future classifiers given labeled data with timestamps collected until the current time. In some applications, the decision boundary changes over time. For example, in spam mail classification, spammers continuously create new spam mails to overcome spam filters, and therefore, the decision boundary that classifies spam or non-spam can vary. Existing methods require additional labeled and/or unlabeled data to learn a time-evolving decision boundary. However, collec…
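The core idea in the abstract, modeling how a decision boundary drifts over time so that a future classifier can be predicted without new data, can be illustrated with a small sketch. The following is an assumption-laden toy example, not the authors' implementation: it fits a logistic regression per time step on synthetic drifting data and extrapolates the weight trajectory with a first-order vector autoregressive (VAR) model, the model family mentioned in the citation statements below.

```python
# Illustrative sketch (not the paper's code): per-time-step classifiers whose
# weight trajectory is extrapolated one step ahead with a VAR(1) model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic drifting data: the true decision boundary rotates over time.
T, n, d = 8, 200, 2
weight_history = []
for t in range(T):
    angle = 0.15 * t
    w_true = np.array([np.cos(angle), np.sin(angle)])
    X = rng.normal(size=(n, d))
    y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(int)
    clf = LogisticRegression().fit(X, y)
    weight_history.append(np.concatenate([clf.coef_.ravel(), clf.intercept_]))

W = np.stack(weight_history)  # shape (T, d+1): one weight vector per time step

# Fit W[t+1] ~ W[t] @ A by least squares, then extrapolate one step ahead:
# a "future classifier" obtained without collecting any future data.
A, *_ = np.linalg.lstsq(W[:-1], W[1:], rcond=None)
w_future = W[-1] @ A
print("predicted future weights:", w_future)
```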

Cited by 6 publications (8 citation statements); references 24 publications (25 reference statements). Citation types: 0 supporting, 8 mentioning, 0 contrasting.
“…Present is also considered as a weighting method, where there is a weight only at the current time. The AAAI16 method is for learning future classifiers by using the one-step algorithm (Kumagai and Iwata 2016). With the proposed method, we ran the learning four times with different initial conditions, and selected the result with the highest lower bound.…”
Section: Comparison Methods (confidence: 99%)
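The restart strategy quoted above (several runs from different initial conditions, keeping the run with the highest variational lower bound) is a standard device for non-convex variational fits. A minimal sketch, assuming a stand-in variational-style model (scikit-learn's GaussianMixture, which exposes its EM lower bound) rather than the paper's model:

```python
# Sketch of restarts selecting the best lower bound; model is a placeholder.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])

best_model, best_bound = None, -np.inf
for seed in range(4):  # four runs with different initial conditions, as quoted
    model = GaussianMixture(n_components=2, n_init=1, random_state=seed).fit(X)
    if model.lower_bound_ > best_bound:  # keep the run with the highest bound
        best_model, best_bound = model, model.lower_bound_

print(f"best lower bound over 4 restarts: {best_bound:.3f}")
```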
“…For Batch, Online and Present, we chose the regularization parameter from {10⁻¹, 1, 10¹} in terms of which average AUC over all test time units was the best. For AAAI16, we set the parameters for the gamma priors to the same values as those in a previous study (Kumagai and Iwata 2016), ran the learning four times with different initial conditions, and changed the order of vector autoregressive models from 1, …”
Section: Comparison Methods (confidence: 99%)
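The model-selection step quoted above (picking the regularization parameter from {10⁻¹, 1, 10¹} by the best average AUC over all test time units) amounts to a simple grid search. This is a hypothetical reconstruction with synthetic data, not the study's pipeline; note that scikit-learn's C is the inverse of the regularization strength:

```python
# Sketch: choose the regularization setting by average AUC across time units.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
time_units = []
for t in range(5):  # one synthetic (train, test) pair per time unit
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)
    time_units.append((X[:100], y[:100], X[100:], y[100:]))

best_c, best_auc = None, -np.inf
for c in [0.1, 1.0, 10.0]:  # the {10^-1, 1, 10^1} grid from the quote
    aucs = []
    for X_tr, y_tr, X_te, y_te in time_units:
        clf = LogisticRegression(C=c).fit(X_tr, y_tr)
        aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    if np.mean(aucs) > best_auc:
        best_c, best_auc = c, float(np.mean(aucs))

print(f"selected C={best_c} with average AUC {best_auc:.3f}")
```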