2018 IEEE 4th International Conference on Identity, Security, and Behavior Analysis (ISBA)
DOI: 10.1109/isba.2018.8311467

Continuous authentication using one-class classifiers and their fusion

Abstract: While developing continuous authentication systems (CAS), we generally assume that samples from both the genuine and impostor classes are readily available. However, this assumption may not hold in certain circumstances. Therefore, we explore the possibility of implementing CAS using only genuine samples. Specifically, we investigate the usefulness of four one-class classifiers (OCC), namely the elliptic envelope, isolation forest, local outlier factor, and one-class support vector machine, and their fusion. The performance…
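As a rough illustration of the approach described in the abstract, the following Python sketch trains the four named one-class classifiers on genuine-user samples only and fuses their scores. This is not the authors' code: the synthetic feature vectors, the min-max score normalization, and the 0.5 decision threshold are assumptions made here purely for illustration.

```python
# A minimal sketch (not the paper's code) of the four one-class classifiers named in
# the abstract, trained on genuine samples only, with a simple score-level fusion.
import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_genuine_train = rng.normal(0.0, 1.0, size=(200, 10))  # genuine-user feature vectors (stand-in data)
X_test = rng.normal(0.5, 1.2, size=(50, 10))            # unlabeled test samples (stand-in data)

classifiers = {
    "elliptic_envelope": EllipticEnvelope(contamination=0.05),
    "isolation_forest": IsolationForest(contamination=0.05, random_state=0),
    "local_outlier_factor": LocalOutlierFactor(novelty=True, contamination=0.05),
    "one_class_svm": OneClassSVM(nu=0.05, kernel="rbf", gamma="scale"),
}

scores = []
for name, clf in classifiers.items():
    clf.fit(X_genuine_train)                       # train on genuine samples only
    scores.append(clf.decision_function(X_test))   # higher score = more "genuine-like"

# Score-level fusion by averaging min-max normalized scores (one plausible fusion rule,
# assumed here; the paper evaluates its own fusion schemes).
normalized = [(s - s.min()) / (s.max() - s.min() + 1e-12) for s in scores]
fused = np.mean(normalized, axis=0)
decisions = fused >= 0.5                           # accept as genuine above an assumed threshold
print(decisions[:10])
```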

Cited by 34 publications (25 citation statements). References 22 publications (43 reference statements).
“…If the user allows a detection latency of 4 seconds, which is usually not long enough for an impostor to perform malicious operations on the smartphone after stealing it, the accuracy can be increased to 90.24%. Although the accuracy is not perfect, it is comparable to the state-of-the-art one-class smartphone authentication using various handcrafted features and complex model fusion [11] in the literature. Also, our privacy-preserving one-class model (PED-LSTM-Vote-200) achieves better detection accuracy, F1 score, TPR, and TNR results than the state-of-the-art two-class KRR model with hand-crafted features [13] for this data set when both use a 64-reading window.…”
Section: Insights From Algorithm Performance
confidence: 73%
“…There have been preliminary works on authenticating smartphone users with only that user's data, which might fit under our LAD scenario. For example, multi-motion sensors [26], fusion of swiping and phone movement patterns [11], and keystroke dynamics [10,14] have been used for one-class smartphone user authentication. However, these works leveraged manually crafted features, and deep learning models were not found to outperform conventional machine learning models.…”
Section: Related Work
confidence: 99%
“…The extracted features were used to train the authentication models. The training part consisted of training one-class [23,24] or two-class classifiers [3,6,19], while the testing part focused on passing genuine and non-genuine samples through the model and computing the genuine fail rates (false reject rates) and impostor pass rates (false accept rates).…”
Section: Continuous Authentication Via Touch Gestures
confidence: 99%
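The evaluation step described in the citation above, passing genuine and non-genuine samples through a trained model and computing false reject and false accept rates, can be sketched as follows. The score distributions and the 0.5 threshold are placeholders assumed here, not details taken from the cited papers.

```python
# A minimal sketch (assumed, not the cited papers' code) of computing the genuine fail
# rate (false reject rate, FRR) and impostor pass rate (false accept rate, FAR)
# from per-sample decision scores and a fixed acceptance threshold.
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    """Accept a sample when its score is >= threshold."""
    genuine_scores = np.asarray(genuine_scores)
    impostor_scores = np.asarray(impostor_scores)
    frr = np.mean(genuine_scores < threshold)    # genuine samples wrongly rejected
    far = np.mean(impostor_scores >= threshold)  # impostor samples wrongly accepted
    return far, frr

# Example with stand-in scores: genuine scores tend to sit above impostor scores.
rng = np.random.default_rng(1)
far, frr = far_frr(rng.normal(0.7, 0.1, 500), rng.normal(0.3, 0.1, 500), threshold=0.5)
print(f"FAR={far:.3f}, FRR={frr:.3f}")
```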
“…Impostors refers to every user in the database except the Genuine user. It is important to note that the impostors did not make any deliberate attempt to mimic the Genuine user; their regular gait patterns were used as impostor samples, following past studies [23,26,27]. However, the Imitator is the user who made a deliberate attempt to copy the Genuine user after receiving feedback-based training (see Section 3.3 for more details).…”
Section: Related Work
confidence: 99%
“…Following previous studies [23,26,30,31,40,45,57,59], especially Kumar et al. [27], we employed Bayes Network (Bayes), Logistic Regression (LogReg), Multilayer Perceptron (MulPer), Random Forest (RanFor), and SVM to classify the feature vectors as genuine or impostor. Besides these classifiers, we included kNN because previous studies [26,41,57] have demonstrated its superiority over the other approaches. To be trained, these classifiers required feature vectors from both the genuine and impostor classes.…”
Section: Choice Of Classification
confidence: 99%
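As a hedged sketch of the two-class setup described in this citation, the snippet below trains off-the-shelf scikit-learn stand-ins for the listed classifiers on synthetic genuine and impostor feature vectors. GaussianNB approximates the Bayes Network and MLPClassifier the Multilayer Perceptron, and none of the data or hyperparameters come from the cited studies.

```python
# A minimal sketch (not the cited studies' code) of two-class genuine-vs-impostor
# classification. Feature vectors are synthetic placeholders; model choices below
# are common stand-ins for the classifiers named in the citation.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
X_genuine = rng.normal(0.0, 1.0, size=(300, 10))   # genuine-user feature vectors (placeholder)
X_impostor = rng.normal(1.0, 1.0, size=(300, 10))  # impostor feature vectors (placeholder)
X = np.vstack([X_genuine, X_impostor])
y = np.array([1] * len(X_genuine) + [0] * len(X_impostor))  # 1 = genuine, 0 = impostor
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

models = {
    "Bayes (GaussianNB stand-in)": GaussianNB(),
    "LogReg": LogisticRegression(max_iter=1000),
    "MulPer": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    "RanFor": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(kernel="rbf", gamma="scale"),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)  # requires feature vectors from both classes, as the citation notes
    print(name, accuracy_score(y_te, model.predict(X_te)))
```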