The Android pattern lock remains widely used for mobile user authentication. However, many concerns have been raised about its security and usability: user-created patterns tend to be simply structured or drawn from a small set, complex patterns are hard to memorize, and input patterns are susceptible to attacks such as guessing, smudge, and shoulder-surfing attacks. This paper presents a novel mechanism based on the pattern lock in which behavioral biometrics are employed to address these problems. Our basic idea is to turn the lock pattern into public knowledge rather than a secret and to leverage touch dynamics. Users do not need to create or memorize their own lock patterns. Instead, our system shows a public pattern along with guidance on how to draw it; all the user needs to do for authentication is draw the pattern as shown. For adversaries, the above-mentioned attacks are rendered useless by this new mechanism. Specifically, we study how to generate the public patterns and how to perform authentication. We considered segments, angles, directions, and turns as units for constructing lock patterns, and established criteria for public patterns; these results were used to generate four public patterns in our experiment. For authentication, we achieved equal error rates (EERs) as low as 2.66% (sitting), 3.53% (walking), and 5.83% (combined). Furthermore, the results of our additional experiments demonstrated that our system preserved its performance over time (F1-score = 89.88%, SD = 4.60%) and was sufficiently secure against camera-based recording attacks (FAR = 3.25%).

INDEX TERMS Behavioral authentication, Android pattern lock, smartphone, machine learning.

I. INTRODUCTION

Smartphones have become a part of our daily lives, and their functionality has increased significantly; mobile user authentication has therefore become an essential mechanism for the security and privacy of users.
Currently, various authentication methods, such as PINs, passwords, biometrics, and the pattern lock, are used among smartphone users, and each scheme has its advantages and disadvantages [10], [26]. The Android pattern lock, which is still widely used for mobile user authentication, dates back to earlier recall-based systems such as Draw-A-Secret (DAS) [22] and Pass-Go [40]. Users are asked to create and memorize a graphical pattern on a 3 × 3 grid; for authentication, they must recall the pattern and then draw it with a finger on the grid.

The associate editor coordinating the review of this manuscript and approving it for publication was Xiaofan He.
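The equal error rate (EER) reported in the abstract is the operating point at which the false acceptance rate (impostors accepted) equals the false rejection rate (genuine users rejected). The paper does not give its scoring code; the following is only a minimal illustrative sketch of how an EER can be approximated from hypothetical classifier score distributions, not the authors' implementation:

```python
import numpy as np

def eer(genuine, impostor):
    """Approximate the equal error rate by scanning candidate
    thresholds and taking the smallest max(FAR, FRR)."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best = 1.0
    for t in thresholds:
        far = np.mean(impostor >= t)  # impostor scores accepted
        frr = np.mean(genuine < t)    # genuine scores rejected
        best = min(best, max(far, frr))
    return best

# Hypothetical score distributions (higher score = more genuine-like)
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.10, 1000)
impostor = rng.normal(0.4, 0.15, 1000)
print(f"EER ~ {eer(genuine, impostor):.2%}")
```

Well-separated genuine and impostor score distributions yield a low EER; heavily overlapping distributions push it toward 50%, which is why the reported 2.66% (sitting) indicates strong discriminability of the touch-dynamics features.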