“…RELATED WORK There is a large body of work demonstrating that install-time prompts fail because users do not understand or pay attention to them [17], [21], [37]. When using install-time prompts, users often do not understand which permission types correspond to which sensitive resources and are surprised by the ability of background applications to collect information [15], [20], [36].…”
Abstract: Current smartphone operating systems regulate application permissions by prompting users on an ask-on-first-use basis. Prior research has shown that this method is ineffective because it fails to account for context: the circumstances under which an application first requests access to data may be vastly different from those under which it subsequently requests access. We performed a longitudinal 131-person field study to analyze the contextuality behind user privacy decisions to regulate access to sensitive resources. We built a classifier to make privacy decisions on the user's behalf by detecting when context has changed and, when necessary, inferring privacy preferences based on the user's past decisions and behavior. Our goal is to automatically grant appropriate resource requests without further user intervention, deny inappropriate requests, and only prompt the user when the system is uncertain of the user's preferences. We show that our approach can accurately predict users' privacy decisions 96.8% of the time, which is a four-fold reduction in error rate compared to current systems.
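The abstract describes a three-way policy: auto-grant when the classifier is confident the user would allow, auto-deny when it is confident they would deny, and prompt only when uncertain. A minimal sketch of that decision rule, with illustrative thresholds that are not taken from the paper:

```python
# Three-way permission policy driven by classifier confidence.
# p_allow is the classifier's estimated probability that the user
# would grant this request; thresholds below are illustrative only.

def decide(p_allow, hi=0.9, lo=0.1):
    """Return 'grant', 'deny', or 'prompt' based on confidence."""
    if p_allow >= hi:
        return "grant"   # confident the user would allow
    if p_allow <= lo:
        return "deny"    # confident the user would deny
    return "prompt"      # uncertain: ask the user

# Example: a middling probability falls back to prompting the user.
decisions = [decide(p) for p in (0.95, 0.50, 0.02)]
```

Prompting only in the uncertain band is what reduces user burden relative to ask-on-first-use while keeping error rates low.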
“…As we will show in Table 2, a linear classifier using API calls and permissions as input features, which are the most popular and best-performing input features for Android malware detectors [5,8,10,14,22,36], performs badly on new malware instances (the testing set), although it has very good classification performance on the validation set. In this section, we show that unwanted behaviours improve the classification performance of new malware detection.…”
Section: Evaluation: Detecting New Malware
confidence: 99%
“…To automatically detect Android malware, machine learning methods have been applied to train malware classifiers [5,8,21,22,36]. Among them, the tool Drebin [8] extracts a broad range of features, such as permissions, components, API calls and intents, then trains an SVM classifier.…”
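Drebin, as the snippet describes, embeds each app as a binary feature vector over a joint vocabulary of permissions, API calls, components, and intents, then trains a linear SVM. A stdlib-only sketch of that vectorization step, with a toy vocabulary and hypothetical app names (real systems extract thousands of features from APKs):

```python
# Drebin-style feature embedding: each app becomes a 0/1 vector over the
# union of all features observed across the corpus. Feature and app names
# below are illustrative, not taken from any real dataset.

def build_vocabulary(apps):
    """Union of all features seen across apps, in a stable sorted order."""
    return sorted({f for feats in apps.values() for f in feats})

def vectorize(features, vocab):
    """Map one app's feature set to a binary vector over the vocabulary."""
    fs = set(features)
    return [1 if f in fs else 0 for f in vocab]

apps = {
    "benign_notes": {"perm:INTERNET", "api:Log.d"},
    "sms_stealer":  {"perm:RECEIVE_SMS", "perm:SEND_SMS",
                     "api:SmsManager.sendTextMessage"},
}

vocab = build_vocabulary(apps)
x = vectorize(apps["sms_stealer"], vocab)
```

A linear classifier (Drebin uses an SVM) would then be trained on these vectors with malware/benign labels.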
Section: Introduction
confidence: 99%
“…DroidAPIMiner [5] uses refined API calls and relies on the KNN (k-nearest neighbours) algorithm. Another interesting tool is CHABADA [22], which detects outliers (abnormal API usage) within clusters of applications by exploiting OC-SVM (one-class SVM). All of these classifiers try to obtain good fits to the training data by using different methods and various kinds of features.…”
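CHABADA's idea is that an app whose API usage is abnormal for its cluster is suspicious; the tool implements this with a one-class SVM. As a simplified stdlib-only stand-in for that outlier scoring, the sketch below scores each app by its Euclidean distance from the cluster's mean API-usage vector (the data and the centroid-distance scoring are illustrative substitutes, not CHABADA's actual method):

```python
import math

# Outlier detection within one app cluster: apps far from the cluster's
# mean API-usage profile are flagged. CHABADA uses a one-class SVM; the
# centroid-distance score here is a deliberately simplified stand-in.

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def outlier_score(vec, center):
    """Euclidean distance from the cluster centroid; larger = more unusual."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(vec, center)))

# Binary API-usage vectors for apps in one cluster (illustrative data).
cluster = [
    [1, 1, 0, 0],  # typical API usage for this cluster
    [1, 1, 0, 0],  # typical API usage for this cluster
    [1, 0, 1, 1],  # uses APIs unusual for this cluster
]
c = centroid(cluster)
scores = [outlier_score(v, c) for v in cluster]
```

The app with uncharacteristic API usage receives the highest score, which is exactly the signal the one-class approach exploits.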
Abstract. Machine-learning-based Android malware classifiers perform badly at detecting new malware, in particular when they take API calls and permissions as input features, which are the best-performing features known so far. This is mainly because signature-based features are very sensitive to the training data and cannot capture general behaviours of identified malware. To improve the robustness of classifiers, we study the problem of learning and verifying unwanted behaviours abstracted as automata. They are common patterns shared by malware instances but rarely seen in benign applications, e.g., intercepting and forwarding incoming SMS messages. We show that by taking the verification results against unwanted behaviours as input features, the classification performance of detecting new malware improves dramatically. In particular, the precision and recall are 8 and 51 percentage points higher, respectively, than those using API calls and permissions, measured against industrial datasets collected across several years. Our approach integrates several methods: formal methods, machine learning and text mining techniques. It is the first to automatically generate unwanted behaviours for Android malware detection. We also demonstrate unwanted behaviours constructed for well-known malware families. They compare well to those described in human-authored descriptions of these families.
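The abstract's pipeline verifies each app against a set of unwanted-behaviour automata and feeds the boolean verdicts to the classifier as features. A minimal stand-in sketch, where a "behaviour" is an ordered event pattern and "verification" is a subsequence check over the app's abstract event trace (the behaviour names, event names, and matching mechanism are all illustrative simplifications of the paper's automata-based verification):

```python
# Unwanted-behaviour verdicts as classifier features. Each behaviour is an
# ordered pattern of abstract events; an app "exhibits" it if the pattern
# occurs as an ordered (not necessarily contiguous) subsequence of its trace.

def matches(trace, pattern):
    """True if pattern is an ordered subsequence of trace."""
    it = iter(trace)
    return all(ev in it for ev in pattern)  # 'in' consumes the iterator

# Illustrative behaviours, e.g. intercept-and-forward incoming SMS.
BEHAVIOURS = {
    "sms_forward": ("onReceive:SMS_RECEIVED", "sendTextMessage"),
    "loc_exfil":   ("getLastKnownLocation", "HttpURLConnection.connect"),
}

def behaviour_features(trace):
    """Binary feature vector of verification verdicts, in a stable order."""
    return [int(matches(trace, BEHAVIOURS[name])) for name in sorted(BEHAVIOURS)]

trace = ["onReceive:SMS_RECEIVED", "getMessageBody", "sendTextMessage"]
feats = behaviour_features(trace)
```

Because these features encode behaviour rather than specific API signatures, they transfer better to malware unseen at training time, which is the abstract's central claim.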
“…The number of topics has been selected according to the number of labels resulting from the open coding session and reported in Table 8.4. Although a near-optimal configuration of LDA could require proper parameter tuning (e.g., through search-based optimization techniques [PDO + 13]), in this work we have set the number of topics equal to the number of expected categories, an approach already followed when LDA has been used to categorize text [GTGZ14].…”
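The configuration choice the snippet describes, fixing the number of LDA topics to the number of expected categories, can be sketched with scikit-learn (the corpus, the category count, and the use of scikit-learn are illustrative assumptions; the quoted work's own tooling and data differ):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus of app-behaviour descriptions (illustrative data only).
docs = [
    "send sms message to premium number",
    "read contacts and upload contact list",
    "track gps location in background",
    "send sms text to premium shortcode",
]

# The snippet's approach: topics = number of expected categories,
# e.g. the label count from an open-coding session (value is illustrative).
n_expected_categories = 3

X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=n_expected_categories,
                                random_state=0)
doc_topics = lda.fit_transform(X)  # shape: (n_docs, n_expected_categories)
```

As the snippet notes, search-based tuning of the topic count could yield a nearer-optimal configuration; fixing it to the expected category count trades that for a direct topic-to-category mapping.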