To filter SSL/TLS-protected traffic, some antivirus and parental-control applications interpose a TLS proxy in the middle of the host's communications. We set out to analyze such proxies, as there are known problems in other, more mature TLS processing engines, such as browsers and common TLS libraries. Compared to regular proxies, client-end TLS proxies impose several unique constraints and must be analyzed for additional attack vectors; e.g., proxies may trust their own root certificates for externally-delivered content and rely on a custom trusted CA store (bypassing OS/browser stores). Covering existing and new attack vectors, we design an integrated framework to analyze such client-end TLS proxies. Using the framework, we perform a thorough analysis of eight antivirus and four parental-control applications for Windows that act as TLS proxies, along with two additional products that only import a root certificate. Our systematic analysis uncovered that several of these tools severely affect TLS security on their host machines. In particular, we found that four products are vulnerable to full server impersonation under an active man-in-the-middle (MITM) attack out of the box, and two more if TLS filtering is enabled. Several of these tools also mislead browsers into believing that a TLS connection is more secure than it actually is, e.g., by artificially upgrading a server's TLS version at the client. Our work is intended to highlight new risks introduced by TLS interception tools, which are possibly used by millions of users.
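The version-upgrading hazard above can be captured in a toy model: a client-end proxy terminates the browser's TLS connection and opens its own to the server, so end-to-end protection is bounded by the weaker of the two legs, regardless of what the browser's UI reports. The sketch below is our own illustration, not any product's actual code.

```python
# Toy model of a client-end TLS proxy that splits one connection into two
# legs. The browser only sees its own leg to the proxy, so it may report a
# stronger protocol version than the proxy-to-server leg actually uses.

TLS_ORDER = ["SSLv3", "TLSv1.0", "TLSv1.1", "TLSv1.2", "TLSv1.3"]

def effective_version(browser_leg: str, server_leg: str) -> str:
    """End-to-end protection is bounded by the weaker of the two legs."""
    return min(browser_leg, server_leg, key=TLS_ORDER.index)
```

For example, if the proxy speaks TLS 1.2 to the browser but falls back to SSL 3.0 toward the server, `effective_version("TLSv1.2", "SSLv3")` is `"SSLv3"`, even though the browser shows a TLS 1.2 connection.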
Millions of users are exposed to password-strength meters/checkers at highly popular web services that use user-chosen passwords for authentication. Recent studies have found evidence that some meters actually guide users to choose better passwords, which is a fairly rare bit of good news in password research. However, these meters are mostly based on ad-hoc design. At least, as we found, most vendors do not provide any explanation of their design choices, sometimes making them appear to be a black box. We analyze password meters deployed on selected popular websites, by measuring the strength labels assigned to common passwords from several password dictionaries. From this empirical analysis with millions of passwords, we report prominent characteristics of meters as deployed at popular websites. We shed light on how the server end of some meters functions, and provide examples of highly inconsistent strength outcomes for the same password in different meters, along with examples of many weak passwords being labeled as strong or even very strong. These weaknesses and inconsistencies may confuse users in choosing a stronger password, and thus may defeat the purpose of these meters. On the other hand, we believe these findings may help improve existing meters, and possibly make them an effective tool in the long run.
Passwords are ubiquitous in our daily digital lives. They protect various types of assets, ranging from a simple account on an online newspaper website to our health information on government websites. However, due to the inherent value they protect, attackers have developed insights into cracking/guessing passwords both offline and online. In many cases, users are forced to choose stronger passwords to comply with password policies; such policies are known to alienate users and do not significantly improve password quality. Another solution is to put in place proactive password-strength meters/checkers to give feedback to users while they create new passwords. Millions of users are now exposed to these meters on highly popular web services that use user-chosen passwords for authentication. More recently, these meters are also being built into popular password managers, which protect several user secrets including passwords. Recent studies have found evidence that some meters actually guide users to choose better passwords, which is a rare bit of good news in password research. However, these meters are mostly based on ad hoc design. At least, as we found, most vendors do not provide any explanation for their design choices, sometimes making them appear as a black box. We analyze password meters deployed on selected popular websites and in password managers. We document obfuscated source-available meters, infer the algorithms behind the closed-source ones, and measure the strength labels assigned to common passwords from several password dictionaries. From this empirical analysis with millions of passwords, we shed light on how the server end of some web service meters functions and provide examples of highly inconsistent strength outcomes for the same password in different meters, along with examples of many weak passwords being labeled as strong or even excellent.
These weaknesses and inconsistencies may confuse users in choosing a stronger password, and thus may defeat the purpose of these meters. On the other hand, we believe these findings may help improve existing meters and possibly make them an effective tool in the long run.

ACM Reference Format: Xavier de Carné de Carnavalet and Mohammad Mannan. 2015. A large-scale evaluation of high-impact password strength meters.
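To illustrate why ad-hoc designs can mislabel passwords, the sketch below implements a typical "LUDS"-style rule (length plus lower/upper/digit/symbol character classes). The thresholds are our own assumption, not the actual algorithm of any meter studied in the paper.

```python
# An illustrative LUDS-style meter: score by length and count of character
# classes. Thresholds are hypothetical; real meters vary widely.
import re

CLASS_PATTERNS = (r"[a-z]", r"[A-Z]", r"\d", r"[^A-Za-z0-9]")

def luds_score(pw: str) -> str:
    """Return a strength label from simple length/character-class rules."""
    classes = sum(bool(re.search(p, pw)) for p in CLASS_PATTERNS)
    if len(pw) >= 8 and classes >= 3:
        return "strong"
    if len(pw) >= 6 and classes >= 2:
        return "medium"
    return "weak"
```

Such rules rate a dictionary favorite like "P@ssw0rd1" as strong (it hits all four classes) while rating a long all-lowercase passphrase as weak, the kind of inconsistency this work measures at scale.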
The majority of computer users download compiled software and run it directly on their machine. Apparently, this is also true for open-source software: most users would not compile the available source, and implicitly trust that the available binaries have been compiled from the published source code (i.e., that no backdoor has been inserted into the binary). To verify that the official binaries indeed correspond to the released source, one can compile the source of a given application, and then compare the locally generated binaries with the developer-provided official ones. However, such simple verification is non-trivial to achieve in practice, as modern compilers, and more generally, toolchains used in software packaging, have not been designed with verifiability in mind. Rather, the output of compilers is often dependent on parameters that can be strongly tied to the build environment. In this paper, we analyze a widely-used encryption tool, TrueCrypt, to verify its official binaries against the corresponding source. We first manually replicate a close match to the official binaries of the sixteen most recent versions of TrueCrypt for Windows up to v7.1a, and then explain the remaining differences, which can solely be attributed to non-determinism in the build process. Our analysis provides the missing guarantee that official binaries are indeed backdoor-free, and makes audits of TrueCrypt's source code more meaningful. Also, we uncover several sources of non-determinism in TrueCrypt's compilation process; these findings may help create future verifiable build processes.
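The comparison described above can be sketched as follows: hash the official and the rebuilt binaries after masking one well-known source of build non-determinism, the TimeDateStamp field of the PE/COFF header. This is only a minimal sketch; a real verification would also have to neutralize embedded build paths, Authenticode signatures, and other differences, and the helper names here are ours.

```python
# Compare two Windows PE binaries while masking the COFF TimeDateStamp,
# one common source of non-determinism between otherwise identical builds.
import hashlib
import struct
from pathlib import Path

def pe_digest_ignoring_timestamp(path: str) -> str:
    """SHA-256 of a PE file with its COFF TimeDateStamp zeroed out."""
    data = bytearray(Path(path).read_bytes())
    # e_lfanew (DWORD at offset 0x3C of the DOS header) points to the
    # "PE\0\0" signature; the TimeDateStamp sits 8 bytes after it.
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]
    struct.pack_into("<I", data, e_lfanew + 8, 0)
    return hashlib.sha256(data).hexdigest()

def matches(official: str, rebuilt: str) -> bool:
    """True when the two binaries differ only in the masked timestamp."""
    return pe_digest_ignoring_timestamp(official) == pe_digest_ignoring_timestamp(rebuilt)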
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.
hi@scite.ai
10624 S. Eastern Ave., Ste. A-614
Henderson, NV 89052, USA
Copyright © 2024 scite LLC. All rights reserved.
Made with 💙 for researchers
Part of the Research Solutions Family.