ACM Transactions on Privacy and Security (TOPS)

Latest Articles

ANCHOR: Logically Centralized Security for Software-Defined Networks

Software-defined networking (SDN) decouples the control and data planes of traditional networks, logically centralizing the functional properties of the network in the SDN controller. While this centralization brought advantages such as a faster pace of innovation, it also disrupted some of the natural defenses of traditional architectures against…

Safe and Efficient Implementation of a Security System on ARM using Intra-level Privilege Separation

Security monitoring has long been considered a fundamental mechanism to mitigate the damage of a…


About TOPS

ACM TOPS publishes high-quality research results in the fields of information and system security and privacy. Studies addressing all aspects of these fields are welcomed, ranging from technologies to systems and applications to the crafting of policies.


Forthcoming Articles

Usability Study of Four Secure Email Tools Using Paired Participants

Secure email is increasingly being touted as usable by novice users, with a push for adoption based on recent concerns about government surveillance. To determine whether secure email is ready for grassroots adoption, we employ a laboratory user study that recruits pairs of novice users to install and use several of the latest systems to exchange secure messages. We present both quantitative and qualitative results from 28 pairs of novices as they use Pwm, Tutanota, and Virtru and 10 pairs of novices as they use Mailvelope. Participants report being more at ease with this type of study and better able to cope with mistakes since both participants are "on the same page." We find that users prefer integrated solutions over depot-based solutions and that tutorials are important in helping first-time users. Additionally, hiding the details of how a secure email system provides security can lead to a lack of trust in the system. Finally, our results demonstrate that PGP using manual key management is still unusable for novice users, with 9 out of 10 participant pairs failing to complete the study.

Using Episodic Memory for User Authentication

We propose a new authentication mechanism, called "life-experience passwords (LEPs)." Sitting somewhere between passwords and security questions, a LEP consists of several facts about a user-chosen life event, such as a trip, a graduation, or a wedding. At LEP creation, the system extracts these facts from the user's input and transforms them into questions and answers. At authentication, the system prompts the user with the questions and matches her answers against the stored ones. We show that question choice and design make LEPs much more secure than security questions and passwords, while the question-answer format promotes password diversity and recall, lowering reuse. Specifically, we find that: (1) LEPs are 10^9 to 10^14 times stronger than an ideal, randomized, 8-character password, (2) LEPs are up to 3 times more memorable than passwords and on par with security questions, and (3) LEPs are reused half as often as passwords. While both LEPs and security questions use personal experiences for authentication, LEPs use several questions closely tailored to each user, which increases LEP security against guessing attacks. In our evaluation, only 0.7% of LEPs were guessed by casual friends, and 9.5% by family members or close friends, roughly half the guessing rate for security questions. On the downside, LEPs take around 5 times longer to input than passwords. These qualities make LEPs suitable for multi-factor authentication at high-value servers, such as financial or sensitive work servers, where stronger authentication is needed.
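
To make the create/authenticate flow concrete, here is a minimal Python sketch under simplifying assumptions: answers are salted, slow-hashed, and matched exactly. The names and matching policy (normalize, LifeExperiencePassword) are ours, not the paper's; the real system extracts facts from free-form input rather than taking a ready-made question-answer dictionary.

    # Minimal sketch of the LEP create/authenticate flow, assuming an
    # exact-match policy over salted, slow-hashed answers. All names here
    # are hypothetical; the actual system extracts facts from free-form
    # user input.
    import hashlib
    import hmac
    import os

    def normalize(answer):
        # Case-fold and collapse whitespace so minor variations still match.
        return " ".join(answer.lower().split())

    def slow_hash(answer, salt):
        return hashlib.pbkdf2_hmac("sha256", normalize(answer).encode(), salt, 100_000)

    class LifeExperiencePassword:
        def __init__(self, facts):
            # facts maps an auto-generated question to the user's answer,
            # e.g. {"Where did the wedding take place?": "Lisbon"}.
            self.salt = os.urandom(16)
            self.stored = {q: slow_hash(a, self.salt) for q, a in facts.items()}

        def authenticate(self, answers):
            # Require every question to be answered correctly; the paper's
            # matching policy may be more forgiving.
            return all(
                q in answers and
                hmac.compare_digest(slow_hash(answers[q], self.salt), digest)
                for q, digest in self.stored.items()
            )

    # Example: create a LEP from a wedding event, then authenticate.
    lep = LifeExperiencePassword({"Where did the wedding take place?": "Lisbon",
                                  "Who was the best man?": "Miguel"})
    assert lep.authenticate({"Where did the wedding take place?": " LISBON ",
                             "Who was the best man?": "Miguel"})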

A General Framework for Adversarial Examples with Objectives

Images perturbed subtly to be misclassified by neural networks, called adversarial examples, have emerged as a technically deep challenge and an important concern for several application domains. Most research on adversarial examples takes as its only constraint that the perturbed images are similar to the originals. However, real-world applications of these ideas often require the examples to satisfy additional constraints, which are typically enforced through custom modification of the perturbation process. In this paper, we propose adversarial generative nets (AGNs), a general methodology to train a generator neural network to emit adversarial examples satisfying desired objectives. We demonstrate the ability of AGNs to accommodate a wide array of objectives, including imprecise ones that are difficult to model, in two application domains. In particular, we demonstrate physical adversarial examples (eyeglass frames designed to fool face recognition) with better robustness, inconspicuousness, and scalability than previous approaches, as well as a new attack to fool a handwritten-digit classifier.
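
To make the methodology concrete, the sketch below shows one AGN-style training step, assuming PyTorch; the generator, target model, mask, noise size, and loss weighting are placeholder assumptions rather than the paper's actual architecture or objectives.

    # One AGN-style training step, assuming PyTorch. The generator, target
    # model, mask (e.g. an eyeglass-frame region), and loss weighting are
    # placeholders, not the paper's architecture.
    import torch
    import torch.nn.functional as F

    def agn_step(generator, target_model, images, target_class, mask,
                 optimizer, lambda_obj=0.1):
        optimizer.zero_grad()
        z = torch.randn(images.size(0), 100)       # generator noise input
        perturbation = generator(z) * mask         # confine changes to the mask
        adv = torch.clamp(images + perturbation, 0, 1)
        logits = target_model(adv)
        # Push the classifier toward the attacker's chosen class...
        targets = torch.full((images.size(0),), target_class, dtype=torch.long)
        fool_loss = F.cross_entropy(logits, targets)
        # ...while penalizing large perturbations, a crude stand-in for the
        # paper's inconspicuousness objective.
        obj_loss = perturbation.abs().mean()
        loss = fool_loss + lambda_obj * obj_loss
        loss.backward()
        optimizer.step()
        return loss.item()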

Tractor Beam: Safe-hijacking of Consumer Drones with Adaptive GPS Spoofing

The consumer drone market is booming. Consumer drones are predominantly used for aerial photography; however, their use has been expanding because of their autopilot technology. Unfortunately, terrorists have also begun to use consumer drones for kamikaze bombing and reconnaissance. To protect against such threats, several companies have started anti-drone services that primarily focus on disrupting or incapacitating drone operations. However, the approaches employed are inadequate because they make any intruding drone stop and remain over the protected area. We address this issue by introducing the concept of safe-hijacking, which enables a hijacker to remotely expel an intruding drone from the protected area. As a safe-hijacking strategy, we investigated whether consumer drones in autopilot mode can be hijacked via adaptive GPS spoofing. Specifically, as consumer drones activate a GPS fail-safe and change their flight mode whenever a GPS error occurs, we examined the conditions under which the fail-safe is activated and the corresponding recovery procedures. To this end, we performed black- and white-box analyses of the fail-safe mechanisms of three popular drones: the DJI Phantom 3 Standard, DJI Phantom 4, and 3DR Solo. Based on the results of these analyses, we designed safe-hijacking strategies for each drone. The results of field experiments and software simulations verified the efficacy of our safe-hijacking strategies against these drones and demonstrated that the strategies can force the drones to move in any direction with high accuracy.
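
The steering effect that spoofing exploits can be illustrated with a toy simulation: a position-hold controller that trusts a spoofed GPS fix drifts opposite to the injected offset. The controller gain and dynamics below are invented for illustration and say nothing about the actual drones' fail-safe logic.

    # Toy simulation of the steering effect behind GPS spoofing: a
    # position-hold controller that trusts spoofed coordinates drifts in
    # the direction opposite the injected offset. Gains and dynamics are
    # invented; real fail-safe behavior is drone-specific.
    import numpy as np

    waypoint = np.array([0.0, 0.0])       # where the drone tries to hover
    true_pos = np.array([0.0, 0.0])       # actual position (unknown to the drone)
    spoof_offset = np.array([1.0, 0.0])   # attacker shifts the reported fix east
    gain, dt = 0.5, 0.1                   # hypothetical P-controller gain, time step

    for _ in range(200):
        reported = true_pos + spoof_offset       # what the GPS receiver reports
        velocity = gain * (waypoint - reported)  # controller steers toward waypoint
        true_pos = true_pos + velocity * dt      # ...so the real drone drifts west

    print(true_pos)  # converges near [-1, 0]: displaced opposite the offset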

Resilient Privacy Protection for Location-based Services through Decentralization

Location-based Services (LBSs) provide valuable services with convenient features for mobile users. However, the location and other information disclosed through each query to the LBS erode user privacy. This is a concern especially because LBS providers can be honest-but-curious: collecting queries, tracking users' whereabouts, and inferring sensitive user data. This has motivated both centralized and decentralized location privacy protection schemes for LBSs, which anonymize and obfuscate LBS queries so that exact information is not disclosed while responses remain useful. Decentralized schemes overcome disadvantages of centralized schemes, eliminating anonymizers and enhancing users' control over sensitive information. However, an insecure decentralized system could create serious risks beyond private information leakage. Worse, attacking an improperly designed decentralized LBS privacy protection scheme could be an effective and low-cost step toward breaching user privacy. We address exactly this problem by proposing security enhancements for mobile data sharing systems. We protect user privacy while preserving the accountability of user activities, leveraging pseudonymous authentication with mainstream cryptography. We show that our scheme can be deployed on off-the-shelf devices, with experimental results on an automotive testbed.
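
As a rough illustration of pseudonymous authentication built from mainstream cryptography, the sketch below signs an LBS query under a short-lived pseudonym key with ECDSA via the pyca/cryptography package; the pseudonym-issuance step and query format are assumptions, not the paper's protocol.

    # Signing an LBS query under a short-lived pseudonym key with ECDSA,
    # using the pyca/cryptography package. The pseudonym-issuance step (a
    # CA certifying the public key) is omitted, and the query format is an
    # assumption.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Each pseudonym period, the device switches to a fresh key pair whose
    # public half is certified by the pseudonym provider (not shown).
    pseudonym_key = ec.generate_private_key(ec.SECP256R1())

    query = b"POI:coffee;cell:38.74,-9.14"   # an obfuscated LBS query
    signature = pseudonym_key.sign(query, ec.ECDSA(hashes.SHA256()))

    # A peer or the LBS verifies against the pseudonym certificate's public
    # key; verify() raises InvalidSignature if the query was tampered with.
    pseudonym_key.public_key().verify(signature, query, ec.ECDSA(hashes.SHA256()))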

Hybrid Private Record Linkage: Separating Differentially-Private Synopses From Matching Records

Private record linkage protocols allow multiple parties to exchange matching records, which refer to the same entities or have similar values, while keeping the non-matching ones secret. Conventional protocols are based on computationally expensive cryptographic primitives and therefore do not scale. To address these scalability issues, hybrid protocols have been proposed that combine differential privacy techniques with secure multiparty computation techniques. However, a drawback of such protocols is that they disclose to the parties both the matching records and the differentially private synopses of the datasets involved in the linkage. Consequently, differential privacy is no longer always satisfied. To address this issue, we propose a novel framework, which separates the private synopses from the matching records. The two parties do not access the synopses directly, but still use them to efficiently link records. We theoretically prove the security of our framework under the state-of-the-art privacy notion of differential privacy for record linkage (DPRL). In addition, we have developed a simple but effective strategy for releasing private synopses. Extensive experimental results show that our framework is superior to the existing methods in terms of efficiency.
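
One simple form such a synopsis could take is a Laplace-noised histogram over blocking bins, sketched below in Python; the bins, epsilon, and pruning rule are illustrative assumptions, not the framework's actual DPRL-secure release strategy.

    # A Laplace-noised histogram over blocking bins: one common form of
    # differentially private synopsis that hybrid protocols use to prune
    # candidate pairs. Bins, epsilon, and the pruning rule are illustrative.
    import numpy as np

    def private_synopsis(values, bins, epsilon=1.0, rng=None):
        # Per-bin counts with Laplace noise; adding or removing one record
        # changes each count by at most 1, so scale = 1/epsilon.
        rng = rng or np.random.default_rng()
        counts, _ = np.histogram(values, bins=bins)
        noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
        return np.clip(noisy, 0, None)   # negative counts are meaningless

    # Example: both parties summarize a quasi-identifier (year of birth)
    # and only compare records falling in bins that look non-empty to both.
    bins = np.arange(1940, 2011, 10)
    synopsis_a = private_synopsis([1972, 1975, 1991, 1993], bins)
    synopsis_b = private_synopsis([1974, 1990, 1992, 2003], bins)
    candidate_bins = (synopsis_a > 0.5) & (synopsis_b > 0.5)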

Kernel Protection against Just-In-Time Code Reuse

The abundance of memory corruption and disclosure vulnerabilities in kernel code necessitates the deployment of hardening techniques to prevent privilege escalation attacks. As stricter memory isolation mechanisms between the kernel and user space become commonplace, attackers increasingly rely on code reuse techniques to exploit kernel vulnerabilities. In contrast to similar attacks in more restrictive settings, such as web browsers, kernel exploitation gives non-privileged local adversaries great flexibility in abusing memory disclosure vulnerabilities to dynamically discover, or infer, the location of code snippets and construct code-reuse payloads. Recent studies have shown that the coupling of code diversification with the enforcement of a "read XOR execute" (R^X) memory safety policy is an effective defense against the exploitation of userland software, but so far this approach has not been applied to the protection of the kernel itself. In this paper, we fill this gap by presenting kR^X: a kernel hardening scheme based on execute-only memory and code diversification. We study a previously unexplored point in the design space, where a hypervisor or a super-privileged component is not required. Implemented mostly as a set of GCC plugins, kR^X is readily applicable to x86 Linux kernels (both 32- and 64-bit) and can benefit from hardware support (segmentation on x86, MPX on x86-64) to optimize performance. In full protection mode, kR^X incurs a low runtime overhead of 4.04%, which drops to 2.32% when MPX is available, and 1.32% when memory segmentation is in use.

MaMaDroid: Detecting Android Malware by Building Markov Chains of Behavioral Models (Extended Version)

The constant evolution of the Android ecosystem, and of malware itself, makes it hard to design robust tools that can operate for long periods of time without the need for modifications or costly re-training. Aiming to address this issue, we set out to detect malware from a behavioral point of view, modeled as the sequence of abstracted API calls. We introduce MaMaDroid, a static-analysis-based system that abstracts an app's API calls to their class, package, or family, and builds a model, as Markov chains, from the sequences obtained from the app's call graph. This ensures that the model is more resilient to API changes and the feature set is of manageable size. We evaluate MaMaDroid using a dataset of 8.5K benign and 35.5K malicious apps collected over a period of six years, showing that it effectively detects malware (with up to 0.99 F-measure) and keeps its detection capabilities for long periods of time (up to 0.87 F-measure two years after training). We also show that MaMaDroid remarkably improves over DroidAPIMiner, a state-of-the-art detection system that relies on the frequency of (raw) API calls. Aiming to assess whether MaMaDroid's effectiveness mainly stems from the API abstraction or from the sequence modeling, we also evaluate a variant that uses the frequency (instead of the sequence) of abstracted API calls. We find that it is not as accurate, failing to capture maliciousness when trained on malware samples that include API calls equally or more frequently used by benign apps.
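
The core modeling step can be sketched in a few lines of Python: abstract each call to a coarser state and turn transition counts into Markov-chain probabilities that serve as features. The abstraction heuristic and call sequence below are toy assumptions, not MaMaDroid's actual abstraction rules.

    # Sketch of the modeling step: abstract each API call to a coarser
    # state (here a crude two-component package heuristic) and turn
    # transition counts into Markov-chain probabilities, which serve as
    # the feature vector.
    from collections import defaultdict

    def abstract(api_call):
        # e.g. "java.lang.String.length" -> "java.lang"
        return ".".join(api_call.split(".")[:2])

    def markov_features(call_sequence):
        counts = defaultdict(lambda: defaultdict(int))
        states = [abstract(c) for c in call_sequence]
        for src, dst in zip(states, states[1:]):
            counts[src][dst] += 1
        # Row-normalize transition counts into probabilities.
        return {src: {dst: n / sum(dsts.values()) for dst, n in dsts.items()}
                for src, dsts in counts.items()}

    seq = ["java.lang.String.length",
           "android.telephony.SmsManager.sendTextMessage",
           "java.io.File.delete", "java.io.File.delete"]
    print(markov_features(seq))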

