
ACM Transactions on

Privacy and Security (TOPS)

Latest Articles

ISOTOP: Auditing Virtual Networks Isolation Across Cloud Layers in OpenStack

Multi-tenancy in the cloud is a double-edged sword. While it enables cost-effective resource sharing, it increases security risks for the hosted applications. Indeed, multiplexing virtual resources belonging to different tenants on the same physical substrate may lead to critical security concerns...


About TOPS

ACM TOPS publishes high-quality research results in the fields of information and system security and privacy.  Studies addressing all aspects of these fields are welcomed, ranging from technologies, to systems and applications, to the crafting of policies.


Forthcoming Articles
Alpha-Beta Privacy

The specification of privacy goals in a formal way has proved to be not quite trivial so far---until this work. The most widely used approach in formal methods is based on the static equivalence of frames in the applied pi-calculus, basically asking whether or not the intruder is able to distinguish two given worlds. But then a subtle question emerges: how can we be sure that we have specified all pairs of worlds to properly reflect our intuitive privacy goal? To address this problem, we introduce in this paper a novel and declarative way to specify privacy goals, called (α, β)-privacy. This new approach is based on specifying two formulae α and β in first-order logic with Herbrand universes, where α reflects the intentionally released information and β includes the actual cryptographic (technical) messages the intruder can see. Then (α, β)-privacy means that the intruder cannot derive any non-technical statement from β that he cannot derive from α already. We describe by a variety of examples how this notion can be used in practice. Even though (α, β)-privacy does not directly contain a notion of distinguishing between worlds, there is a close relationship to static equivalence of frames that we investigate formally. This allows us to justify (and criticize) the specifications that are currently used in verification tools, and obtain a decision procedure for a large fragment of (α, β)-privacy.
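
In the abstract's terms, (α, β)-privacy is an entailment condition rather than an indistinguishability game. A rough formalization, paraphrasing the abstract (restricting statements to a payload alphabet Σ₀ is our reading of "non-technical", not the paper's exact definition):

    % Sketch of the (alpha, beta)-privacy condition, paraphrased from the
    % abstract: every non-technical statement gamma (a formula over the
    % payload alphabet Sigma_0) that follows from the technical knowledge
    % beta must already follow from the intentionally released alpha.
    \forall \gamma \in \mathcal{L}(\Sigma_0):\quad
        \beta \models \gamma \;\Longrightarrow\; \alpha \models \gamma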

Characterizing the Security of the SMS Ecosystem with Public Gateways

Recent years have seen the Short Message Service (SMS) become a critical component of the security infrastructure, assisting with tasks including identity verification and second-factor authentication. At the same time, this messaging infrastructure has become dramatically more open and connected to public networks than ever before. However, the implications of this openness, the security practices of benign services, and the malicious misuse of this ecosystem are not well understood. In this paper, we provide a comprehensive longitudinal study to answer these questions, analyzing over 900,000 text messages sent to public online SMS gateways over the course of 28 months. From this data, we uncover the geographical distribution of spam messages, study SMS as a transmission medium for malicious content, and find that changes in benign and malicious behaviors in the SMS ecosystem have been minimal during our collection period. Additionally, we identify a range of services sending extremely sensitive plaintext data, implementing low-entropy solutions for one-use codes, and exhibiting behaviors indicating that public gateways are primarily used for evading account creation policies that require verified phone numbers. This latter finding has significant implications for combating phone-verified account fraud and demonstrates that such evasion will continue to be difficult to detect and prevent.
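
The "low-entropy one-use codes" finding can be made concrete with a small measurement: estimate the empirical Shannon entropy of a corpus of observed codes and compare it against what a uniformly random code of the same length would carry. A minimal sketch (the function name and sample data are illustrative, not from the paper):

    import math
    from collections import Counter

    def shannon_entropy_bits(codes):
        # Empirical Shannon entropy (in bits) of a corpus of one-use codes.
        # A uniformly random 6-digit code carries log2(10^6) ~ 19.9 bits;
        # a much lower estimate suggests predictable code generation.
        counts = Counter(codes)
        total = sum(counts.values())
        return -sum((n / total) * math.log2(n / total)
                    for n in counts.values())

    # Illustrative only: real data would come from gateway-scraped messages.
    observed = ["123456", "123457", "123456", "123458", "123456"]
    print(f"{shannon_entropy_bits(observed):.2f} bits "
          f"over {len(set(observed))} distinct codes")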

A Close Look at a Daily Dataset of Malware Samples

The number of unique malware samples is growing out of control. Over the years, security companies have designed and deployed complex infrastructures to collect and analyze this overwhelming number of samples. As a result, on average, a security company collects from its different feeds more than 1M unique files per day. These are automatically stored and processed to extract basic static and dynamic analysis information. However, only a tiny amount of this data is interesting for security researchers and attracts the interest of a human expert. To the best of our knowledge, nobody has systematically dissected these datasets to precisely understand what they really contain. The security community generally discards the problem because of the alleged prevalence of uninteresting samples. In this paper, we guide the reader through a step-by-step analysis of the hundreds of thousands of Windows executables collected every day. Our goal is to show how a company can employ existing state-of-the-art techniques to automatically process these samples, and then perform manual experiments to understand and document the real content of these gigantic datasets. We present the filtering steps, and we discuss in detail the clustering solution we implemented to group samples according to their behavior and to support manual verification. Finally, we use the results of this measurement experiment to provide a rough estimate of both the human and computer resources that are required to get to the bottom of the catch of the day.
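
One standard way to realize the behavior-based grouping the abstract mentions is to reduce each sample to the set of actions observed in a sandbox run and cluster on set similarity. A minimal sketch of that idea, using Jaccard similarity and a simple leader/threshold scheme (the paper's actual pipeline may well differ; all names and data here are illustrative):

    def jaccard(a, b):
        # Set similarity in [0, 1]; identical sets score 1.0.
        return len(a & b) / len(a | b) if a | b else 1.0

    def cluster_by_behavior(samples, threshold=0.7):
        # samples: dict mapping sample hash -> set of behavioral events.
        clusters = []  # each cluster is a list of sample hashes
        for h, events in samples.items():
            for cluster in clusters:
                leader = samples[cluster[0]]
                if jaccard(events, leader) >= threshold:
                    cluster.append(h)
                    break
            else:
                clusters.append([h])
        return clusters

    reports = {
        "a1": {"write:run.exe", "reg:Run", "net:1.2.3.4"},
        "b2": {"write:run.exe", "reg:Run", "net:1.2.3.4", "mutex:xyz"},
        "c3": {"read:passwords.txt", "net:9.9.9.9"},
    }
    print(cluster_by_behavior(reports))  # [['a1', 'b2'], ['c3']]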

Kernel Protection against Just-In-Time Code Reuse

The abundance of memory corruption and disclosure vulnerabilities in kernel code necessitates the deployment of hardening techniques to prevent privilege escalation attacks. As stricter memory isolation mechanisms between the kernel and user space become commonplace, attackers increasingly rely on code reuse techniques to exploit kernel vulnerabilities. Contrary to similar attacks in more restrictive settings, such as web browsers, in kernel exploitation non-privileged local adversaries have great flexibility in abusing memory disclosure vulnerabilities to dynamically discover, or infer, the location of code snippets in order to construct code-reuse payloads. Recent studies have shown that the coupling of code diversification with the enforcement of a "read XOR execute" (R^X) memory safety policy is an effective defense against the exploitation of userland software, but so far this approach has not been applied to the protection of the kernel itself. In this paper, we fill this gap by presenting kR^X: a kernel hardening scheme based on execute-only memory and code diversification. We study a previously unexplored point in the design space, where a hypervisor or a super-privileged component is not required. Implemented mostly as a set of GCC plugins, kR^X is readily applicable to x86 Linux kernels (both 32- and 64-bit) and can benefit from hardware support (segmentation on x86, MPX on x86-64) to optimize performance. In full protection mode, kR^X incurs a low runtime overhead of 4.04%, which drops to 2.32% when MPX is available, and 1.32% when memory segmentation is in use.
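
The R^X policy forbids any memory region that is simultaneously readable and executable, closing off direct code disclosure. A minimal userland illustration of checking (not enforcing) that invariant on Linux, by scanning a process's own mappings; this is our sketch of the policy, not the paper's in-kernel mechanism:

    # Scan this process's memory mappings and flag any region that is
    # both readable and executable, i.e., a "read XOR execute" violation.
    # Linux-specific: relies on the /proc/self/maps format.

    def rx_violations(maps_path="/proc/self/maps"):
        violations = []
        with open(maps_path) as maps:
            for line in maps:
                addr, perms = line.split()[:2]
                # perms looks like "r-xp": read, write, exec, private/shared
                if "r" in perms and "x" in perms:
                    violations.append((addr, perms))
        return violations

    if __name__ == "__main__":
        # On a stock system the binary's and shared libraries' code
        # segments are mapped r-x, so violations will be reported; under
        # a kR^X-style regime those would be execute-only instead.
        for addr, perms in rx_violations():
            print(f"R^X violation: {addr} {perms}")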

KIST: Kernel-Informed Socket Transport for Tor

Tor's growing popularity and user diversity have resulted in network performance problems that are not well understood, even though performance is understood to be a significant factor in Tor's security. A large body of work has attempted to solve performance problems without a complete understanding of where congestion occurs in Tor. In this paper, we first study congestion in Tor at individual relays as well as along the entire end-to-end Tor path, and find that congestion occurs almost exclusively in egress kernel socket buffers. We then analyze Tor's socket interactions and discover two major contributors to Tor's congestion: Tor writes sockets sequentially, and Tor writes as much as possible to each socket. To improve Tor's performance, we design, implement, and test KIST: a new socket management algorithm that uses real-time kernel information to dynamically compute the amount to write to each socket while considering all circuits of all writable sockets when scheduling cells. We find that, in the median, KIST reduces circuit congestion by over 30 percent, reduces network latency by 18 percent, and increases network throughput by nearly 10 percent. We also find that client and relay performance with KIST improves as network load and packet loss rates increase. We analyze the security of KIST and find an acceptable performance and security trade-off, as it does not significantly affect the outcome of well-known latency and throughput attacks. KIST has been merged into Tor version 0.3.2.1-alpha and is configured as the default socket scheduling algorithm.
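
The "real-time kernel information" KIST consults can be read on Linux via the TCP_INFO socket option, which exposes the congestion window and the amount of unacknowledged data. A minimal sketch of a kernel-informed per-socket write limit; the struct offsets (snd_mss at 16, unacked at 24, snd_cwnd at 80 in Linux's struct tcp_info) are our assumptions for common kernels, and the formula is a simplification of the scheduling the abstract describes:

    import socket
    import struct

    def tcp_write_limit(sock):
        # Query the kernel's TCP state for this socket (Linux only).
        info = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
        snd_mss, = struct.unpack_from("I", info, 16)   # assumed offset
        unacked, = struct.unpack_from("I", info, 24)   # assumed offset
        snd_cwnd, = struct.unpack_from("I", info, 80)  # assumed offset
        # Bytes the congestion window still has room for right now.
        return max(0, (snd_cwnd - unacked) * snd_mss)

    def kist_style_schedule(writable_socks, pending_bytes):
        # Write at most the kernel-permitted amount to each writable
        # socket. pending_bytes: dict socket -> queued payload, an
        # illustrative stand-in for Tor's per-circuit cell queues.
        for s in writable_socks:
            limit = tcp_write_limit(s)
            if limit:
                s.send(pending_bytes[s][:limit])

Capping each write at roughly (cwnd - unacked) * mss keeps data queued in the application, where the scheduler can still reprioritize circuits, instead of letting it pile up in egress kernel socket buffers, which is where the paper finds congestion occurs.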

