We present a novel scheme called Decentralized Attestation Scheme for Device Swarms (DADS), which is, to the best of our knowledge, the first to accomplish decentralized attestation in device swarms. Device swarms are smart, mobile, and interconnected devices that operate in large numbers and are likely to be part of emerging applications in Cyber-Physical Systems (CPS) and the Industrial Internet of Things (IIoT). Swarm devices process and exchange safety-, privacy-, and mission-critical information. Thus, it is important to have a code verification technique that scales to device swarms and establishes trust among collaborating devices. DADS has several advantages over current state-of-the-art swarm attestation techniques: it is decentralized, has no single point of failure, and can handle topologies that change after nodes are compromised. DADS ensures system resilience to node compromise or failure while guaranteeing that only devices executing genuine code remain part of the group. We measure communication, computation, memory, and energy costs using the TrustLite embedded systems architecture in the OMNeT++ simulation environment. We show that the proposed approach significantly reduces communication cost and is efficient in terms of computation, memory, and energy requirements. We also analyze security and show that DADS is effective and robust against various attacks.
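The abstract does not detail DADS's message format, but neighbor-to-neighbor attestation of this kind is commonly built on a fresh challenge and a MAC over a software measurement. The Python sketch below illustrates only that building block; the function names, the pairwise key, and the HMAC-SHA-256 choice are illustrative assumptions, not DADS's actual protocol.

```python
import hmac, hashlib, os

def measure_firmware(firmware: bytes) -> bytes:
    """Software measurement: a hash of the device's code region."""
    return hashlib.sha256(firmware).digest()

def attest(firmware: bytes, key: bytes, nonce: bytes) -> bytes:
    """Prover side: MAC the fresh nonce together with the code measurement."""
    return hmac.new(key, nonce + measure_firmware(firmware), hashlib.sha256).digest()

def verify_neighbor(report: bytes, key: bytes, nonce: bytes, reference: bytes) -> bool:
    """Verifier side: recompute the expected report from the known-good measurement."""
    expected = hmac.new(key, nonce + reference, hashlib.sha256).digest()
    return hmac.compare_digest(report, expected)

# Example: neighbor B attests to neighbor A.
good_fw = b"genuine firmware image"
key = os.urandom(32)                      # pairwise key shared by A and B (assumption)
nonce = os.urandom(16)                    # A's fresh challenge
report = attest(good_fw, key, nonce)      # B's attestation report
assert verify_neighbor(report, key, nonce, measure_firmware(good_fw))
```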
Images perturbed subtly so that neural networks misclassify them, called adversarial examples, have emerged as a technically deep challenge and an important concern for several application domains. Most research on adversarial examples takes as its only constraint that the perturbed images are similar to the originals. However, real-world applications of these ideas often require the examples to satisfy additional constraints, which are typically enforced through custom modification of the perturbation process. In this paper, we propose adversarial generative nets (AGNs), a general methodology for training a generator neural network to emit adversarial examples that satisfy desired objectives. We demonstrate the ability of AGNs to accommodate a wide array of objectives, including imprecise ones that are difficult to model, in two application domains. In particular, we demonstrate physical adversarial examples (eyeglass frames designed to fool face recognition) with better robustness, inconspicuousness, and scalability than previous approaches, as well as a new attack that fools a handwritten-digit classifier.
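As a rough illustration of the AGN idea (a generator trained against a fixed target classifier, with a discriminator encouraging inconspicuous perturbations), here is a minimal PyTorch-style sketch of one generator update. The network sizes, loss weighting, and clamping are assumptions for illustration, not the paper's architecture, and the discriminator's own update step is omitted.

```python
import torch
import torch.nn as nn

# Stand-in modules (sizes are assumptions, not the paper's models).
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 3 * 32 * 32), nn.Tanh())
D = nn.Sequential(nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 1))
f = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stands in for the fixed, pretrained classifier under attack

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()

def agn_generator_step(x, target_class, kappa=1.0):
    """One generator update: be misclassified as target_class AND look realistic to D.
    Only G's parameters are updated here; f and D stay fixed for this step."""
    z = torch.randn(x.size(0), 100)
    delta = G(z).view_as(x)                       # candidate perturbation
    x_adv = torch.clamp(x + delta, 0.0, 1.0)      # perturbed input stays a valid image
    loss_fool = ce(f(x_adv), target_class)        # push the classifier toward the target class
    loss_real = bce(D(delta.flatten(1)), torch.ones(x.size(0), 1))  # fool the discriminator
    loss = loss_fool + kappa * loss_real
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()

# Toy usage with random data.
x = torch.rand(8, 3, 32, 32)
y_target = torch.zeros(8, dtype=torch.long)
agn_generator_step(x, y_target)
```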
Location-based Services (LBSs) provide valuable and convenient features for mobile users. However, the location and other information disclosed through each query to the LBS erode user privacy. This is a concern especially because LBS providers can be honest-but-curious: collecting queries, tracking users' whereabouts, and inferring sensitive user data. This has motivated both centralized and decentralized location privacy protection schemes for LBSs, which anonymize and obfuscate LBS queries so that they do not disclose exact information while still obtaining useful responses. Decentralized schemes overcome the disadvantages of centralized schemes, eliminating anonymizers and enhancing users' control over sensitive information. However, an insecure decentralized system could create serious risks beyond private information leakage. Moreover, attacking an improperly designed decentralized LBS privacy protection scheme could be an effective and low-cost step toward breaching user privacy. We address exactly this problem by proposing security enhancements for mobile data sharing systems. We protect user privacy while preserving accountability of user activities, leveraging pseudonymous authentication with mainstream cryptography. We show that our scheme can be deployed on off-the-shelf devices, with experimental results on an automotive testbed.
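As one way to picture the pseudonymous-authentication building block mentioned above, the sketch below signs an LBS query under a short-lived pseudonym key using the Python `cryptography` library. The query format is made up, and pseudonym certificate issuance by a CA (which is what preserves accountability) is omitted.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Each user holds several short-lived pseudonym key pairs; a query is signed
# under the currently active pseudonym, so peers can verify origin and
# integrity without learning the user's long-term identity.
pseudonym_key = ec.generate_private_key(ec.SECP256R1())   # one short-lived pseudonym

query = b"POI-request: category=fuel; region=obfuscated-cell-17"  # illustrative query
signature = pseudonym_key.sign(query, ec.ECDSA(hashes.SHA256()))

# A peer (or the LBS) verifies with the pseudonym's public key from its certificate.
try:
    pseudonym_key.public_key().verify(signature, query, ec.ECDSA(hashes.SHA256()))
    print("query accepted: valid pseudonymous signature")
except InvalidSignature:
    print("query rejected")
```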
The quantity of personal data that is collected, stored, and subsequently processed continues to grow rapidly. Given its sensitivity, ensuring privacy protection has become a necessary component of database management. To enhance protection, a number of mechanisms have been developed, such as audit logging and alert triggers, which notify administrators about suspicious activities for investigation. However, this approach is limited. First, the volume of alerts is often substantially greater than organizations' auditing capacity. Second, strategic attackers can attempt to disguise their actions or carefully choose targets, thus hiding illicit activities. In this paper, we introduce an auditing approach that accounts for adversarial behavior by 1) prioritizing the order in which types of alerts are investigated and 2) providing an upper bound on the resources to allocate to each type. Specifically, we model the interaction between a database auditor and attackers as a Stackelberg game. We show that even a highly constrained version of this problem is NP-hard. We then introduce a method that combines linear programming, column generation, and heuristic search to derive an auditing policy. On synthetic data, we perform an extensive evaluation of how closely our solution approximates the optimal one. Two real datasets, 1) 1.5 months of audit logs from Vanderbilt University Medical Center and 2) a publicly available credit card application dataset, are used to test policy-search performance. The findings illustrate that our methods produce high-quality mixed strategies and that our general approach significantly outperforms non-game-theoretic baselines.
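To make the Stackelberg setup concrete, the toy sketch below solves a small leader-follower auditing game with the standard multiple-LPs approach via `scipy.optimize.linprog`: the auditor commits to coverage probabilities over alert types under a budget, and the attacker best-responds with a single alert type. This is a simplification for illustration; the payoff numbers are invented, and the sketch does not include the paper's column-generation or heuristic-search components.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative payoffs per alert type (all numbers made up).
gain = np.array([10.0, 6.0, 4.0])      # attacker's gain if the attack goes unaudited
penalty = np.array([5.0, 5.0, 5.0])    # attacker's loss if the attack is audited
loss = np.array([8.0, 5.0, 3.0])       # defender's loss if the attack goes unaudited
cost = np.array([1.0, 1.0, 1.0])       # audit effort per unit of coverage
budget = 1.5
n = len(gain)

best = None
for t in range(n):                     # assume alert type t is the attacker's best response
    c_obj = np.zeros(n)
    c_obj[t] = -loss[t]                # maximize loss[t] * c_t  (minimize the negative)
    A_ub, b_ub = [cost], [budget]      # audit-budget constraint
    for j in range(n):                 # incentive constraints: t beats every other type j
        if j == t:
            continue
        row = np.zeros(n)
        row[t] = gain[t] + penalty[t]
        row[j] = -(gain[j] + penalty[j])
        A_ub.append(row)
        b_ub.append(gain[t] - gain[j])
    res = linprog(c_obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=[(0, 1)] * n)
    if res.success:
        d_util = -(1 - res.x[t]) * loss[t]          # defender's utility if t is attacked
        if best is None or d_util > best[0]:
            best = (d_util, t, res.x)

print("defender utility %.2f, attacker targets type %d, coverage %s"
      % (best[0], best[1], np.round(best[2], 2)))
```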
Trust in a microelectronics-based system can be characterized as the level of confidence that the system is free of subversive alterations made during development, or that the development process has not been manipulated by a malicious adversary. Trust in systems has become an increasing concern over the past decade. This paper presents a novel game-theoretic framework, called GPLADD, for analyzing and quantifying a system's trustworthiness at the end of the development process through analysis of the risk of development-time manipulation. GPLADD represents attacks and the attacker-defender contest over time. It treats time as an explicit constraint and allows the informational asymmetries between attacker and defender to be incorporated into the analysis. GPLADD includes an explicit representation of attack steps via multi-step attack graphs, attacker and defender strategies, and player actions at different times. GPLADD quantifies attack success probability over time, as well as attacker and defender costs, based on their capabilities and strategies. This ability to quantify different attacks provides an input for evaluating trust in the development process. We demonstrate GPLADD on an example attack and its variants. We develop a method for representing success probability for arbitrary attacks and derive an explicit analytic characterization of success probability for a specific attack. We present a numerical Monte Carlo study of a small set of attacks, quantify attack success probabilities and attacker and defender costs, and illustrate the options the defender has for limiting attack success and improving trust in the development process.
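In the spirit of GPLADD's Monte Carlo quantification of attack success over time, the sketch below simulates a chain of attack steps racing a per-time-unit detection probability within a fixed development window. The step probabilities, detection rate, and deadline are illustrative assumptions, not values from the paper.

```python
import random

def attack_succeeds(p_steps, p_detect, deadline, rng):
    """One simulated development window: each attempt of a step takes one time
    unit and succeeds with probability p; the defender's checks catch the
    intrusion each time unit with probability p_detect."""
    t = 0
    for p in p_steps:                      # attack steps must succeed in order
        while True:
            t += 1
            if t > deadline:               # development window closed before completion
                return False
            if rng.random() < p_detect:    # defender detects and stops the attack
                return False
            if rng.random() < p:           # this attack step succeeded
                break
    return True

def success_probability(p_steps, p_detect, deadline, trials=100_000, seed=0):
    rng = random.Random(seed)
    wins = sum(attack_succeeds(p_steps, p_detect, deadline, rng) for _ in range(trials))
    return wins / trials

# e.g. a three-step insertion attack against a 40-time-unit development window
print(success_probability([0.6, 0.3, 0.5], p_detect=0.02, deadline=40))
```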