The co-evolution of program attacks and defenses has driven the development of program anomaly detection methods, yet emerging categories of program attacks, e.g., non-control-data attacks, defeat state-of-the-art detection mechanisms such as pushdown-automaton-based anomaly detection. This paper points out the deficiencies of existing program anomaly detection models under these new attacks and presents an anomaly detection model based on context-sensitive grammar verification. The key feature of the proposed model is reasoning about correlations among arbitrary events occurring in long program traces. It relaxes existing correlation analysis between events at a stack snapshot, e.g., paired call and ret, to correlation analysis among events that occurred at any point during the execution. The proposed method leverages specialized machine learning techniques to probe the boundaries of normal program behavior in a vast high-dimensional detection space. Its two-stage modeling/detection design analyzes event correlations at both the binary and the quantitative level. Our prototype successfully detects all reproduced real-world attacks against sshd, libpcre, and sendmail. The detection procedure incurs 0.1-1.3 ms of overhead to profile and analyze a single behavior instance consisting of tens of thousands of function call or system call events.
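To illustrate the binary level of such correlation analysis, here is a deliberately simplified sketch (not the paper's actual model): it records which pairs of events have ever co-occurred in a normal trace and flags any trace containing an unseen pairing. The event names and traces are hypothetical.

```python
from itertools import combinations

def cooccurrence_pairs(trace):
    """Binary co-occurrence: unordered pairs of distinct events in one trace."""
    return set(combinations(sorted(set(trace)), 2))

def train(normal_traces):
    """Collect every event pair seen co-occurring in any normal trace."""
    allowed = set()
    for t in normal_traces:
        allowed |= cooccurrence_pairs(t)
    return allowed

def is_anomalous(trace, allowed):
    """Flag a trace that contains an event pair never seen in training."""
    return not cooccurrence_pairs(trace) <= allowed

# Hypothetical call traces for illustration only.
normal = [["open", "read", "close"], ["open", "write", "close"]]
model = train(normal)
print(is_anomalous(["open", "read", "close"], model))   # False
print(is_anomalous(["open", "exec", "close"], model))   # True
```

The quantitative level described in the abstract would additionally reason about how often correlated events occur, which a set-membership check like this cannot capture.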
Online Social Networks (OSNs) offer convenient ways to cheaply reach potentially large audiences worldwide. As the number of likes of a page has become the de-facto measure of its popularity and profitability, an underground market of services that artificially inflate page likes ("like farms") has emerged alongside Facebook's official targeted advertising platform. However, there is very little work that systematically analyzes promotion methods for Facebook pages. This paper presents a honeypot-based comparative measurement study of page likes. First, we analyze likes based on demographic, temporal, and social characteristics, and find that some farms appear to be operated by bots and do not really try to hide the nature of their operations, while others follow a stealthier approach, mimicking regular users' behavior. Next, we examine the fraud-detection algorithms currently deployed by Facebook and show that they are ineffective at detecting stealthy farms, which spread likes over longer timespans and like popular pages to mimic regular users. To overcome these limitations, we investigate the feasibility of timeline-based detection of like-farm accounts, characterizing the content generated by Facebook accounts on their timelines as an indicator of genuine versus fake social activity. We analyze a wide range of features extracted from timeline posts, categorized as lexical and non-lexical. We find that, compared to normal users, like-farm accounts tend to re-share content, use fewer words and a poorer vocabulary, and generate duplicate comments and likes. Using both lexical and non-lexical features, we build a classifier to detect like-farm accounts that achieves higher than 99% precision and 93% recall.
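A toy sketch of how lexical timeline features of the kind described (word counts, vocabulary richness, duplicated content) might be extracted and combined; the feature names and thresholds are hypothetical, and the paper's actual detector is a trained classifier rather than hand-set rules:

```python
def timeline_features(posts):
    """Hypothetical lexical features computed from a list of post strings."""
    words = [w for p in posts for w in p.lower().split()]
    n_words = max(len(words), 1)
    n_posts = max(len(posts), 1)
    return {
        "avg_words_per_post": len(words) / n_posts,
        "vocab_richness": len(set(words)) / n_words,   # type-token ratio
        "duplicate_ratio": 1 - len(set(posts)) / n_posts,
    }

def is_like_farm(posts, min_words=5.0, min_richness=0.5, max_dup=0.2):
    """Toy rule-based stand-in for a trained classifier (thresholds invented)."""
    f = timeline_features(posts)
    return (f["avg_words_per_post"] < min_words
            or f["vocab_richness"] < min_richness
            or f["duplicate_ratio"] > max_dup)
```

In the study, such features would feed a supervised learner trained on labelled honeypot data rather than fixed cut-offs.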
Attack graphs provide compact representations of the attack paths an attacker can follow to compromise network resources, derived by analysing network vulnerabilities and topology. These representations are a powerful tool for security risk assessment. Bayesian inference on attack graphs enables the estimation of the risk of compromise of the system's components given their vulnerabilities and interconnections, and accounts for multi-step attacks spreading through the system. Whilst static analysis considers the risk posture at rest, dynamic analysis also accounts for evidence of compromise, e.g. from SIEM software or forensic investigation. However, in this context, exact Bayesian inference techniques do not scale well. In this paper we show how Loopy Belief Propagation, an approximate inference technique, can be applied to attack graphs, and that it scales linearly in the number of nodes for both static and dynamic analysis, making such analyses viable for larger networks. We experiment with different topologies and network clusterings on synthetic Bayesian attack graphs with thousands of nodes to show that the algorithm's accuracy is acceptable and that it converges to a stable solution. We compare sequential and parallel versions of Loopy Belief Propagation with exact inference techniques for both static and dynamic analysis, demonstrating the advantages of approximate inference in scaling to larger attack graphs.
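A minimal sum-product Loopy Belief Propagation sketch conveys the idea; this version works on a pairwise model with binary variables rather than the directed conditional probability tables of a real Bayesian attack graph, and all node names and potentials below are illustrative:

```python
def loopy_bp(phi, psi, iters=50):
    """Sum-product Loopy Belief Propagation on a pairwise binary model.

    phi: {node: [pot_0, pot_1]}   unary potentials
    psi: {(i, j): 2x2 table}      pairwise potentials (undirected edges)
    Returns approximate marginal beliefs {node: [P(0), P(1)]}.
    """
    nbrs = {i: [] for i in phi}
    for i, j in psi:
        nbrs[i].append(j)
        nbrs[j].append(i)

    def pot(i, j, xi, xj):
        # Look up the pairwise potential regardless of edge orientation.
        return psi[(i, j)][xi][xj] if (i, j) in psi else psi[(j, i)][xj][xi]

    # Messages m[(i, j)][x_j], initialised uniform.
    m = {}
    for i, j in psi:
        m[(i, j)] = [0.5, 0.5]
        m[(j, i)] = [0.5, 0.5]

    for _ in range(iters):
        new = {}
        for (i, j) in m:
            msg = []
            for xj in (0, 1):
                total = 0.0
                for xi in (0, 1):
                    incoming = 1.0
                    for k in nbrs[i]:
                        if k != j:
                            incoming *= m[(k, i)][xi]
                    total += phi[i][xi] * pot(i, j, xi, xj) * incoming
                msg.append(total)
            z = sum(msg)
            new[(i, j)] = [v / z for v in msg]
        m = new  # synchronous ("parallel") update schedule

    beliefs = {}
    for i in phi:
        b = [phi[i][0], phi[i][1]]
        for k in nbrs[i]:
            b = [b[x] * m[(k, i)][x] for x in (0, 1)]
        z = sum(b)
        beliefs[i] = [v / z for v in b]
    return beliefs

# Toy three-node network with a loop: neighbouring hosts prefer to share
# the same compromise state (0 = safe, 1 = compromised).
phi = {"web": [0.3, 0.7], "db": [0.9, 0.1], "host": [0.5, 0.5]}
same = [[2.0, 1.0], [1.0, 2.0]]
psi = {("web", "db"): same, ("db", "host"): same, ("web", "host"): same}
beliefs = loopy_bp(phi, psi)
```

Each message update touches only a node's incident edges, which is what gives LBP its per-iteration cost linear in graph size; a sequential schedule would update messages one at a time in place instead of all at once.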