New England Security Day
Cong Chen (Worcester Polytechnic Institute)
Thomas Eisenbarth (Worcester Polytechnic Institute)
Since the introduction of Differential Power Analysis, numerous attacks have been proposed and performed on a wide range of ciphers and platforms. In the meantime, two categories of countermeasures, hiding and masking, have been developed to thwart this serious threat. To fairly assess how effectively these approaches resist side-channel leakage, several leakage evaluation methodologies have been studied in recent years.
First, we present the first Differential Power Analysis of an FPGA implementation of the McEliece cryptosystem. The presented cryptanalysis succeeds in recovering the complete secret key after a few observed decryptions. It combines a differential leakage analysis during the syndrome computation with an algebraic step that exploits the relation between the public and private key. We then apply a hybrid masking scheme (Threshold Implementation combined with arithmetic masking) to counteract this attack.
We also introduce a leakage detection method, the paired t-test, to fairly evaluate the leakage resistance of countermeasures. The paired t-test improves on the standard t-test (TVLA) procedure by removing the effect of environmental noise in side-channel leakage measurements. Higher-order leakage detection is further improved with a moving-average method. We compare the proposed test with the standard t-test on both synthetic data and physical measurements. Our results show that the proposed tests are robust to environmental noise.
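As a rough illustration of the idea (toy data and thresholds, not the authors' evaluation setup), the sketch below contrasts a paired t-statistic with the standard unpaired Welch statistic used by TVLA. By differencing each fixed/random measurement pair before testing, slowly varying environmental drift common to both measurements cancels out:

```python
import math
from statistics import mean, stdev

def paired_t_statistic(fixed, rand):
    """Paired t-test: difference each fixed/random pair first, so drift
    common to both measurements of a pair cancels out."""
    diffs = [f - r for f, r in zip(fixed, rand)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

def welch_t_statistic(a, b):
    """Standard (unpaired) Welch's t-test, as used by plain TVLA."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / math.sqrt(va / len(a) + vb / len(b))

# Synthetic traces: a slow sinusoidal drift affects both groups equally.
# The paired test removes it; the unpaired test sees it as extra variance.
drift = [0.5 * math.sin(i / 10.0) for i in range(100)]
fixed_grp  = [d + 1.0 + 0.01 * (i % 3) for i, d in enumerate(drift)]
random_grp = [d + 0.9 + 0.01 * ((i + 1) % 3) for i, d in enumerate(drift)]
```

On this toy data the paired statistic far exceeds the usual |t| > 4.5 leakage threshold, while the drift inflates the unpaired statistic's denominator and masks the same leak.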
Berk Gulmezoglu (Worcester Polytechnic Institute)
Mehmet Sinan Inci (Worcester Polytechnic Institute)
Gorka Irazoqui (Worcester Polytechnic Institute)
Thomas Eisenbarth (Worcester Polytechnic Institute)
Berk Sunar (Worcester Polytechnic Institute)
Co-location detection is a growing problem in the public cloud. Recent cross-VM attacks on the cloud have proven that the theft of sensitive information such as RSA and AES keys is possible. In this work we show how to achieve both random and targeted co-location in popular clouds such as Amazon EC2, Google Compute Engine, and Microsoft Azure. In addition, we show that it is trivial to extract RSA and AES keys via the last-level cache (LLC) on Amazon EC2. Noise-reduction and key-search algorithms are also employed to improve the reliability of the attack scenarios.
John Holowczak (University of Massachusetts Amherst)
Hadi Zolfaghari (University of Massachusetts Amherst)
Amir Houmansadr (University of Massachusetts Amherst)
The cached Internet content served by content delivery networks (CDNs) comprises a large fraction of today’s Internet traffic, yet there has been little study of how real-world censors deal with blocking forbidden CDN-hosted Internet content. We therefore design a client-side circumvention system, CacheBrowser, that leverages the censors’ difficulties in blocking CDN content. We implement CacheBrowser and use it to unblock CDN-hosted content in China with a download latency significantly smaller than that of traditional proxy-based circumvention systems like Tor. CacheBrowser’s superior quality of service is thanks to its publisher-centric approach, which retrieves blocked content directly from content publishers with no use of third-party proxies.
Milad NasrEsfahani (University of Massachusetts Amherst)
Amir Houmansadr (University of Massachusetts Amherst)
In this work we take a game-theoretic approach to study the problem of bridge distribution in Tor. We model the parties involved. Our goal is to derive, under specific threat models and assumptions about the players, the optimal strategies for bridge distribution, and to find theoretical bounds on the success of bridge distribution in specific scenarios and threat models.
Our study goes beyond Tor. Many circumvention systems need to be accessible to many users, so they must share some secret information with their users, and that information can in turn be used to block such services. Our game-theoretic approach can be extended to these other scenarios; in general, we target the problem of insider attacks.
Ethan Heilman (Boston University)
Foteini Baldimtsi (Boston University)
Sharon Goldberg (Boston University)
Although Bitcoin is often perceived to be an anonymous currency, research has shown that a user’s Bitcoin transactions can be linked to compromise the user’s anonymity. We present solutions to the anonymity problem for transactions both on Bitcoin’s blockchain and off the blockchain (in so-called micropayment channel networks). We use an untrusted third party to issue anonymous vouchers which users redeem for bitcoin. Blind signatures and Bitcoin transaction contracts (aka smart contracts) ensure anonymity and fairness during the bitcoin-voucher exchange. Our schemes are practical, secure, and anonymous.
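To illustrate the blinding step, here is a minimal Chaum-style RSA blind signature: the issuer signs a voucher without learning which user it belongs to. This is the generic textbook construction with insecure toy parameters, not the exact scheme used in the work above:

```python
# Toy RSA blind signature (requires Python 3.8+ for pow(x, -1, m)).
# Parameters are tiny and insecure; real deployments use full-size keys.
p, q = 61, 53
n = p * q                       # public modulus
e = 17                          # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)             # issuer's private exponent

def blind(m, r):
    """User blinds voucher m with random factor r (gcd(r, n) = 1)."""
    return (m * pow(r, e, n)) % n

def sign(m_blinded):
    """Issuer signs the blinded voucher; it never sees m itself."""
    return pow(m_blinded, d, n)

def unblind(s_blinded, r):
    """User strips the blinding factor to obtain a signature on m."""
    return (s_blinded * pow(r, -1, n)) % n

def verify(m, s):
    return pow(s, e, n) == m % n

m, r = 1234, 7                  # voucher and blinding factor
s = unblind(sign(blind(m, r)), r)
```

The unblinded value equals an ordinary RSA signature on m, so anyone can verify the voucher, yet the issuer cannot link it to the blinded message it actually signed.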
Sevtap Duman (Northeastern University)
Kubra Kalkan Cakmakci (Northeastern University)
Manuel Egele (Northeastern University)
William Robertson (Northeastern University)
Engin Kirda (Northeastern University)
Spearphishing is a prominent targeted attack vector in today’s Internet. By impersonating trusted email senders through carefully crafted messages and spoofed metadata, adversaries can trick victims into launching attachments containing malicious code or into clicking on malicious links that grant attackers a foothold into otherwise well-protected networks. Spearphishing is effective because it is fundamentally difficult for users to distinguish legitimate emails from spearphishing emails without additional defensive mechanisms. However, such mechanisms, e.g., cryptographic signatures, have found limited use in practice due to their perceived difficulty of use for normal users.
In this paper, we present a novel automated approach to defending users against spearphishing attacks. The approach first builds probabilistic models of both email metadata and stylometric features of email content. Then, subsequent emails are compared to these models to detect characteristic indicators of spearphishing attacks. Several instantiations of this approach are possible, including performing model learning and evaluation solely on the receiving side, or senders publishing models that can be checked remotely by the receiver. Our evaluation of a real data set drawn from 20 email users demonstrates that the approach effectively discriminates spearphishing attacks from legitimate email while providing significant ease-of-use benefits over traditional defenses.
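A heavily simplified sketch of the receiving-side idea follows: build a per-sender profile over a couple of toy stylometric features and flag emails that deviate strongly. The real system models many more content and metadata features probabilistically; the features and threshold here are purely illustrative:

```python
from statistics import mean, stdev

def features(body):
    """Two toy stylometric features of an email body."""
    words = body.split()
    return (mean(len(w) for w in words),            # average word length
            body.count('!') / max(len(words), 1))   # exclamation rate

def build_profile(past_emails):
    """Per-sender profile: mean and spread of each feature over history."""
    cols = list(zip(*[features(b) for b in past_emails]))
    return [(mean(c), stdev(c) or 1e-6) for c in cols]

def is_suspicious(profile, body, threshold=3.0):
    """Flag an email whose features deviate strongly (z-score) from the
    sender's learned profile."""
    return any(abs(f - mu) / sd > threshold
               for f, (mu, sd) in zip(features(body), profile))

history = [
    "please find the quarterly report attached let me know",
    "attached are the meeting notes from tuesday thanks",
    "sending over the updated budget figures for review",
]
profile = build_profile(history)
```

An email written in the sender's usual style passes, while one with an atypical style (e.g., unusual word lengths and heavy punctuation, as in many phishing lures) is flagged.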
Ramin Soltani (University of Massachusetts Amherst)
Consider a channel where authorized transmitter Jack sends packets to authorized receiver Steve according to a renewal process with rate λ packets per second for a time period T. Suppose that covert transmitter Alice wishes to communicate information to covert receiver Bob on the same channel without being detected by a watchful adversary Willie. We assume that Willie can look at packet contents but that Alice can communicate across a G/M/1 queue with service rate μ > λ to Bob by altering the timings of the packets going from Jack to Steve. First, Alice builds a codebook, with each codeword consisting of a sequence of packet timings to be employed for conveying the information associated with that codeword. However, to successfully employ this codebook, Alice must always have a packet to send at the appropriate time. We propose a construction where Alice covertly slows down the packet stream so as to buffer packets to use during a succeeding codeword transmission phase. Using this approach, Alice can covertly and reliably transmit O(T) covert bits to Bob in time period T.
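As a toy illustration of signaling through packet timings only, the sketch below encodes bits by slightly stretching inter-packet gaps and decodes by thresholding. This deliberately ignores the paper's actual construction: there, codeword timings are drawn to match the channel's renewal process and constrained by the G/M/1 queue so that Willie cannot detect the transmission; all parameter names here are hypothetical:

```python
import random

SLOT = 0.01        # hypothetical timing granularity, in seconds

def encode(bits, base_gap=0.1):
    """Covert sender: bit 1 stretches an inter-packet gap by one SLOT,
    bit 0 leaves it near the base; small jitter models channel noise."""
    return [base_gap + (SLOT if b else 0.0)
            + random.uniform(-SLOT / 4, SLOT / 4)
            for b in bits]

def decode(gaps, base_gap=0.1):
    """Covert receiver: threshold each gap halfway between the levels."""
    return [1 if g > base_gap + SLOT / 2 else 0 for g in gaps]

msg = [1, 0, 1, 1, 0, 0, 1]
gaps = encode(msg)
```

Because the jitter is bounded well below half a SLOT, the threshold decoder recovers the bit sequence exactly in this toy setting.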
Hannah Quay-de la Vallee (Brown University)
Paige Selby (Brown University)
Shriram Krishnamurthi (Brown University)
App marketplaces like the Google Play store and iOS’s App Store are many users’ primary tools for discovering and downloading apps, and thus have a substantial effect on which apps users install on their devices. Due to the amount of personal information on most mobile devices, these apps can pose a threat to user privacy. Most app platforms try to control this risk through some kind of permission system. Unfortunately, most major app marketplaces do not seem to incorporate apps’ permissions into their app ranking mechanisms, and do not provide users with tools to find more privacy-protecting apps, making it difficult for users to manage their privacy.
In this work, we build a prototype marketplace for Android that incorporates app privacy information as a first class element. We gather this privacy information from both automated tools and crowdsourcing. We use this information in our ranking mechanism to promote more privacy-respecting apps, and we display it to users in an easy-to-understand format so they can incorporate it into their installation decisions.
Ashwin Venkataraman (Courant Institute, NYU)
Srikanth Jagabathula (NYU Stern School of Business)
Lakshmi Subramanian (Courant Institute, NYU)
In this paper, we study the problem of aggregating noisy responses from crowd workers to infer the unknown true labels of binary tasks. Unlike most prior work, which has examined this problem under the probabilistic worker paradigm, we consider a much broader class of adversarial workers with no specific assumptions on their labeling strategy. Our key contribution is the design of a computationally efficient reputation algorithm to identify and filter out such adversarial workers in crowdsourcing systems, given only the labels provided by the workers. Our algorithm uses the concept of optimal semi-matchings in conjunction with worker penalties based on label disagreements to detect outlier worker labeling patterns. We prove that our algorithm can successfully identify low-reliability workers and workers adopting deterministic strategies, and that it is robust to manipulation by worst-case sophisticated adversaries who can adopt arbitrary labeling strategies to degrade the accuracy of the inferred task labels. In particular, we propose a label aggregation algorithm utilizing the computed worker reputations and show that it is optimal (up to a constant) in identifying the task labels in the presence of worst-case sophisticated adversaries. Finally, we show that our reputation algorithm can significantly improve the accuracy of existing label aggregation algorithms in real-world crowdsourcing datasets.
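The sketch below conveys the flavor of reputation-then-aggregation in a deliberately simplified form: it penalizes each worker by disagreement with the per-task majority and filters low-reputation workers before the final vote. The authors' actual algorithm instead uses optimal semi-matchings to assign penalties; all names and thresholds here are hypothetical:

```python
from collections import defaultdict

def majority(labels):
    """Majority vote over labels in {+1, -1} (ties broken toward +1)."""
    return 1 if sum(labels) >= 0 else -1

def reputations(worker_labels):
    """worker_labels: {worker: {task: label}} with labels in {+1, -1}.
    Reputation = 1 - (fraction of labels disagreeing with the majority)."""
    task_votes = defaultdict(list)
    for labels in worker_labels.values():
        for task, lab in labels.items():
            task_votes[task].append(lab)
    consensus = {t: majority(v) for t, v in task_votes.items()}
    return {w: 1.0 - sum(lab != consensus[t]
                         for t, lab in labels.items()) / len(labels)
            for w, labels in worker_labels.items()}

def aggregate(worker_labels, min_rep=0.5):
    """Re-vote using only workers whose reputation clears a threshold."""
    reps = reputations(worker_labels)
    task_votes = defaultdict(list)
    for w, labels in worker_labels.items():
        if reps[w] >= min_rep:
            for t, lab in labels.items():
                task_votes[t].append(lab)
    return {t: majority(v) for t, v in task_votes.items()}

workers = {
    "w1": {"t1": 1, "t2": 1, "t3": 1},
    "w2": {"t1": 1, "t2": 1, "t3": 1},
    "w3": {"t1": 1, "t2": -1, "t3": 1},   # noisy but mostly honest
    "w4": {"t1": -1, "t2": -1, "t3": -1}, # consistently adversarial
}
```

Here the consistently adversarial worker earns zero reputation and is excluded, so the aggregated labels match the honest majority on every task.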
Ling Xue (Keene State College)
Wei Lu (Keene State College)
Visualization techniques have been successfully applied to assist information security analysis and decision making in recent years. Although existing visualization techniques have generated a number of good ideas, they are far from complete, due to the large volume of network traffic data and a lack of flexibility and features. As a result, they still struggle to explore and convey network traffic information and are unable to highlight abnormal network behaviors. Furthermore, existing visualization techniques cannot detect botnets due to (1) the difficulty of identifying unknown applications, and (2) the complete absence of temporal changes when reconstructing network traffic patterns. In the early history of botnets, detecting and blocking traffic from a centralized botnet was an easy task, since the whole botnet could be defeated by blacklisting the centralized communication server. To prevent their intranet workstations from becoming viable bots, network administrators could simply block the appropriate outbound connections to the central botmaster. In response, more and more botnets are now evolving away from the centralized-communication approach and toward the more advanced strategy of distributed communication. Recent studies in 2013 have shown that the Tor network has been employed by botmasters to achieve stealthiness and untraceability, making such botnets more difficult to take down given the unknown applications and anonymous C&C servers provided by Tor hidden services. In this project we propose a new visualization approach, called iTraffic Player, that visualizes the change of network activities (patterns) over time, avoids the need to identify unknown applications, and allows interactive control using a media-player-style graphical user interface (GUI).
Preliminary experimental results are promising and show that iTraffic Player is able to detect some botnet attacks with acceptable accuracy under human interaction.
Jared Carlson (Veracode)
The LLVM compiler infrastructure provides a unique and possibly ideal intersection of developer tools, formal methods applications, and the insertion and application of security tools. The increasing number and importance of languages utilizing LLVM only reinforces the framework's significance. The machine-independent Intermediate Representation (IR) is open and well organized, and its support by a large community makes it fertile ground for research and development. We are investigating various means of leveraging LLVM and its IR for meaningful static and dynamic security analyses.
Liangxiao Xin (Boston University)
David Starobinski (Boston University)
Guevara Noubir (Northeastern University)
Hidden nodes can lead to serious channel congestion in Wi-Fi (IEEE 802.11) networks. Attackers can exploit this vulnerability to mount a global denial-of-service attack through an interference coupling phenomenon, whereby collisions induced by a hidden node lead other hidden nodes to retransmit and congest the channel. In this work, we demonstrate the feasibility of a remote and protocol-compliant interference coupling attack in Wi-Fi networks. Our results, supported by testbed experiments and NS-3 simulations, provide a feasible scenario for a local attack to propagate in space and time and cause a congestion collapse of the entire network. The results show that the retry limit and the load of each node play important roles in the success (and prevention) of interference coupling attacks.
Endadul Hoque (Northeastern University)
Omar Chowdhury (Purdue University)
Sze Yiu Chau (Purdue University)
Cristina Nita-Rotaru (Northeastern University)
Ninghui Li (Purdue University)
Network protocol implementations are expected to comply with their specifications. Noncompliance exhibited by an implementation can cause interoperability issues, inconsistent behavior, or performance degradation. Worse, some noncompliance can have security implications. Automatically detecting whether a protocol implementation is noncompliant with a property is a long-standing and challenging problem. Our solution relies on the observation that the prose specification of network protocols in documentation and standards (e.g., RFCs) describes protocol operations as finite state machines (FSMs), and implementations of protocols should closely follow the specified FSMs. We develop an automated framework that (1) extracts the implemented FSM of a protocol from the source code by leveraging symbolic execution and (2) determines whether the extracted FSM violates a given temporal property by using a symbolic model checker. We applied our framework to 5 protocol implementations taken from different TCP/IP network stacks to check them against properties obtained from their publicly available specifications and standards. We detected 10 noncompliance instances, some of which have security implications.
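To give a feel for checking an extracted FSM against a property, the toy below hand-writes a tiny TCP-like FSM and verifies one safety-style property by plain graph reachability. The actual framework extracts the FSM via symbolic execution and uses a symbolic model checker for general temporal properties; the states, events, and property here are illustrative only:

```python
from collections import deque

# Hypothetical extracted FSM: state -> {event: next_state}
fsm = {
    "CLOSED":      {"passive_open": "LISTEN", "active_open": "SYN_SENT"},
    "LISTEN":      {"recv_syn": "SYN_RCVD"},
    "SYN_SENT":    {"recv_syn_ack": "ESTABLISHED"},
    "SYN_RCVD":    {"recv_ack": "ESTABLISHED"},
    "ESTABLISHED": {"close": "CLOSED"},
}

def reachable(fsm, start, target, forbidden=()):
    """BFS reachability that refuses to pass through 'forbidden' states."""
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        if s == target:
            return True
        for nxt in fsm.get(s, {}).values():
            if nxt not in seen and nxt not in forbidden:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Safety property: ESTABLISHED must not be reachable from CLOSED without
# passing through a handshake state (SYN_SENT or SYN_RCVD).
compliant = not reachable(fsm, "CLOSED", "ESTABLISHED",
                          forbidden=("SYN_SENT", "SYN_RCVD"))
```

A noncompliant implementation would show up as an extra transition, e.g. a shortcut edge from LISTEN straight to ESTABLISHED, which the same check immediately exposes.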
Rishab Nithyanand (Stony Brook University)
Rachee Singh (Stony Brook University)
Shinyoung Cho (Stony Brook University)
Phillipa Gill (Stony Brook University)
Rob Johnson (Stony Brook University)
Traffic correlation attacks to de-anonymize Tor users are possible when an adversary is in a position to observe traffic entering and exiting the Tor network. In this talk, we focus on addressing the problem of traffic correlation attacks by network-level adversaries (Autonomous Systems (ASes)). We present a high-performance AS-aware Tor client, Cipollino, as a secure and practical solution.
Cipollino leverages public sources of empirical routing data and the latest developments in network-measurement research to defend against currently known types of network-level traffic correlation attacks, including attacks by active adversaries that may exploit BGP insecurities to de-anonymize Tor clients. Cipollino is able to reduce the number of vulnerable webpage loads to just 1.3% from vanilla Tor's 57%. In terms of performance, the Cipollino client matches the page-load times of the vanilla Tor client under aggressive configurations, and significantly improves on previous AS-aware Tor clients even in its most conservative configuration. Additionally, Cipollino retains security against relay-level adversaries. Finally, like previous AS-aware Tor clients, Cipollino is able to perform load-balancing to avoid overloading relays.
Emaad Ahmed Manzoor (Stony Brook University)
Sadegh Momeni (University of Illinois Chicago)
Venkat N. Venkatakrishnan (University of Illinois Chicago)
Leman Akoglu (Stony Brook University)
System events logged by various mechanisms naturally give rise to a stream of evolving graphs. Each timestamped event consists of a subject-target pair, where the subject, target and the nature of their interaction could be of multiple types. Events from each traced application appear as edges of a single evolving graph, and there may be many such applications executing at a time.
Our goal is to detect abnormal applications using the graphs that their executions induce. To achieve this goal, application graphs are clustered and those that do not fit well into any cluster are flagged as anomalies. However, this entire process must be performed in a streaming manner, with clustering and anomaly decisions made rapidly on the arrival of each edge, all the while consuming bounded memory.
This talk introduces a new way to cluster and detect anomalies in typed graphs originating from a stream of edges, in which new graphs emerge and existing graphs evolve as the stream progresses. A new graph representation and sketching technique is introduced that is amenable to constant-time updates in a streaming scenario. Experiments on system call graphs generated from benign and malicious application executions demonstrate an average anomaly-detection precision of over 90% while sustaining a throughput of over 10,000 edges per second, with memory consumption constrained to a few hundred megabytes.
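A minimal sketch of the constant-size, constant-time-update idea follows, using a SimHash-style signature over typed-edge "shingles". This is a simplified illustration of the general technique rather than the system's actual representation, and all type names are hypothetical:

```python
import hashlib

BITS = 64  # fixed signature width => bounded memory per graph

def _hash_bits(token):
    """Deterministic BITS-bit hash of a shingle token."""
    h = int(hashlib.md5(token.encode()).hexdigest(), 16)
    return [(h >> i) & 1 for i in range(BITS)]

class GraphSketch:
    """Constant-size sketch of an evolving typed graph: a SimHash-style
    signature over edge shingles, updated in O(BITS) per arriving edge."""
    def __init__(self):
        self.counts = [0] * BITS
    def add_edge(self, src_type, edge_type, dst_type):
        token = f"{src_type}|{edge_type}|{dst_type}"
        for i, b in enumerate(_hash_bits(token)):
            self.counts[i] += 1 if b else -1
    def signature(self):
        return [1 if c > 0 else 0 for c in self.counts]

def similarity(a, b):
    """Fraction of matching signature bits (1.0 = identical sketches);
    a clustering step would group graphs by this similarity and flag
    graphs far from every cluster as anomalies."""
    sa, sb = a.signature(), b.signature()
    return sum(x == y for x, y in zip(sa, sb)) / BITS

g1, g2, g3 = GraphSketch(), GraphSketch(), GraphSketch()
for g in (g1, g2):                      # two benign-like executions
    g.add_edge("process", "read", "file")
    g.add_edge("process", "connect", "socket")
g3.add_edge("process", "fork", "process")   # differently-behaving execution
g3.add_edge("process", "write", "file")
g3.add_edge("process", "exec", "file")
```

Sketches of graphs with the same edge mix collide, while a graph built from different system-call behavior lands far away in signature space, which is what makes streaming clustering and anomaly flagging feasible under bounded memory.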