New England Security Day
Spring 2016

Presentations

Robust Traceability from Trace Amounts

Cynthia Dwork (Microsoft Research)
Adam Smith (Penn State University)
Thomas Steinke (Harvard University)
Jonathan Ullman (Northeastern University)
Salil Vadhan (Harvard University)

The privacy risks inherent in the release of a large number of summary statistics were illustrated by Homer et al. (PLoS Genetics, 2008), who considered the case of 1-way marginals of SNP allele frequencies obtained in a genome-wide association study: Given a large number of minor allele frequencies from a case group of individuals diagnosed with a particular disease, together with the genomic data of a single target individual and statistics from a sizable reference dataset independently drawn from the same population, an attacker can determine with high confidence whether or not the target is in the case group.
In this work we describe and analyze a simple attack that succeeds even if the summary statistics are significantly distorted, whether by measurement error or by noise intentionally introduced to protect privacy. Our attack requires only that the vector of distorted summary statistics be close to the vector of true marginals in L1 norm. Moreover, the reference pool required by previous attacks can be replaced by a single sample drawn from the underlying population. The new attack is not specific to genomics or to binary data. Our results underscore the importance of rigorous approaches to controlling privacy risks.
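
The attack itself is considerably more refined, but its flavor can be seen in the toy sketch below (synthetic data; the inner-product scoring and all parameters are illustrative choices, not the paper's exact construction): a target's record is compared against the released noisy marginals using a single reference sample, and members of the case group score markedly higher than outsiders.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 100_000, 30                        # number of attributes (e.g., SNPs) and case-group size

p = rng.uniform(0.05, 0.95, size=d)       # population frequencies (unknown to the attacker)
case = rng.binomial(1, p, size=(n, d))    # the case group's binary data
noisy_stats = case.mean(axis=0) + rng.normal(0, 0.02, size=d)   # distorted 1-way marginals
reference = rng.binomial(1, p, size=d)    # a single fresh sample from the population

def score(target, stats, ref):
    """Inner-product test statistic: large values suggest membership in the case group."""
    return float(np.dot(stats, target - ref))

member = case[0]                          # an individual from the case group
outsider = rng.binomial(1, p, size=d)     # an independent individual from the population
print(f"member: {score(member, noisy_stats, reference):.0f}, "
      f"outsider: {score(outsider, noisy_stats, reference):.0f}")
```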

Exploring Privacy-Accuracy Tradeoffs using DPComp

Michael Hay (Colgate University)
Ashwin Machanavajjhala (Duke University)
Gerome Miklau (University of Massachusetts Amherst)
Yan Chen (Duke University)
Dan Zhang (University of Massachusetts Amherst)

Differential privacy has become the dominant standard in the research community for strong privacy protection. There has been a flood of research into query-answering algorithms that meet this standard, but these algorithms are becoming increasingly complex, which poses challenges both for researchers evaluating new technical approaches and for practitioners attempting to apply state-of-the-art privacy algorithms to real-world tasks. Deployment of these techniques has been slowed by an incomplete understanding of the cost in accuracy implied by the adoption of differential privacy.
In this talk I will summarize a set of recently proposed evaluation principles that support the sound evaluation of privacy algorithms, and I will highlight the results of a thorough empirical study carried out in accordance with these principles. I will also describe our vision for "DPComp", a publicly accessible, web-based system that allows users to interactively explore algorithm output in order to understand, both quantitatively and qualitatively, the error introduced by the algorithms. In addition, users can contribute new algorithms, new datasets, and even new error metrics, which are automatically incorporated into an evolving benchmark.
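
DPComp is an interactive system rather than a library, but the kind of privacy-accuracy measurement it supports can be illustrated with a minimal, self-contained sketch (not DPComp code): measuring the average per-bin error of the standard Laplace mechanism on a synthetic histogram as the privacy parameter epsilon varies.

```python
import numpy as np

rng = np.random.default_rng(1)
hist = rng.integers(0, 100, size=256).astype(float)       # a synthetic 256-bin histogram

def laplace_mechanism(counts, epsilon):
    """Add Laplace(1/epsilon) noise per bin; a histogram workload has L1 sensitivity 1."""
    return counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)

for eps in (0.01, 0.1, 1.0):
    trials = [np.abs(laplace_mechanism(hist, eps) - hist).mean() for _ in range(50)]
    print(f"epsilon={eps:>5}: mean per-bin L1 error ~ {np.mean(trials):7.2f}")
```

Published algorithms aim to improve on this simple baseline for particular workloads; making such comparisons systematic is what the benchmark is for.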

Multifirm Models of Cybersecurity Investment Competition vs. Cooperation and Network Vulnerability

Anna Nagurney (University of Massachusetts Amherst)
Shivani Shukla (University of Massachusetts Amherst)

In our research, we develop and compare three distinct models of cybersecurity investment in competitive and cooperative settings to safeguard against potential and ongoing threats. First, a Nash equilibrium model of noncooperation over the cybersecurity levels of the firms involved is formulated and analyzed. This serves as the disagreement point over which bargaining takes place in the second model, which yields a cooperative solution; Nash bargaining theory is utilized to argue for information sharing and to quantify the monetary and security benefits in terms of network vulnerability. The third model also focuses on cooperation, but from a system-optimization perspective. Qualitative properties of the models, in the form of existence and uniqueness results, are provided, along with numerical solutions and sensitivity analyses for three cases focusing on retailers, financial service firms, and firms in the energy sector, which have been subject to some of the most damaging cyberattacks. We compare the solutions of the models across the cases and recommend a course of action that has both financial and policy-related implications.
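
As a purely illustrative formulation of the kind of noncooperative game that underlies such models (the notation and functional forms below are generic placeholders, not the authors' model), each of m firms chooses a security level s_i, trading investment cost against expected losses that depend on network vulnerability:

```latex
% Generic illustrative formulation (not the authors' exact model).
\begin{align*}
  v(s) &= 1 - \frac{1}{m}\sum_{j=1}^{m} s_j
    && \text{(network vulnerability: average insecurity of the $m$ firms)}\\
  U_i(s_i, s_{-i}) &= R_i - (1 - s_i)\, v(s)\, D_i - c_i(s_i)
    && \text{(revenue, expected breach loss, investment cost)}\\
  s^\ast \text{ is a Nash equilibrium} &\iff
    U_i(s_i^\ast, s_{-i}^\ast) \ge U_i(s_i, s_{-i}^\ast)
    \quad \forall\, s_i \in [0,1),\ \forall\, i
\end{align*}
```

The cooperative (bargaining) solution then maximizes the product of the firms' utility gains over this noncooperative outcome, which serves as the disagreement point.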

Cipollino: A Measurement Driven AS-aware Tor Client

Rishab Nithyanand (Stony Brook University)
Rachee Singh (Stony Brook University)
Shinyoung Cho (Stony Brook University)
Phillipa Gill (Stony Brook University)
Rob Johnson (Stony Brook University)

Traffic correlation attacks that de-anonymize Tor users are possible when an adversary is in a position to observe traffic entering and exiting the Tor network. In this talk, we focus on addressing traffic correlation attacks mounted by network-level adversaries, i.e., Autonomous Systems (ASes). We present Cipollino, a high-performance AS-aware Tor client, as a secure and practical solution.
Cipollino leverages public sources of empirical routing data and the latest developments in network-measurement research to defend against currently known network-level traffic correlation attacks, including attacks by active adversaries that may exploit BGP insecurities to de-anonymize Tor clients. Cipollino reduces the fraction of vulnerable webpage loads from vanilla Tor's 57% to just 1.3%. In terms of performance, under aggressive configurations the Cipollino client matches the page-load times of the vanilla Tor client, and it significantly improves on previous AS-aware Tor clients even in its most conservative configuration. Additionally, Cipollino retains security against relay-level adversaries. Finally, like previous AS-aware Tor clients, Cipollino performs load balancing to avoid overloading relays.
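
A minimal sketch of the core AS-awareness check follows (the AS-path data here is a hard-coded toy table; Cipollino itself derives paths from BGP data and measurement-driven prediction and integrates the check into Tor's path selection): a candidate circuit is rejected whenever some AS could observe both the client-guard and exit-destination sides.

```python
# Toy AS-path data keyed by (src, dst); a real client would derive these sets
# from BGP feeds and measurement-driven path prediction.
TOY_AS_PATHS = {
    ("client", "guard1"): {"AS100", "AS200"},
    ("guard1", "client"): {"AS200", "AS100"},
    ("client", "guard2"): {"AS100", "AS300"},
    ("guard2", "client"): {"AS300", "AS100"},
    ("exit1", "dest"):    {"AS400", "AS200"},   # AS200 also sits on guard1's entry side
    ("dest", "exit1"):    {"AS200", "AS400"},
    ("exit2", "dest"):    {"AS500"},
    ("dest", "exit2"):    {"AS500"},
}

def as_path(src, dst):
    return TOY_AS_PATHS.get((src, dst), set())

def circuit_is_safe(client, guard, exit_relay, dest):
    """Reject circuits where a single AS can observe both ends and correlate traffic."""
    entry_side = as_path(client, guard) | as_path(guard, client)      # both directions matter
    exit_side = as_path(exit_relay, dest) | as_path(dest, exit_relay)
    return not (entry_side & exit_side)

candidates = [("guard1", "middle1", "exit1"), ("guard2", "middle2", "exit2")]
safe = [c for c in candidates if circuit_is_safe("client", c[0], c[2], "dest")]
print(safe)   # only the guard2/exit2 circuit survives the check
```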

End-to-End Verification of Information-Flow Security for an Operating System Kernel

David Costanzo (Yale University)
Zhong Shao (Yale University)
Ronghui Gu (Yale University)

In this talk, we present a brief overview of a machine-checked, end-to-end security proof for user processes executing over the mCertiKOS operating system kernel. mCertiKOS is a simple but nontrivial kernel with a formal functional specification. It is implemented entirely within the Coq proof assistant, and the full C/assembly implementation of the kernel, which is extracted from Coq, has been verified to meet the high-level functional specification.
Our security proof is a form of noninterference, guaranteeing that different user processes executing over mCertiKOS cannot influence each other in any way (assuming explicit IPC system calls are disabled). We present a novel methodology used for the proof, which consists of: (1) specifying a high-level information-flow policy via a general observation function that defines each process's isolated view of program state, (2) proving noninterference with respect to this observation function over the high-level functional specification of the kernel, and (3) using this observation function to define whole-execution behaviors, allowing a whole-execution noninterference guarantee to propagate across simulations. Hence we specify the security policy using a high level of abstraction, but our final guarantee applies to the actual low-level C and assembly implementation of the mCertiKOS kernel.
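
Schematically, the observation function induces a per-process indistinguishability relation, and noninterference says that kernel steps preserve it (a simplified statement of the idea; the actual theorem is stated over mCertiKOS's layered specifications in Coq):

```latex
% Schematic statement, simplified from the layered Coq development.
% O_p(S) is process p's view of machine state S; "step" is one kernel-level transition.
\begin{align*}
  S_1 \sim_p S_2 \;&\triangleq\; O_p(S_1) = O_p(S_2)
    && \text{(states indistinguishable to process $p$)}\\
  \text{Noninterference:}\quad S_1 \sim_p S_2 \;&\Longrightarrow\;
    \mathrm{step}(S_1) \sim_p \mathrm{step}(S_2)
    && \text{($p$'s view is unaffected by other processes)}
\end{align*}
```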

Correct Audit Logging

Sepehr Amir-Mohammadian (University of Vermont)
Stephen Chong (Harvard University)
Christian Skalka (University of Vermont)

Retrospective security has become increasingly important to the theory and practice of cyber security, with auditing a crucial component of it. However, in systems where auditing is used, programs are typically instrumented to generate audit logs using manual, ad hoc strategies. This is a potential source of error even if log analysis techniques are formal, since the relation of the log itself to program execution is unclear. This presentation focuses on provably correct program rewriting algorithms for instrumenting programs with formal logging specifications. Correctness guarantees that the execution of an instrumented program produces sound and complete audit logs, properties defined by an information containment relation between logs and the program's logging semantics. The program rewriting approach to instrumentation guarantees correct log generation even for untrusted programs. As a case study, a tool for OpenMRS, a popular medical records management system, is developed, and instrumentation of break-the-glass policies is considered.
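
The rewriting idea can be sketched with a toy decorator-based instrumentation in Python (illustrative only: the paper's algorithms operate on program code against formal logging specifications and carry soundness and completeness proofs, and the OpenMRS tool is not built this way):

```python
import functools, json, time

AUDIT_LOG = []   # in a real system: an append-only, tamper-evident store

def logged(event_name):
    """Rewrite (wrap) a function so that every call emits an audit record."""
    def rewrite(fn):
        @functools.wraps(fn)
        def instrumented(*args, **kwargs):
            AUDIT_LOG.append({
                "event": event_name,
                "args": repr((args, kwargs)),
                "time": time.time(),
            })
            return fn(*args, **kwargs)
        return instrumented
    return rewrite

# Toy "logging specification": record every access to a patient record,
# e.g., for a break-the-glass policy (hypothetical function, illustrative only).
@logged("read_patient_record")
def read_patient_record(user, patient_id):
    return {"patient": patient_id, "read_by": user}

read_patient_record("dr_smith", 42)
print(json.dumps(AUDIT_LOG, indent=2))
```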

The NSF Workshop on Security and Formal Methods, Organized by Steve Chong and Joshua Guttman

Joshua Guttman (Worcester Polytechnic Institute)

Formal methods have a high potential payoff for security, because they allow us to show that a mechanism resists a whole class of adversary strategies. Moreover, formal methods have enjoyed strong growth in recent years. At this workshop last November, about forty computer scientists from universities, industry research labs, and government gathered to discuss the opportunities and obstacles for formal methods in security. In this talk, we will describe the main conclusions of the workshop.

Chipping Away At The Security Education Problem

Ming Chow (Tufts University)
Roy Wattanasin (Brandeis and MIT)

We are still facing the same security vulnerabilities from over a decade ago. The problems are not going away anytime soon, and one reason is that Computer Science curricula are still churning out students who are not even exposed to security. This talk will address the lack of emphasis on information security in Computer Science curricula, why CS curricula have an obligation to address it, how to gradually fix the problem by integrating security into many Computer Science undergraduate and graduate classes, and success stories from students. This talk will also discuss what Tufts is currently working on to further address the security education problem by creating a joint cyber security and policy program that spans multiple departments.

Averting Cyberwar

Deborah Hurley (Harvard University and Brown University)

There is a legitimate counterpoint to the bellicose rhetoric surrounding cyberwar and to the actual activity in pursuit of it. Cyberwarfare centers on building the capability for, threatening, and deploying attacks on information and communications infrastructure in order to disrupt economic and social activity, engender dread, and break the will of the target population. Substantial resources have been invested in offensive and defensive cyberwarfare by many nations, including China and the United States. The United States has significant economic and strategic interests at stake, since it is heavily dependent on information technologies, much American critical infrastructure is in private hands, and technical expertise is widely distributed around the globe.
It is vital to embrace the audacious vision of cyberspace as a zone dedicated to peace and human advancement. The fruits of the information society offer the world some of the best promise and opportunity for the future. Information and communications technologies are not ends in themselves but, rather, should be used in the service of humanity.
For the past few years, Hurley has been actively working on prevention of cyberwar, through the innovative initiative of a non-armament accord for cyberspace. There is a tradition of "non-armament" treaties, by which nations have agreed that certain vital resources are the common heritage of humanity and will be used for peaceful and scientific purposes. The Antarctic Treaty and the Outer Space Treaty, among others, stand out as precedent and models for realistic alternatives to the present trajectory. Other international treaties, such as the Chemical Weapons Convention (CWC), which widely affects the private sector, also provide guidance.
An international accord would disintermediate burgeoning cyberwar policy making and industry; recognize cyberspace as the common heritage of humanity; dedicate it for peace and science in the aim of human advancement; ensure ongoing peaceful development and use of cyberspace; avoid disputes and discord; encourage international cooperation; and regulate international relations with respect to the ubiquitous information environment.

An Investigation into Users' Considerations towards Using Password Managers

Michael Fagan (University of Connecticut)
Yusuf Albayram (University of Connecticut)
Mohammad Maifi Hasan Khan (University of Connecticut)
Ross Buck (University of Connecticut)

Password managers, though commonly recommended by security experts, are still not used by many users. Understanding why some choose to use password managers while others do not is important for understanding why users do what they do more generally and, by extension, for designing motivational tools such as video tutorials that encourage wider use of password managers. To investigate the differences between those who do and do not use a password manager, we distributed an online survey, asking about opinions of and experiences with password managers, to a total of 137 users and 111 non-users of the tool. Furthermore, since emotion has been identified by work in psychology and communications as influential in other risk-laden decision-making (e.g., condom use), we asked participants who use a password manager to rate how strongly they feel each of 45 different emotions, and participants who do not use one to rate how they imagine they would feel those emotions if they did use the tool. Our results show that "users" of password managers cited convenience and usefulness, rather than security gains, as the main reasons for using the tool, underscoring the fact that even a large portion of the tool's users do not consider security the primary benefit when making the decision. On the other hand, "non-users" cited security concerns as the main reason for not using a password manager, highlighting the prevalence of suspicion arising from a lack of understanding of the technology itself. Finally, analysis of the differences in emotions between "users" and "non-users" reveals that participants who never use a password manager are more likely to feel suspicious compared to "users", which could be due to misunderstandings about the tool.

Shreds: Fine-grained Execution Units with Private Memory

Yaohui Chen (Stony Brook University)
Sebassujeen Reymondjohnson (Stony Brook University)
Zhichuang Sun (Stony Brook University)
Long Lu (Stony Brook University)

Once attackers have injected code into a victim program's address space, or found a memory disclosure vulnerability, all sensitive data and code inside that address space are subject to theft or manipulation. Unfortunately, this broad class of attack is hard to prevent, even if software developers wish to cooperate, mostly because conventional memory protection works only at the process level and previously proposed in-process memory isolation methods are not practical for wide adoption. We propose shreds, a set of OS-backed programming primitives that addresses developers' currently unmet needs for fine-grained, convenient, and efficient protection of sensitive memory content against in-process adversaries. A shred can be viewed as a flexibly defined segment of a thread execution (hence the name). Each shred is associated with a protected memory pool, which is accessible only to code running in the shred. Unlike previous work, shreds offer in-process private memory without relying on separate page tables, nested paging, or even modified hardware. In addition, shreds provide the essential data flow and control flow guarantees for running sensitive code. We have built the compiler toolchain and the OS module that together enable shreds on Linux. We demonstrated the usage of shreds and evaluated their performance using five non-trivial open-source programs, including OpenSSH and Lighttpd. The results show that shreds are fairly easy to use and incur low runtime overhead (4.67%).

Programming Support for an Integrated Multi-Party Computation and MapReduce Infrastructure

Nikolaj Volgushev (Boston University)
Andrei Lapets (Boston University)
Azer Bestavros (Boston University)

We describe and present a prototype of a distributed computational infrastructure and associated high-level programming language that allow multiple parties to leverage their own computational resources capable of supporting MapReduce operations in combination with multi-party computation (MPC). Our architecture allows a programmer to author and compile a protocol using a uniform collection of standard constructs, even when that protocol involves computations that take place locally within each participant's MapReduce cluster as well as across all the participants using an MPC protocol. The high-level programming language provided to the user is accompanied by static analysis algorithms that allow the programmer to reason about the efficiency of the protocol before compiling and running it. We present two example applications demonstrating how such an infrastructure can be employed.
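
A minimal sketch of the programming pattern the infrastructure supports, assuming a toy additive secret-sharing protocol for the cross-party step (the real system compiles a high-level language to MapReduce jobs plus an MPC back end; names and protocol details here are illustrative):

```python
import random
from functools import reduce

PRIME = 2**61 - 1   # all shares live in a prime field

def local_mapreduce(records):
    """Local phase: each party aggregates its own data (here, a simple sum)."""
    return reduce(lambda acc, r: acc + r, map(int, records), 0)

def share(secret, n_parties):
    """Split a value into n additive shares that sum to the secret mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def mpc_sum(local_totals):
    """Cross-party phase: each party shares its total; only the grand total is revealed."""
    n = len(local_totals)
    all_shares = [share(t, n) for t in local_totals]               # party i sends share j to party j
    partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]  # each party adds what it received
    return sum(partial_sums) % PRIME                               # opening reveals only the total

party_data = [[3, 5, 9], [10, 20], [7]]                       # three parties' private records
print(mpc_sum([local_mapreduce(d) for d in party_data]))      # -> 54
```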

Quantifying Security through Reasoning and Experimentation

Michael Atighetchi (BBN Technologies)
Joe Loyall (BBN Technologies)
Borislava Simidchieva (BBN Technologies)
Nate Soule (BBN Technologies)
Fusun Yaman (BBN Technologies)

Systems that rely on static defenses are vulnerable to determined and resourceful adversaries. Moving Target Defenses (MTDs) enhance security by making entry points transient, e.g., through IP address hopping, but they also introduce additional complexity and are difficult for system administrators to select, compose, and configure correctly. Empirically assessing multiple MTDs in enterprise systems through state-of-the-art techniques such as penetration testing or red teaming is expensive, yet adding MTDs ad hoc may introduce unacceptable overhead or even decrease security. We focus on quantifying MTDs' security and cost with a prototype that evaluates MTDs in a system context including mission objectives and security threats, all modeled using an extensible Web ontology paradigm. We find possible attack vectors given the input models and compute a number of low-level metrics, which we compose into cost, security, and mission fitness indices. Manually building models becomes untenable with increasing system size, so we address scalability through automation: we stitch together partial models (from scanning tools such as NMAP, tshark, and osquery), using redundant entities to identify related concepts. We validate the conglomerate model's fidelity in an emulated environment, automatically characterizing the latency and bandwidth impacts of MTD wrappers and assessing the duration and probability of success of attack steps. We store the empirical data with the analytical models for use in quantification. Iteratively quantifying deployments analytically and characterizing MTDs and attacks in silico lets users determine which MTDs provide superior security-cost tradeoffs.
Our prototype supports the quantification of deployments with hundreds of hosts. Automated system model construction, characterization of MTD costs, and attack step assessment have delivered promising results. We are extending the prototype to support quantifying operational networks and include static defenses. Ultimately, our vision is to not only quantify predefined deployments, but also guide system administrators toward the optimal selection and configuration of MTDs.
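
As a toy illustration of the final composition step only (the metric names, weights, and formulas below are hypothetical, not those of the prototype):

```python
# Hypothetical low-level metrics for one candidate MTD deployment (all in [0, 1]).
metrics = {
    "attack_surface_reduction": 0.7,    # from attack-vector analysis
    "attack_success_probability": 0.2,  # from emulated attack steps
    "added_latency": 0.15,              # from empirical MTD characterization
    "added_bandwidth": 0.05,
    "mission_task_completion": 0.9,
}

# Hypothetical analyst-chosen weights for composing the indices.
security = 0.6 * metrics["attack_surface_reduction"] + 0.4 * (1 - metrics["attack_success_probability"])
cost     = 0.5 * metrics["added_latency"] + 0.5 * metrics["added_bandwidth"]
fitness  = metrics["mission_task_completion"] * security * (1 - cost)

print(f"security={security:.2f} cost={cost:.2f} mission fitness={fitness:.2f}")
```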

Sieve: Cryptographically Enforced Access Control for User Data in Untrusted Clouds

Frank Wang (MIT CSAIL)
James Mickens (Harvard)
Nickolai Zeldovich (MIT CSAIL)
Vinod Vaikuntanathan (MIT CSAIL)

Modern web services rob users of low-level control over cloud storage—a user’s single logical data set is scattered across multiple storage silos whose access controls are set by web services, not users. The consequence is that users lack the ultimate authority to determine how their data is shared with other web services. We introduce Sieve, a new platform which selectively (and securely) exposes user data to web services. Sieve has a user-centric storage model: each user uploads encrypted data to a single cloud store, and by default, only the user knows the decryption keys. Given this storage model, Sieve defines an infrastructure to support rich, legacy web applications. Using attribute-based encryption, Sieve allows users to define intuitively understandable access policies that are cryptographically enforceable. Using key homomorphism, Sieve can re-encrypt user data on storage providers in situ, revoking decryption keys from web services without revealing new keys to the storage provider. Using secret sharing and two-factor authentication, Sieve protects cryptographic secrets against the loss of user devices like smartphones and laptops. The result is that users can enjoy rich, legacy web applications, while benefiting from cryptographically strong controls over which data a web service can access.
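
The key-homomorphism idea behind in-situ revocation can be illustrated with a toy additive scheme (this is not Sieve's construction, which builds on key-homomorphic encryption with real security guarantees): the storage provider rotates a ciphertext from key k to key k' given only the difference, never seeing the plaintext or either key on its own.

```python
import secrets

PRIME = 2**127 - 1        # toy prime field; real schemes use proper key-homomorphic primitives

def encrypt(message: int, key: int) -> int:
    return (message + key) % PRIME          # toy one-time-pad-style encryption

def decrypt(ciphertext: int, key: int) -> int:
    return (ciphertext - key) % PRIME

def reencrypt(ciphertext: int, delta: int) -> int:
    """Run by the storage provider: rotates the key without learning the plaintext."""
    return (ciphertext + delta) % PRIME

old_key, new_key = secrets.randbelow(PRIME), secrets.randbelow(PRIME)
ct = encrypt(12345, old_key)
ct2 = reencrypt(ct, (new_key - old_key) % PRIME)   # the user sends only the key delta
assert decrypt(ct2, new_key) == 12345              # a revoked service holding old_key can no longer decrypt
print("re-encryption succeeded")
```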