Centre of Excellence on Cryptology
Indian Statistical Institute, Kolkata
Birla Institute of Technology, Mesra, Kolkata Campus
M/s. HP India Sales Pvt. Ltd.
Advanced System Lab, DRDO, Hyderabad
Assistant Research Professor, Carnegie Mellon University
Understanding and Protecting Privacy: Formal Semantics and Principled Audit Mechanisms
Privacy is a significant concern in modern society. Individuals
share personal information with many different organizations -
healthcare, financial and educational institutions, web
services providers and online social networks - often in
electronic form. Privacy violations occur when such personal
information is inappropriately collected, shared or used. This
talk reports on progress in precisely defining classes of
privacy policies and algorithmic methods for their enforcement.
First, we develop a semantic model and logic of privacy that makes rigorous the position that
privacy is a right to appropriate flows of information – a
position taken by the philosophical theory of contextual
integrity. This logic is used to develop the first complete
logical formalization of two US privacy laws - the Health
Insurance Portability and Accountability Act (HIPAA) Privacy
Rule and the Gramm-Leach-Bliley Act (GLBA).
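To give a flavor of such a logic (the clause below is an invented illustration in the style of contextual-integrity formalisms, not an actual clause of the HIPAA formalization), a disclosure norm can be written as a first-order temporal formula over traces of communication actions:

```latex
\forall p_1, p_2, q, m.\;
  \mathit{send}(p_1, p_2, m) \wedge \mathit{contains}(m, q, \mathit{health\mbox{-}info})
  \;\rightarrow\;
  \mathit{purpose}(\mathit{treatment}) \vee \mathit{previously}\; \mathit{consents}(q)
```

Read: whenever $p_1$ sends $p_2$ a message $m$ containing health information about patient $q$, the disclosure must be for treatment, or $q$ must previously have consented.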
Second, observing that preventive
access control mechanisms are not sufficient to enforce such
privacy policies, we develop two complementary audit mechanisms
for policy enforcement. The first algorithm, which we name
REDUCE, operates iteratively over audit logs that are incomplete
and evolve over time. In each iteration, it provably checks as
much of the policy as possible over the current log and outputs
a residual policy that can only be checked when the log is
extended with additional information. We implement REDUCE and
use it to check simulated audit logs for compliance with the
entire HIPAA Privacy Rule. Since privacy policies constrain
information flow and use based on subjective conditions (such as
beliefs) that may not be mechanically checkable, REDUCE will
output such conditions in the final residual policy, leaving them
to be checked by other means (e.g., by human auditors). The
second audit algorithm, which we name RMA (for Regret Minimizing
Audits), learns from experience to provide operational guidance to
human auditors about the coverage and frequency of auditing such
subjective conditions. The algorithm takes pragmatic
considerations into account, such as the periodic nature of
audits, the audit budget and the loss that an organization
incurs from privacy violations. We prove that the audit mechanism
converges to the best fixed audit strategy over time.
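To make the iterative reduction performed by REDUCE concrete, here is a toy sketch (the function names, policy representation, and log format are invented for illustration; the actual algorithm operates over first-order temporal formulas, not Python predicates):

```python
# Illustrative sketch in the spirit of REDUCE: check each atomic condition
# of a policy against the current (incomplete) audit log, and keep the
# undecidable conditions as a residual policy for later iterations.

def reduce_policy(policy, log):
    """Returns (violations, residual): conditions the log refutes, and
    conditions the log cannot yet decide (e.g. future or subjective facts)."""
    violations, residual = [], []
    for cond in policy:
        verdict = cond(log)          # True / False / None (undecided)
        if verdict is False:
            violations.append(cond)
        elif verdict is None:
            residual.append(cond)    # re-checked when the log is extended
    return violations, residual

# Example condition: a disclosure must eventually be followed by a
# notification appearing in the log.
def notified(log):
    if ("disclose", "alice") not in log:
        return True                  # vacuously satisfied so far
    if ("notify", "alice") in log:
        return True
    return None                      # log incomplete: keep in residual

violations, residual = reduce_policy([notified], [("disclose", "alice")])
# the condition stays in the residual until a notification is logged
```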
I will conclude with a discussion
of remaining challenges in this area, in particular, semantics
and enforcement of privacy policies that place requirements on
the purposes for which a governed entity may use personal
information.
Anupam Datta is an
Assistant Research Professor at Carnegie Mellon University. Dr.
Datta’s research focuses on foundations of security and privacy.
He has made significant contributions towards advancing the
scientific understanding of security protocols, privacy in
organizational processes, and trustworthy software systems. Dr. Datta has
co-authored a book and over 30 publications in conferences and
journals on these topics. Dr. Datta serves on the Steering
Committee of the IEEE Computer Security Foundations Symposium.
He obtained MS and PhD degrees from Stanford University and a
BTech from IIT Kharagpur, all in Computer Science.
Associate Professor of Computer Science,
University of Virginia
Secure Computation in the Real(ish) World
Alice and Bob meet in a campus bar in 2016. Being typical
students, they both have their genomes stored on their mobile
devices and, before expending any unnecessary effort in
courtship rituals, they want to perform a genetic analysis to
ensure that their potential offspring would have strong immune
systems and not be at risk for any recessive diseases. But Alice
doesn't want Bob to learn about her risk for Alzheimer's
disease, and Bob is worried a future employer might misuse his
propensity to alcoholism. Two-party secure computation provides
a way to solve this problem. It allows two parties to compute a
function that depends on inputs from both parties, but reveals
nothing except the output of the function.
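As a concrete (invented) example of such a two-party functionality: the compatibility test itself is just an ordinary function of the two inputs; the point of secure computation is to evaluate it so that each party learns only the return value.

```python
# Hypothetical functionality for the scenario above. The function is
# ordinary code; a secure computation protocol would evaluate it so that
# Alice learns nothing about bob_variants and Bob nothing about
# alice_variants beyond this single output bit.

def offspring_at_risk(alice_variants, bob_variants):
    """True if both parties carry a recessive risk variant for the same disease."""
    return bool(set(alice_variants) & set(bob_variants))

# Plaintext evaluation (no privacy), shown only to illustrate the I/O behavior:
print(offspring_at_risk({"CFTR-dF508"}, {"CFTR-dF508", "HBB-E6V"}))  # True
print(offspring_at_risk({"CFTR-dF508"}, {"HBB-E6V"}))                # False
```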
A general solution to this
problem has been known since Yao's pioneering work on garbled
circuits in the 1980s, but only recently has it become
conceivable to use this approach in real systems. Our group has
developed a framework for building efficient and scalable secure
computations that achieves orders of magnitude performance
improvements over the best previous systems. In this talk, I
will describe the techniques we use to design scalable and efficient secure
computation applications, and present our designs and results
for some example applications including genomic analysis,
private set intersection, and biometric matching.
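To give a feel for Yao's garbled-circuit approach mentioned above, here is a toy garbled AND gate (a sketch only: the label sizes, the hash-based encryption, and the zero-padding redundancy check are simplifications, and a real protocol additionally needs oblivious transfer so the evaluator obtains the label for his input bit without revealing it, plus standard optimizations such as point-and-permute and free-XOR):

```python
import hashlib, os, random

LABEL = 16  # bytes per wire label

def H(a, b):
    # hash two input labels into a 32-byte one-time pad
    return hashlib.sha256(a + b).digest()

def xor(x, y):
    return bytes(p ^ q for p, q in zip(x, y))

# Alice (garbler): a fresh random label for each value of each wire
labels = {w: {0: os.urandom(LABEL), 1: os.urandom(LABEL)}
          for w in ("a", "b", "out")}

# Each row encrypts (output label || 16 zero bytes) under the two input
# labels; the zero padding lets the evaluator recognize the one row that
# decrypts correctly.
table = [xor(H(labels["a"][x], labels["b"][y]),
             labels["out"][x & y] + bytes(LABEL))
         for x in (0, 1) for y in (0, 1)]
random.shuffle(table)  # hide which row corresponds to which input pair

def evaluate(la, lb):
    # Bob (evaluator): holds one label per input wire and learns only the
    # output label, not the bits behind the labels
    pad = H(la, lb)
    for row in table:
        cand = xor(row, pad)
        if cand[LABEL:] == bytes(LABEL):  # redundancy check passed
            return cand[:LABEL]
```

Evaluating with the labels for a=1 and b=1 yields `labels["out"][1]`; any other input pair yields `labels["out"][0]`, and in neither case does the evaluator learn which bits the labels encode.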
David Evans is an Associate Professor of Computer Science at the
University of Virginia. His research seeks to create systems
that can be trusted even in the presence of malicious attackers
and that empower individuals to control how their data is used.
He won the Outstanding Faculty Award from the State Council of
Higher Education for Virginia in 2009, an All-University
Teaching Award in 2008, and was Program Co-Chair for the 2009
and 2010 IEEE Symposia on Security and Privacy. He has SB, SM
and PhD degrees in Computer Science from MIT.
Microsoft Research, Bangalore, India.
Secure Composition of Cryptographic Protocols
General positive results for secure computation were obtained
more than two decades ago. These results were for the setting
where each protocol execution is done in isolation. With the
proliferation of the network setting (and especially the
internet), an ambitious effort to generalize these results and
obtain concurrently secure protocols was started. However, it was
soon shown that designing secure protocols in the concurrent
setting is unfortunately impossible in general. In this talk, we
will first describe the so-called chosen protocol attack. This is
an explicit attack which establishes general impossibility of
designing secure protocols in the concurrent setting. The
negative results hold for the so-called plain model, where there
is no trusted party, no honest majority, etc. On the other hand,
several positive results for protocols composition have been
established in various related settings (which are either weaker
or incomparable). A few examples are the setting of resettable
computation (where the parties may not be able to keep state
during the protocol execution and may be run several times with
the same random tape), bounded concurrent secure computation
(where there is an a priori bound on the total number of
concurrent sessions), standalone protocol execution with
man-in-the-middle (i.e., the setting of non-malleable
protocols), the single input setting (where the honest party
uses the same input in all polynomially unbounded concurrent
protocol executions), etc. We will survey known results as well
as various open problems in each of the above settings. We will
also give an overview of an emerging technique which has been used
to construct secure protocols in several of these settings. We
will focus on the plain model throughout the talk.
Vipul Goyal is a researcher in the Cryptography, Security and
Applied Mathematics group at Microsoft Research, India. He is
interested in both theoretical and applied cryptography (and in
theoretical computer science in general). He has worked on
topics such as cryptographic protocols, man-in-the-middle
attacks, zero-knowledge proofs, pairing based cryptography, etc.
He has published various technical papers at venues such as
Crypto, STOC, and CCS. He completed his PhD at UCLA, where he
won honors such as the Microsoft Research graduate fellowship and
the Google outstanding graduate student award.
Assistant Professor in the Department of Computer Science at North Carolina State University, USA.
Defending Users Against Smartphone Apps: Techniques and Future Directions
Smartphone security research has become very popular in response
to the rapid, world-wide adoption of new platforms such as
Android and iOS. Smartphones are characterized by their ability
to run third-party applications, and Android and iOS
take this concept to the extreme, offering hundreds of thousands
of "apps" through application markets. Thus, smartphone security
research has focused on protecting users from apps. In this
talk, I will discuss the current state of smartphone security research,
including efforts in designing new OS protection mechanisms, as
well as performing security analysis of real apps. I will offer insight into what works, what has
clear limitations, and promising directions for future research.
William Enck is an Assistant Professor in the Department of
Computer Science at NC State University. William earned his
Ph.D. and M.S. in Computer Science and Engineering from the
Pennsylvania State University in 2011 and 2006, respectively,
and his B.S. in Computer Engineering from Penn State in 2004.
His research focuses primarily on security in smart phone and
mobile device platforms and the challenges that arise in this
new computing environment. However, he is also interested in the broader area of systems security.
His previous research efforts have included OS security,
hardware security, telecommunications security, network protocol
security, voting systems security, and large-scale network configuration.