Bio
Hello! I am an information security, applied cryptography, and technology policy researcher.
I am a doctoral candidate in Computer Science at the Center for Information Technology Policy at Princeton University, where I am supported by a Wallace Memorial Fellowship in Engineering.
I develop novel privacy-enhancing techniques that improve the accountability of digital systems integral to our social, political, and economic lives, including elections, public auctions, private communication, foreign intelligence surveillance, content moderation, and censorship.
My research attempts to combine robust accountability with strong cryptographic privacy.
Education
Princeton University Ph.D. in Computer Science (expected March 2025)
M.A. in Computer Science
Stanford University M.A. in Public Policy
B.S. in Mathematics, B.S. in Computer Science
Work
Academia MIT Sloan · Stanford Health Policy
Government Office of MP Dr. Shashi Tharoor
Industry Microsoft Research · Keybase · Bloomberg
Current Research
-
Privacy-Respecting Web Analytics: Policy and Technology
Working Paper (2024)
Anunay Kulshrestha, Jonathan Mayer
-
Instant Cast-as-Intended Ballot Verification in ElectionGuard
Working Paper (2024)
Anunay Kulshrestha, Josh Benaloh, Jonathan Mayer
-
Surveillance Transparency after Quantum Computing: Quantum-Resistant Multiparty Private Set Operations
In Submission (2024)
Anunay Kulshrestha, Jonathan Mayer
Recent Publications
-
Account Verification on Social Media: User Perceptions and Paid Enrollment
32nd USENIX Security Symposium (2023)
Madelyne Xiao, Mona Wang, Anunay Kulshrestha, Jonathan Mayer
[Paper] [Code]
We investigate how users perceive social media account verification, how those perceptions compare to platform practices, and what happens when a gap emerges. We use recent changes in Twitter’s verification process as a natural experiment, where the meaning and types of verification indicators rapidly and significantly shift. The project consists of two components: a user survey and a measurement of verified Twitter accounts. In the survey study, we ask a demographically representative sample of U.S. respondents (n = 299) about social media account verification requirements both in general and for particular platforms. We also ask about experiences with online information sources and digital literacy. More than half of respondents misunderstand Twitter’s criteria for blue check account verification, and over 80% of respondents misunderstand Twitter’s new gold and gray check verification indicators. Our analysis of survey responses suggests that people who are older or have lower digital literacy may be modestly more likely to misunderstand Twitter verification. In the measurement study, we randomly sample 15 million English language tweets from October 2022. We obtain account verification status for the associated accounts in November 2022, just before Twitter’s verification changes, and we collect verification status again in January 2023. The resulting longitudinal dataset of 2.85 million accounts enables us to characterize the accounts that gained and lost verification following Twitter’s changes. We find that accounts posting conservative political content, exhibiting positive views about Elon Musk, and promoting cryptocurrencies disproportionately obtain blue check verification after Twitter’s changes. We close by offering recommendations for improving account verification indicators and processes.
-
Public Verification for Private Hash Matching
IEEE Symposium on Security and Privacy (2023)
Sarah Scheffler, Anunay Kulshrestha, Jonathan Mayer
[Paper] [Code]
End-to-end encryption (E2EE) prevents online services from accessing user content. This important security property is also an obstacle for content moderation methods that involve content analysis. The tension between E2EE and efforts to combat child sexual abuse material (CSAM) has become a global flashpoint in encryption policy, because the predominant method of detecting harmful content—server-side perceptual hash matching on plaintext images—is unavailable. Recent applied cryptography advances enable private hash matching (PHM), where a service can match user content against a set of known CSAM images without revealing the hash set to users or nonmatching content to the service. These designs, especially a 2021 proposal for identifying CSAM in Apple’s iCloud Photos service, have attracted widespread criticism for creating risks to security, privacy, and free expression. In this work, we aim to advance scholarship and dialogue about PHM by contributing new cryptographic methods for system verification by the general public. We begin with motivation, describing the rationale for PHM to detect CSAM and the serious societal and technical issues with its deployment. Verification could partially address shortcomings of PHM, and we systematize critiques into two areas for auditing: trust in the hash set and trust in the implementation. We explain how, while these two issues cannot be fully resolved by technology alone, there are possible cryptographic trust improvements. The central contributions of this paper are novel cryptographic protocols that enable three types of public verification for PHM systems: (1) certification that external groups approve the hash set, (2) proof that particular lawful content is not in the hash set, and (3) eventual notification to users of false positive matches. The protocols that we describe are practical, efficient, and compatible with existing PHM constructions.
-
Leveraging Strategic Connection Migration-Powered Traffic Splitting for Privacy
22nd Privacy Enhancing Technologies Symposium (2022)
Mona Wang, Anunay Kulshrestha, Liang Wang, Prateek Mittal
[Paper] [Code]
[Runner-up Best Student Paper Award] [Best HotPETs Talk]
Network-level adversaries have developed increasingly sophisticated techniques to surveil and control users’ network traffic. In this paper, we exploit our observation that many encrypted protocol connections are no longer tied to device IP address (e.g., the connection migration feature in QUIC, or IP roaming in WireGuard and Mosh), due to the need for performance in a mobile-first world. We design and implement a novel framework, Connection Migration Powered Splitting (CoMPS), that utilizes these performance features for enhancing user privacy. With CoMPS, we can split traffic mid-session across network paths and heterogeneous network protocols. Such traffic splitting mitigates the ability of a network-level adversary to perform traffic analysis attacks by limiting the amount of traffic they can observe. We use CoMPS to construct a website fingerprinting defense that is resilient against traffic analysis attacks by a powerful adaptive adversary in the open-world setting. We evaluate our system using both simulated splitting data and real-world traffic that is actively split using CoMPS. In our real-world experiments, CoMPS reduces the precision and recall of VarCNN to 29.9% and 36.7% respectively in the open-world setting with 100 monitored classes. CoMPS is not only immediately deployable with any unaltered server that supports connection migration, but also incurs little overhead, decreasing throughput by only 5-20%.
-
Estimating Incidental Collection in Foreign Intelligence Surveillance: Large-Scale Multiparty Private Set Intersection with Union and Sum
31st USENIX Security Symposium (2022)
Anunay Kulshrestha, Jonathan Mayer
[Paper] [Code] [Slides] [Video] [Demo]
[FSF Best Student Paper Award]
Section 702 of the Foreign Intelligence Surveillance Act authorizes U.S. intelligence agencies to intercept communications content without obtaining a warrant. While Section 702 requires targeting foreigners abroad for intelligence purposes, agencies “incidentally” collect communications to or from Americans and can search that data for purposes beyond intelligence gathering. For over a decade, members of Congress and civil society organizations have called on the U.S. Intelligence Community (IC) to estimate the scale of incidental collection. Senior intelligence officials have acknowledged the value of quantitative transparency for incidental collection, but the IC has not identified a satisfactory estimation method that respects individual privacy, protects intelligence sources and methods, and imposes minimal burden on IC resources. In this work, we propose a novel approach to estimating incidental collection using secure multiparty computation (MPC). The IC possesses records about the parties to intercepted communications, and communications services possess country-level location for users. By combining these datasets with MPC, it is possible to generate an automated aggregate estimate of incidental collection that maintains confidentiality for intercepted communications and user locations. We formalize our proposal as a new variant of private set intersection, which we term multiparty private set intersection with union and sum (MPSIU-Sum). We then design and evaluate an efficient MPSIU-Sum protocol, based on elliptic curve cryptography and partially homomorphic encryption. Our protocol performs well at the large scale necessary for estimating incidental collection in Section 702 surveillance.
(A toy sketch of the private set intersection idea underlying this work appears after this publication list.)
-
Identifying Harmful Media in End-to-End Encrypted Communication: Efficient Private Membership Computation
30th USENIX Security Symposium (2021)
Anunay Kulshrestha, Jonathan Mayer
[Paper] [Slides] [Video] [Demo]
End-to-end encryption (E2EE) poses a challenge for automated detection of harmful media, such as child sexual abuse material and extremist content. The predominant approach at present, perceptual hash matching, is not viable because in E2EE a communications service cannot access user content. In this work, we explore the technical feasibility of privacy-preserving perceptual hash matching for E2EE services. We begin by formalizing the problem space and identifying fundamental limitations for protocols. Next, we evaluate the predictive performance of common perceptual hash functions to understand privacy risks to E2EE users and contextualize errors associated with the protocols we design. Our primary contribution is a set of constructions for privacy-preserving perceptual hash matching. We design and evaluate client-side constructions for scenarios where disclosing the set of harmful hashes is acceptable. We then design and evaluate interactive protocols that optionally protect the hash set and do not disclose matches to users. The constructions that we propose are practical for deployment on mobile devices and introduce a limited additional risk of false negatives.
(A toy sketch of plaintext perceptual hash matching appears after this list.)
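For readers unfamiliar with the plaintext primitive that the two hash-matching papers above build privacy around, here is a minimal Python sketch of perceptual hash matching: an average hash compared under a Hamming-distance threshold. It is an illustration only, not a construction from either paper; the tiny 4x4 "images" and the threshold are hypothetical values chosen so the example runs without dependencies.

```python
# Minimal sketch of plaintext perceptual hash matching: an average hash (aHash)
# compared under a Hamming-distance threshold. Illustration only; the papers
# above study how to perform this kind of matching privately, which this toy
# does not attempt. The tiny 4x4 "images" and the threshold are hypothetical.


def average_hash(pixels):
    """One bit per pixel: 1 if the pixel is brighter than the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


def matches(candidate_hash, known_hashes, threshold=4):
    """True if the candidate is within the Hamming threshold of any known hash."""
    return any(hamming(candidate_hash, h) <= threshold for h in known_hashes)


# Hypothetical 4x4 grayscale images (values 0-255), purely for illustration.
known = [[10, 200, 30, 220], [15, 210, 25, 230], [12, 205, 28, 225], [11, 208, 27, 228]]
near_duplicate = [[12, 198, 33, 218], [14, 212, 24, 229], [13, 203, 30, 226], [10, 207, 29, 230]]
unrelated = [[255, 0, 255, 0], [0, 255, 0, 255], [255, 0, 255, 0], [0, 255, 0, 255]]

known_hashes = {average_hash(known)}
print(matches(average_hash(near_duplicate), known_hashes))  # True: perceptually similar
print(matches(average_hash(unrelated), known_hashes))       # False: perceptually different
```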
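The incidental collection estimation paper above is built on private set intersection. As rough intuition for that primitive (this is not the paper's multiparty MPSIU-Sum protocol, which uses elliptic curve cryptography and partially homomorphic encryption), here is a minimal two-party, DDH-style sketch in pure Python that reveals only the size of the intersection; the email addresses are hypothetical, and the RFC 3526 modular group is a stand-in chosen so the example runs without dependencies.

```python
# Minimal sketch of two-party private set intersection cardinality using
# commutative (DDH-style) exponentiation. Illustration only; this is NOT the
# multiparty MPSIU-Sum protocol from the paper above, and the parameters are
# not a hardened choice.

import hashlib
import secrets

# 2048-bit safe prime from RFC 3526 (group 14); p = 2q + 1 with q prime.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74"
    "020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F1437"
    "4FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF05"
    "98DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB"
    "9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
    "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF695581718"
    "3995497CEA956AE515D2261898FA051015728E5A8AACAA68FFFFFFFFFFFFFFFF",
    16,
)
Q = (P - 1) // 2  # prime order of the quadratic-residue subgroup


def hash_to_group(item: str) -> int:
    """Hash an item into the order-Q subgroup by hashing, then squaring mod P."""
    digest = int.from_bytes(hashlib.sha256(item.encode()).digest(), "big")
    return pow(digest, 2, P)


def blind(items, key):
    """Raise each item's group encoding to a party's secret exponent."""
    return [pow(hash_to_group(x), key, P) for x in items]


# Each party holds a private set (hypothetical identifiers) and a secret exponent.
set_a = ["alice@example.com", "bob@example.com", "carol@example.com"]
set_b = ["bob@example.com", "dave@example.com"]
key_a = secrets.randbelow(Q - 2) + 2
key_b = secrets.randbelow(Q - 2) + 2

# Each party blinds its own set, sends it over, and the other party blinds it again.
double_a = {pow(v, key_b, P) for v in blind(set_a, key_a)}  # H(x)^(a*b) for x in A
double_b = {pow(v, key_a, P) for v in blind(set_b, key_b)}  # H(x)^(b*a) for x in B

# Exponents commute, so doubly blinded values collide exactly on shared items,
# revealing the intersection size but not the non-matching items themselves.
print(len(double_a & double_b))  # 1 (only bob@example.com is shared)
```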
Talks
-
Cryptographic Privacy and Accountability in Policy
December 2022: Brave Research - Estimating Incidental Collection in Foreign Intelligence Surveillance
December 2022: a16z crypto
September 2022: Privacy and Civil Liberties Oversight Board
June 2022: Boston University - Data Privacy and Policy Implications
November 2021: U.S. Senate Committee on Commerce - Privacy-Preserving Health Misinformation Detection
March 2021: Stanford Internet Observatory E2EE Workshop
Writing
- Response to RFC: Cryptography Roadmap
Public Comment · Mar 14, 2024
Anunay Kulshrestha
- How a US law allows American intelligence agencies to spy on internet users across the world
Scroll · Feb 19, 2024
Gurshabad Grover, Anunay Kulshrestha
- The Indian Telecommunication Bill Engenders Security and Privacy Risks
Lawfare · Mar 9, 2023
Anunay Kulshrestha, Gurshabad Grover
- Response to RFC: Indian Telecommunication Bill 2022
Public Comment · Nov 9, 2022
Anunay Kulshrestha, Gurshabad Grover
- Response to RFC: PCLOB Oversight Project Examining Section 702 of FISA
Public Comment · Nov 4, 2022
Anunay Kulshrestha, Jonathan Mayer
- Response to RFI: Advancing Privacy-Enhancing Technologies by the Office of Science and Technology Policy
Public Comment · Jul 8, 2022
Anunay Kulshrestha, Jonathan Mayer, Sarah Scheffler
- Response to RFC: Scoping the Evaluation of CSAM Prevention and Detection Tools in the Context of E2EE Environments
Public Comment · Apr 8, 2022
Sarah Scheffler, Jonathan Mayer, Anunay Kulshrestha
- We built a system like Apple’s to flag child sexual abuse material — and concluded the tech was dangerous
The Washington Post · Aug 19, 2021
Jonathan Mayer, Anunay Kulshrestha
Recent Media Coverage
- Elon Musk Sows Confusion on Twitter After Removing Legacy Check Marks
The Wall Street Journal · Apr 21, 2023
- Did an Ivy League professor crack the key to 702 oversight?
Politico · Mar 27, 2023
- If Congress Wants to Protect Section 702, It Needs to Rein in the FBI
Lawfare · Feb 9, 2023
- Your Phone Is Your Private Space
The Atlantic · Sep 4, 2021
- iPhone privacy: How Apple’s plan to go after child abuse could affect you
CNET · Sep 3, 2021
- The slippery slope of surveillance is real
The Boston Globe · Sep 1, 2021
- Researchers say they built a CSAM detection system like Apple’s and discovered flaws
Engadget · Aug 20, 2021
- Researchers Label Apple’s CSAM Detection System ‘Dangerous’
Forbes · Aug 21, 2021
- Apple’s Not Digging Itself Out of This One
Gizmodo · Aug 19, 2021
- Edward Snowden Slams Apple’s CSAM Scanning as a ‘Disaster-in-the-Making’
PC Magazine · Aug 26, 2021
- Princeton prof warns Apple over CSAM scanning – we’ve been there, don’t
The Register · Aug 23, 2021
- How Line is fighting disinformation without sacrificing privacy
Rest of World · Mar 7, 2021
- BJP way ahead of competition on social media in 2014, says Stanford University study
Hindustan Times · May 17, 2017
- How The BJP Out-Tweeted The Competition To Win The 2014 General Election
The Huffington Post · May 17, 2017
Podcasts
- Intern Insights: Dr. Josh Benaloh with Anunay Kulshrestha and Karan Newatia
Microsoft Research Podcast · Sep 8, 2023
- S5E7: Apple’s #SpyPhone, an Apple App Store Settlement, and the Expansion of Government Facial Recognition Software
DevNews Podcast · Sep 2, 2021
Recent Awards and Grants
- 2024 14th Annual Privacy Papers for Policymakers Award (Best Student Paper)
- 2023 Invited to the 10th Heidelberg Laureate Forum
- 2023 Wallace Memorial Fellowship in Engineering
- 2023 Jane Street Graduate Fellowship (Honorable Mention)
- 2023 Microsoft Azure Cloud Computing Grant
- 2023 Real World Crypto (RWC) Symposium Travel Grant
Teaching
Princeton University
-
Cryptographically Verifiable Elections: Wintersession 2024
-
Economics and Computation: Assistant in Instruction for Spring 2020-21
-
Cryptography: Assistant in Instruction for Fall 2020-21
Stanford University
-
Computer and Network Security: Teaching Assistant for Spring 2017-18 & Spring 2016-17
-
Introduction to Cryptography: Teaching Assistant for Winter 2017-18
-
Analysis of Networks: Teaching Assistant for Fall 2017-18
Service
- Artifact Evaluation USENIX Security 2023, WOOT 2023
- External Review USENIX Security 2021, Asiacrypt 2023
Miscellaneous
- A213544 in the On-Line Encyclopedia of Integer Sequences
- Is there an incumbency effect in elections in India?
Colophon
Jekyll · Jekyll Scholar · Cloudflare Pages · IPA Reader · Space Mono · Font Awesome · Academicons