News
Presentation Schedule
Written on 25.10.2024 11:55 by Yixin Wu
Dear all,
Based on your responses, we have arranged the presentation schedule. Starting November 5th, every Tuesday from 4 pm to 5 pm, two presenters will introduce their preferred papers.
05.11.2024
Hafsa Zubair, Understanding Data Importance in Machine Learning Attacks: Does Valuable Data Pose Greater Harm?
12.11.2024
Neshmia Hafeez, Quantifying Privacy Risks of Prompts in Visual Prompt Learning
Yashodhara Thoyakkat, PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps
19.11.2024
Keerat Singh Khurana, SeqMIA: Sequential-Metric Based Membership Inference Attack
Renard Hadikusumo, Students Parrot Their Teachers: Membership Inference on Model Distillation
26.11.2024
Niklas Lohmann, Detecting Pretraining Data from Large Language Models
Mohamed Salman, Scalable Extraction of Training Data from (Production) Language Models
03.12.2024
Shraman Jain, Membership Inference Attacks Against In-Context Learning
Nick Nagel, PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action
10.12.2024
Roshan Pavadarayan Ravichandran, ProPILE: Probing Privacy Leakage in Large Language Models
Anjali Sankar Eswaramangalath, Transferable Embedding Inversion Attack: Uncovering Privacy Risks in Text Embeddings without Model Queries
17.12.2024
Max Thomys, “I Don’t Know If We’re Doing Good. I Don’t Know If We’re Doing Bad”: Investigating How Practitioners Scope, Motivate, and Conduct Privacy Work When Developing AI Products
Thota Vijay Kumar, Exploring Privacy and Incentives Considerations in mHealth Technology Adoption: A Case Study of COVID-19 Contact Tracing Apps
07.01.2025
Wasif Khizar, Privacy Backdoors: Stealing Data with Corrupted Pretrained Models
Best,
Yixin