News
Presentation Schedule
Written on 25.10.24 (last change on 05.11.24) by Yixin Wu

Dear all,

After receiving your responses, we have arranged a schedule for the presentations. Starting from November 5th, every Tuesday from 4 pm to 5 pm, two presenters will introduce their preferred papers.

05.11.2024: Hafsa Zubair, Understanding Data Importance in Machine Learning Attacks: Does Valuable Data Pose Greater Harm?
12.11.2024: Neshmia Hafeez, Quantifying Privacy Risks of Prompts in Visual Prompt Learning; Yashodhara Thoyakkat, PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps
19.11.2024: Keerat Singh Khurana, SeqMIA: Sequential-Metric Based Membership Inference Attack; Renard Hadikusumo, Students Parrot Their Teachers: Membership Inference on Model Distillation
26.11.2024: Niklas Lohmann, Detecting Pretraining Data from Large Language Models; Mohamed Salman, Scalable Extraction of Training Data from (Production) Language Models
03.12.2024: Shraman Jain, Membership Inference Attacks Against In-Context Learning; Nick Nagel, PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action
10.12.2024: Roshan Pavadarayan Ravichandran, ProPILE: Probing Privacy Leakage in Large Language Models; Anjali Sankar Eswaramangalath, Transferable Embedding Inversion Attack: Uncovering Privacy Risks in Text Embeddings without Model Queries
17.12.2024: Max Thomys, "I Don't Know If We're Doing Good. I Don't Know If We're Doing Bad": Investigating How Practitioners Scope, Motivate, and Conduct Privacy Work When Developing AI Products; Thota Vijay Kumar, Exploring Privacy and Incentives Considerations in mHealth Technology Adoption: A Case Study of COVID-19 Contact Tracing Apps
07.01.2025: Wasif Khizar, Privacy Backdoors: Stealing Data with Corrupted Pretrained Models

Best,
Yixin
Paper Assignment
Written on 23.10.24 by Yixin Wu

Dear all,

The paper list is on the main page of this seminar. Please send your paper preferences (3 papers, ranked from high to low) to yixin.wu@cispa.de by Thursday. The assignment will be ready by 11 AM Friday!

Best,
Yixin
Privacy of Machine Learning
Machine learning has witnessed tremendous progress during the past decade, and data is key to this success. However, in many cases, machine learning models are trained on sensitive data, e.g., biomedical records, and such data can be leaked from the trained models. In this seminar, we will cover the latest research papers in this direction.
Logistics:
Time: Tuesday 4pm - 6pm
Location: online via Zoom
TAs:
- Yixin Wu (yixin.wu@cispa.de)
- Xinyue Shen
- Ziqing Yang
List of Papers
- Understanding Data Importance in Machine Learning Attacks: Does Valuable Data Pose Greater Harm?
- Membership Inference Attacks Against In-Context Learning
- SeqMIA: Sequential-Metric Based Membership Inference Attack
- PLeak: Prompt Leaking Attacks against Large Language Model Applications
- Detecting Pretraining Data from Large Language Models
- A General Framework for Data-Use Auditing of ML Models
- Privacy Backdoors: Stealing Data with Corrupted Pretrained Models
- Quantifying Privacy Risks of Prompts in Visual Prompt Learning
- The Janus Interface: How Fine-Tuning in Large Language Models Amplifies the Privacy Risks
- ProPILE: Probing Privacy Leakage in Large Language Models
- Black-box Membership Inference Attacks against Fine-tuned Diffusion Models
- Students Parrot Their Teachers: Membership Inference on Model Distillation
- Scalable Extraction of Training Data from (Production) Language Models
- PrivLM-Bench: A Multi-level Privacy Evaluation Benchmark for Language Models
- "I Don't Know If We're Doing Good. I Don't Know If We're Doing Bad": Investigating How Practitioners Scope, Motivate, and Conduct Privacy Work When Developing AI Products
- PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action
- PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps
- ERASER: Machine Unlearning in MLaaS via an Inference Serving-Aware Approach
- Exploring Privacy and Incentives Considerations in mHealth Technology Adoption: A Case Study of COVID-19 Contact Tracing Apps
- Transferable Embedding Inversion Attack: Uncovering Privacy Risks in Text Embeddings without Model Queries
- Text Embedding Inversion Security for Multilingual Language Models