News
Currently, no news is available.
Privacy of Machine Learning
Machine learning has witnessed tremendous progress during the past decade, and data is the key to this success. However, in many cases, machine learning models are trained on sensitive data, e.g., biomedical records, and such data can be leaked from the trained models, for example via membership inference or training-data extraction attacks. In this seminar, we will discuss recent research papers in this area.
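To make this kind of leakage concrete, here is a minimal, hypothetical sketch of a loss-threshold membership inference attack, one of the attack families several papers below build on: the adversary guesses whether a sample was in the training set by checking how tightly the trained model fits it. The synthetic data, model choice, and threshold rule are illustrative assumptions only, not the method of any specific listed paper.

```python
# Minimal, illustrative sketch of a loss-threshold membership inference attack.
# All data, the model, and the threshold rule are toy assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)

# Synthetic "sensitive" dataset: members are used for training, non-members are not.
X = rng.normal(size=(200, 50))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_mem, y_mem = X[:100], y[:100]        # training (member) samples
X_non, y_non = X[100:], y[100:]        # held-out (non-member) samples

# Train a weakly regularized model on the member samples only, so it overfits a bit.
model = LogisticRegression(C=1e4, max_iter=1000).fit(X_mem, y_mem)

def per_sample_loss(model, X, y):
    """Cross-entropy loss of the model on each individual sample."""
    probs = model.predict_proba(X)
    return np.array([log_loss([label], [p], labels=[0, 1])
                     for label, p in zip(y, probs)])

loss_mem = per_sample_loss(model, X_mem, y_mem)
loss_non = per_sample_loss(model, X_non, y_non)

# Attack rule: predict "member" if the loss is below a threshold,
# since training samples tend to be fitted more tightly than unseen ones.
threshold = np.median(np.concatenate([loss_mem, loss_non]))
tpr = np.mean(loss_mem < threshold)    # members correctly flagged
fpr = np.mean(loss_non < threshold)    # non-members wrongly flagged

print(f"member mean loss:     {loss_mem.mean():.4f}")
print(f"non-member mean loss: {loss_non.mean():.4f}")
print(f"attack TPR: {tpr:.2f}, FPR: {fpr:.2f}")
```

Several of the papers listed below (e.g., SeqMIA, the distillation and diffusion-model membership inference papers) study much stronger variants of this basic signal.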
Logistics:
Time: Tuesday 4pm - 6pm
Location: online via Zoom
TAs:
- Yixin Wu (yixin.wu@cispa.de)
- Xinyue Shen
- Ziqing Yang
List of Papers
- Understanding Data Importance in Machine Learning Attacks: Does Valuable Data Pose Greater Harm?
- Membership Inference Attacks Against In-Context Learning
- SeqMIA: Sequential-Metric Based Membership Inference Attack
- PLeak: Prompt Leaking Attacks against Large Language Model Applications
- Detecting Pretraining Data from Large Language Models
- A General Framework for Data-Use Auditing of ML Models
- Privacy Backdoors: Stealing Data with Corrupted Pretrained Models
- Quantifying Privacy Risks of Prompts in Visual Prompt Learning
- The Janus Interface: How Fine-Tuning in Large Language Models Amplifies the Privacy Risks
- ProPILE: Probing Privacy Leakage in Large Language Models
- Black-box Membership Inference Attacks against Fine-tuned Diffusion Models
- Students Parrot Their Teachers: Membership Inference on Model Distillation
- Scalable Extraction of Training Data from (Production) Language Models
- PrivLM-Bench: A Multi-level Privacy Evaluation Benchmark for Language Models
- "I Don't Know If We're Doing Good. I Don't Know If We're Doing Bad": Investigating How Practitioners Scope, Motivate, and Conduct Privacy Work When Developing AI Products
- PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action
- PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps
- ERASER: Machine Unlearning in MLaaS via an Inference Serving-Aware Approach
- Exploring Privacy and Incentives Considerations in mHealth Technology Adoption: A Case Study of COVID-19 Contact Tracing Apps
- Transferable Embedding Inversion Attack: Uncovering Privacy Risks in Text Embeddings without Model Queries
- Text Embedding Inversion Security for Multilingual Language Models