News

Currently, no news is available.

Privacy of Machine Learning

Machine learning has witnessed tremendous progress over the past decade, and data is the key to this success. However, models are often trained on sensitive data, e.g., biomedical records, and such data can leak from the trained models. In this seminar, we will cover the latest research papers in this direction.
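To make the threat concrete, below is a minimal, self-contained Python sketch (illustrative only, not taken from any of the listed papers) of a loss-threshold membership inference attack: the attacker guesses that samples on which the trained model has unusually low loss were part of its training set. The model, the synthetic data, and the threshold are all assumptions for illustration.

    # Minimal sketch of a loss-threshold membership inference attack.
    # All data and the model below are synthetic/illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic "sensitive" data: members (training set) vs. non-members.
    X_members = rng.normal(0, 1, (200, 10))
    y_members = (X_members[:, 0] > 0).astype(int)
    X_nonmembers = rng.normal(0, 1, (200, 10))
    y_nonmembers = (X_nonmembers[:, 0] > 0).astype(int)

    model = LogisticRegression().fit(X_members, y_members)

    def per_sample_loss(model, X, y):
        # Cross-entropy loss of each sample under the trained model.
        p = model.predict_proba(X)
        return -np.log(np.clip(p[np.arange(len(y)), y], 1e-12, None))

    loss_in = per_sample_loss(model, X_members, y_members)
    loss_out = per_sample_loss(model, X_nonmembers, y_nonmembers)

    # Attack: predict "member" whenever the loss falls below a threshold.
    threshold = np.median(np.concatenate([loss_in, loss_out]))
    accuracy = (np.mean(loss_in < threshold)
                + np.mean(loss_out >= threshold)) / 2
    print(f"membership inference accuracy: {accuracy:.2f}")  # > 0.5 indicates leakage

Several of the papers below (e.g., SeqMIA and the works on fine-tuned diffusion models and model distillation) study far stronger variants of this basic idea.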


Logistics:

Time: Tuesday 4pm - 6pm

Location: online via Zoom

TAs:

  • Yixin Wu (yixin.wu@cispa.de)
  • Xinyue Shen
  • Ziqing Yang

List of Papers

  1. Understanding Data Importance in Machine Learning Attacks: Does Valuable Data Pose Greater Harm?

  2. Membership Inference Attacks Against In-Context Learning

  3. SeqMIA: Sequential-Metric Based Membership Inference Attack

  4. PLeak: Prompt Leaking Attacks against Large Language Model Applications

  5. Detecting Pretraining Data from Large Language Models

  6. A General Framework for Data-Use Auditing of ML Models

  7. Privacy Backdoors: Stealing Data with Corrupted Pretrained Models

  8. Quantifying Privacy Risks of Prompts in Visual Prompt Learning

  9. The Janus Interface: How Fine-Tuning in Large Language Models Amplifies the Privacy Risks

  10. ProPILE: Probing Privacy Leakage in Large Language Models

  11. Black-box Membership Inference Attacks against Fine-tuned Diffusion Models

  12. Students Parrot Their Teachers: Membership Inference on Model Distillation

  13. Scalable Extraction of Training Data from (Production) Language Models

  14. PrivLM-Bench: A Multi-level Privacy Evaluation Benchmark for Language Models

  15. "I Don't Know If We're Doing Good. I Don't Know If We're Doing Bad": Investigating How Practitioners Scope, Motivate, and Conduct Privacy Work When Developing AI Products

  16. PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action

  17. PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps

  18. ERASER: Machine Unlearning in MLaaS via an Inference Serving-Aware Approach

  19. Exploring Privacy and Incentives Considerations in mHealth Technology Adoption: A Case Study of COVID-19 Contact Tracing Apps

  20. Transferable Embedding Inversion Attack: Uncovering Privacy Risks in Text Embeddings without Model Queries

  21. Text Embedding Inversion Security for Multilingual Language Models
