News

Presentation Schedule

Written on 25.10.24 (last change on 05.11.24) by Yixin Wu

Dear all,

After receiving your responses, we have arranged the presentation schedule below.

Starting November 5, two presenters will introduce their preferred papers every Tuesday from 4 pm to 5 pm.

05.11.2024

Hafsa Zubair, Understanding Data Importance in Machine Learning Attacks: Does Valuable Data Pose Greater Harm?

12.11.2024

Neshmia Hafeez, Quantifying Privacy Risks of Prompts in Visual Prompt Learning

Yashodhara Thoyakkat, PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps

19.11.2024 

Keerat Singh Khurana, SeqMIA: Sequential-Metric Based Membership Inference Attack

Renard Hadikusumo, Students Parrot Their Teachers: Membership Inference on Model Distillation

26.11.2024

Niklas Lohmann, Detecting Pretraining Data from Large Language Models

Mohamed Salman, Scalable Extraction of Training Data from (Production) Language Models

03.12.2024

Shraman Jain, Membership Inference Attacks Against In-Context Learning

Nick Nagel, PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action

10.12.2024

Roshan Pavadarayan Ravichandran, ProPILE: Probing Privacy Leakage in Large Language Models

Anjali Sankar Eswaramangalath, Transferable Embedding Inversion Attack: Uncovering Privacy Risks in Text Embeddings without Model Queries

17.12.2024

Max Thomys, “I Don’t Know If We’re Doing Good. I Don’t Know If We’re Doing Bad”: Investigating How Practitioners Scope, Motivate, and Conduct Privacy Work When Developing AI Products

Thota Vijay Kumar, Exploring Privacy and Incentives Considerations in mHealth Technology Adoption: A Case Study of COVID-19 Contact Tracing Apps

07.01.2025

Wasif Khizar, Privacy Backdoors: Stealing Data with Corrupted Pretrained Models

Best,
Yixin

Paper Assignment

Written on 23.10.24 by Yixin Wu

Dear all,

The paper list is on the main page of this seminar.

Please send your paper preferences (three papers, ranked from most to least preferred) to yixin.wu@cispa.de by Thursday.

The assignment will be ready by 11 AM Friday!

Best,

Yixin

Privacy of Machine Learning

Machine learning has made tremendous progress over the past decade, and data is key to this success. However, machine learning models are often trained on sensitive data, e.g., biomedical records, and such data can be leaked from the trained models. In this seminar, we will cover the latest research papers in this direction.
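
To make "leakage" concrete, below is a minimal sketch of a classic loss-threshold membership inference attack, the simplest of the attack families covered by papers on the list: samples on which the trained model has unusually low loss are guessed to be training members. The PyTorch model, the data tensors, and the threshold here are illustrative assumptions, not code from any of the listed papers.

import torch
import torch.nn.functional as F

@torch.no_grad()
def membership_scores(model, inputs, labels):
    # Per-sample cross-entropy loss under the trained model;
    # lower loss suggests the sample was seen during training.
    model.eval()
    logits = model(inputs)
    return F.cross_entropy(logits, labels, reduction="none")

def infer_membership(model, inputs, labels, threshold):
    # Guess membership by thresholding the loss. In practice the
    # threshold is calibrated on samples with known membership.
    losses = membership_scores(model, inputs, labels)
    return losses < threshold  # True -> predicted training member

Several papers on the list refine this basic idea, e.g., with calibrated thresholds, shadow models, or sequential metrics (SeqMIA).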

Logistics:

Time: Tuesday, 4 pm - 6 pm

Location: online via Zoom

TAs:

  • Yixin Wu (yixin.wu@cispa.de)
  • Xinyue Shen
  • Ziqing Yang

List of Papers

  1. Understanding Data Importance in Machine Learning Attacks: Does Valuable Data Pose Greater Harm?

  2. Membership Inference Attacks Against In-Context Learning

  3. SeqMIA: Sequential-Metric Based Membership Inference Attack

  4. PLeak: Prompt Leaking Attacks against Large Language Model Applications

  5. Detecting Pretraining Data from Large Language Models

  6. A General Framework for Data-Use Auditing of ML Models

  7. Privacy Backdoors: Stealing Data with Corrupted Pretrained Models

  8. Quantifying Privacy Risks of Prompts in Visual Prompt Learning

  9. The Janus Interface: How Fine-Tuning in Large Language Models Amplifies the Privacy Risks

  10. ProPILE: Probing Privacy Leakage in Large Language Models

  11. Black-box Membership Inference Attacks against Fine-tuned Diffusion Models

  12. Students Parrot Their Teachers: Membership Inference on Model Distillation

  13. Scalable Extraction of Training Data from (Production) Language Models

  14. PrivLM-Bench: A Multi-level Privacy Evaluation Benchmark for Language Models

  15. "I Don't Know If We're Doing Good. I Don't Know If We're Doing Bad": Investigating How Practitioners Scope, Motivate, and Conduct Privacy Work When Developing AI Products

  16. PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action

  17. PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps

  18. ERASER: Machine Unlearning in MLaaS via an Inference Serving-Aware Approach

  19. Exploring Privacy and Incentives Considerations in mHealth Technology Adoption: A Case Study of COVID-19 Contact Tracing Apps

  20. Transferable Embedding Inversion Attack: Uncovering Privacy Risks in Text Embeddings without Model Queries

  21. Text Embedding Inversion Security for Multilingual Language Models
