News

Final results are out

Written on 01.02.24 by Xinyue Shen

Dear all,

The final results of the seminar are available on LSF.

Best,

Xinyue

Next week's seminar is postponed

Written on 24.11.23 by Xinyue Shen

Dear All,

Due to unforeseen circumstances, we need to postpone next week's seminar to the week after next.

The adjusted schedule is as follows.

05.12:
7. Majdi Maalej, Students Parrot Their Teachers: Membership Inference on Model Distillation
8. Elisa Mai, Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets

12.12:
9. Meenu Anil, Multi-step Jailbreaking Privacy Attacks on ChatGPT
10. Tulasi Nayak, "My face, my rules": Enabling Personalized Protection against Unacceptable Face Editing

Best,

Xinyue

Register for the seminar on LSF

Written on 15.11.23 by Xinyue Shen

Dear all,

Please remember to register for this seminar on LSF.

Best,

Xinyue

Schedule of presentations

Written on 02.11.23 (last change on 07.11.23) by Xinyue Shen

Dear all,

Based on your responses, we have arranged a schedule for your presentations (see the end of this message).
Every Tuesday from 2 pm to 4 pm, two presenters will each introduce their assigned paper.

See you next week. :)

Best,
Xinyue

-----------------------------------------------------------------

07.11:
1. Priya George, Detecting Pretraining Data from Large Language Models
2. Deepa Rani Mahato, Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning

14.11:
3. Mrinal Mahindran, Analyzing Leakage of Personally Identifiable Information in Language Models
4. Sana Athar, CodeIPPrompt: Intellectual Property Infringement Assessment of Code Language Models

21.11:
5. Ayce Idil Aytekin, Extracting Training Data from Diffusion Models
6. Sina Mavali, UnGANable: Defending Against GAN-based Face Manipulation

28.11:
7. Majdi Maalej, Students Parrot Their Teachers: Membership Inference on Model Distillation
8. Elisa Mai, Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets

05.12:
9. Meenu Anil, Multi-step Jailbreaking Privacy Attacks on ChatGPT
10. Tulasi Nayak, "My face, my rules": Enabling Personalized Protection against Unacceptable Face Editing

Paper assignment

Written on 31.10.23 (last change on 31.10.23) by Xinyue Shen

Dear all,

The paper list is on the main page of this seminar.

Please send your paper preferences (3 papers ranked from high to low) to xinyue.shen@cispa.de by noon tomorrow. If you plan to present next Tuesday, please also indicate it in your email.

The assignment will be ready by 11 AM Thursday!

Best,

Vera

Kick-off slides available

Written on 31.10.23 by Yang Zhang

Dear all, 

The slides for today's kick-off are available under Information → material.

Best,

Yang

Privacy of Machine Learning

Machine learning has made tremendous progress over the past decade, and data is the key to this success. In many cases, however, machine learning models are trained on sensitive data, e.g., biomedical records, and such data can be leaked by the trained models. In this seminar, we will cover the latest research papers in this direction.
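
As a toy illustration of the kind of leakage the seminar covers, the sketch below mounts a simple confidence-threshold membership inference attack against a deliberately overfitted classifier. This is a minimal sketch assuming scikit-learn and NumPy are available; the dataset, model, and threshold are illustrative choices, not taken from any paper on the list.

    # Toy membership inference via a confidence threshold (illustrative only).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a sensitive dataset; half is used for training
    # (members), half is held out (non-members).
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

    # An unconstrained random forest memorizes its training points, so it
    # serves as an intentionally leaky target model.
    target = RandomForestClassifier(n_estimators=50, random_state=0)
    target.fit(X_mem, y_mem)

    def confidence_on_true_label(model, X, y):
        # Predicted probability the model assigns to each sample's true class.
        probs = model.predict_proba(X)
        return probs[np.arange(len(y)), y]

    conf_mem = confidence_on_true_label(target, X_mem, y_mem)
    conf_non = confidence_on_true_label(target, X_non, y_non)

    # Attack: guess "member" whenever the model's confidence is very high.
    threshold = 0.9  # hypothetical cutoff
    tpr = (conf_mem > threshold).mean()  # members correctly flagged
    fpr = (conf_non > threshold).mean()  # non-members wrongly flagged
    print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")

A true-positive rate far above the false-positive rate means an adversary can tell training members from non-members using only the model's outputs, which is exactly the risk several of the papers below quantify.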

Logistics:

Time: Tuesdays, 2 pm - 4 pm

Location: CISPA Building, room 3.21

TAs:

  • Xinyue Shen (xinyue.shen@cispa.de)
  • Wai Man Si
  • Zeyang Sha
  • Ziqing Yang

List of Papers

  1. Detecting Pretraining Data from Large Language Models
  2. On the Risks of Stealing the Decoding Algorithms of Language Models
3. Extracting Training Data from Large Language Models
  4. Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks
  5. Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
  6. Quantifying Privacy Risks of Prompts in Visual Prompt Learning
  7. Extracting Training Data from Diffusion Models
  8. Multi-step Jailbreaking Privacy Attacks on ChatGPT
  9. Students Parrot Their Teachers: Membership Inference on Model Distillation
  10. Reconstructing Training Data with Informed Adversaries
  11. Membership Leakage in Label-Only Exposures
  12. GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models
  13. Membership Inference Attacks by Exploiting Loss Trajectory
  14. Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning
  15. Tight Auditing of Differentially Private Machine Learning
  16. ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models
  17. Quantifying and Mitigating Privacy Risks of Contrastive Learning
  18. UnGANable: Defending Against GAN-based Face Manipulation
  19. Analyzing Leakage of Personally Identifiable Information in Language Models
  20. CodeIPPrompt: Intellectual Property Infringement Assessment of Code Language Models
  21. "My face, my rules": Enabling Personalized Protection against Unacceptable Face Editing
  22. On the Privacy Risk of In-context Learning