News

Grades are out

Written on 04.02.22 by Rui Wen

Dear all,

The grades are out; you can check them on LSF.

Thanks again for your participation throughout the semester!

Best,
Rui

No seminar on 01.11

Written on 26.10.21 by Rui Wen

Dear all,

Next Monday (01.11) is a holiday, so there will be no seminar.

The presentations will start on 08.11; please check the updated schedule.
Enjoy your holiday :)

Best,
Rui

Schedule of presentations

Written on 26.10.21 (last change on 12.11.21) by Rui Wen

Update: We have decided to cancel the presentation on 24.01; please check the new schedule.

-------------------------------------------------------------

 

Dear all,

After receiving your responses, we have arranged the presentation schedule (see the end of this message).

Update: Next Monday is a holiday, so there will be no seminar.

Thus, the presentations will start on 08.11. Every Monday from 2 pm to 4 pm, two presenters will introduce their preferred papers.

See you next week. :)

Best,
Rui

 

-----------------------------------------------------------------

08.11:

1. Xinyue Shen, ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

2. Yiyong Liu, Membership Leakage in Label-Only Exposures

 

15.11:

3. Xiangyu Dong, Extracting Training Data from Large Language Models

4. Elisa Ebler, Privacy Risks of Securing Machine Learning Models against Adversarial Examples

 

22.11:

5. Kazi Fozle Azim Rabi, Practical Blind Membership Inference Attack via Differential Comparisons

6. Zeyang Sha, Quantifying and Mitigating Privacy Risks of Contrastive Learning

 

29.11:

7. Yixin Wu, Overlearning Reveals Sensitive Attributes

8. Ziqing Yang, Auditing Data Provenance in Text-Generation Models

 

06.12:

9. Tanvi Ajay Gunjal, The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks

10. Minxing Zhang, Membership Inference Attacks Against Recommender Systems

 

13.12:

11. Yiting Qu, Quantifying Privacy Leakage in Graph Embedding

12. Kavu Maithri Rao, The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks

 

10.01:

13. Prajvi Saxena, Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning

14. Manuela Ceron, When Machine Unlearning Jeopardizes Privacy

 

17.01:

15. Elisa Ebler, Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting

16. Ankita Behura, Membership Inference Attacks Against Machine Learning Models

 


 

Paper list is online

Written on 25.10.21 by Rui Wen

Dear all,


The paper list is online. Please select your favorite paper and mark its index in the Doodle poll (https://doodle.com/poll/en9y69hepds8wg9h?utm_source=poll&utm_medium=link) by tomorrow at 4 pm. One important change: no one needs to choose three papers anymore (as Yang mentioned half an hour ago); instead, we will use the following rules.
For those participating as pro-seminar students (Wang, Ye; Ebler, Elisa; and O Keefe, Shannon Angela), please select the two papers that you want to present. All other students, please select one paper. Papers will be assigned on a first-come, first-served basis. The selection deadline is Tuesday (26.10) at 4 pm.
If you want to present your paper next week (01.11), you will get extra points; please contact me (rui.wen@cispa.de) if you are interested.


Best,
Rui

Kick-off slides available here

Written on 25.10.21 by Yang Zhang

https://cms.cispa.saarland/pml22/dl/1/pml22-kickoff.pdf

Privacy of Machine Learning

Machine learning has witnessed tremendous progress during the past decade, and data is the key to this success. However, in many cases machine learning models are trained on sensitive data, e.g., biomedical records, and such data can be leaked from the trained models. In this seminar, we will cover the newest research papers in this direction.
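
To make this concrete: below is a minimal, hypothetical sketch of a confidence-thresholding membership inference attack, the kind of leakage studied by several papers on the list (e.g., ML-Leaks and "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting"). An overfit model tends to be noticeably more confident on its own training points, so an attacker with only black-box access to prediction confidences can guess whether a given record was in the training set. The dataset, model, and threshold here are illustrative assumptions, not taken from any particular paper.

# Illustrative membership inference sketch (hypothetical setup): an overfit
# model is more confident on its own training points, and an attacker with
# black-box access to prediction confidences can exploit that gap.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
# "Members" are the target model's training points; "non-members" are held out.
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

# Target model; the attacker only queries its prediction probabilities.
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_mem, y_mem)

conf_mem = target.predict_proba(X_mem).max(axis=1)  # confidence on members
conf_non = target.predict_proba(X_non).max(axis=1)  # confidence on non-members

# Attack rule: guess "member" whenever confidence exceeds a threshold.
# The threshold is an assumed value; real attacks tune it, e.g. via shadow models.
threshold = 0.9
tpr = (conf_mem > threshold).mean()  # members correctly identified
fpr = (conf_non > threshold).mean()  # non-members falsely flagged
print(f"attack TPR = {tpr:.2f}, FPR = {fpr:.2f}")  # TPR well above FPR => leakage

On such an easy dataset the confidence gap, and hence the TPR/FPR gap, may be modest; the papers below study when this leakage becomes severe and how it can be mitigated.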

 

 

List of Papers

 

1. ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

2. Membership Inference Attacks Against Machine Learning Models

3. Information Leakage in Embedding Models

4. Overlearning Reveals Sensitive Attributes

5. Auditing Data Provenance in Text-Generation Models

6. Exploiting Unintended Feature Leakage in Collaborative Learning

7. Privacy Risks of Securing Machine Learning Models against Adversarial Examples

8. Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning 

9. Machine Learning with Membership Privacy using Adversarial Regularization

10. Membership Leakage in Label-Only Exposures

11. Dataset Inference: Ownership Resolution in Machine Learning

12. Extracting Training Data from Large Language Models

13. The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks

14. Practical Blind Membership Inference Attack via Differential Comparisons

15. MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples

16. Auditing Differentially Private Machine Learning: How Private is Private SGD

17. Inference Attacks Against Graph Neural Networks

18. Quantifying and Mitigating Privacy Risks of Contrastive Learning

19. When Machine Unlearning Jeopardizes Privacy

20. Node-Level Membership Inference Attacks Against Graph Neural Networks

21. GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models

22. Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning

23. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures

24. LOGAN: Evaluating Privacy Leakage of Generative Models Using Generative Adversarial Networks

25. Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting

26. The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks

27. Quantifying Privacy Leakage in Graph Embedding

28. Membership Inference Attack on Graph Neural Networks

29. Membership Inference Attacks Against Recommender Systems

30. Deep Learning with Differential Privacy



 
