News
Grades are out
Written on 04.02.22 by Rui Wen
Dear all, the grades are out; you can check them on LSF. Thanks again for your participation throughout the semester! Best,
No seminar on 01.11
Written on 26.10.21 by Rui Wen
Dear all, next Monday (01.11) is a holiday, so there will be no seminar. The presentations will start on 08.11; please check the updated schedule. Best,
Schedule of presentations
Written on 26.10.21 (last change on 12.11.21) by Rui Wen
Update: We decided to cancel the presentations on 24.01; please check the new schedule.
Update: Next Monday (01.11) is a holiday, so there will be no seminar; the presentations will therefore start on 08.11.
Dear all, after receiving your responses, we have arranged a schedule for your presentations (see the end of this message). Every Monday from 2 pm to 4 pm, two presenters will introduce their preferred papers. See you next week. :)
-----------------------------------------------------------------
08.11:
1. Xinyue Shen, ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
2. Yiyong Liu, Membership Leakage in Label-Only Exposures

15.11:
3. Xiangyu Dong, Extracting Training Data from Large Language Models
4. Elisa Ebler, Privacy Risks of Securing Machine Learning Models against Adversarial Examples

22.11:
5. Kazi Fozle Azim Rabi, Practical Blind Membership Inference Attack via Differential Comparisons
6. Zeyang Sha, Quantifying and Mitigating Privacy Risks of Contrastive Learning

29.11:
7. Yixin Wu, Overlearning Reveals Sensitive Attributes
8. Ziqing Yang, Auditing Data Provenance in Text-Generation Models

06.12:
9. Tanvi Ajay Gunjal, The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks
10. Minxing Zhang, Membership Inference Attacks Against Recommender Systems

13.12:
11. Yiting Qu, Quantifying Privacy Leakage in Graph Embedding
12. Kavu Maithri Rao, The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks

10.01:
13. Prajvi Saxena, Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning
14. Manuela Ceron, When Machine Unlearning Jeopardizes Privacy

17.01:
15. Elisa Ebler, Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting
16. Ankita Behura, Membership Inference Attacks Against Machine Learning Models
Paper list is online
Written on 25.10.21 by Rui Wen
Dear all, the paper list is now online; you can find it below. Best,
Kick-off slides available here
Written on 25.10.21 by Yang Zhang
https://cms.cispa.saarland/pml22/dl/1/pml22-kickoff.pdf
Privacy of Machine Learning
Machine learning has witnessed tremendous progress during the past decade, and data is the key to this success. However, in many cases machine learning models are trained on sensitive data, e.g., biomedical records, and information about that data can leak from the trained models. In this seminar, we will cover recent research papers in this direction.
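To give a concrete flavor of such leakage, below is a minimal, self-contained sketch of the confidence-thresholding baseline that several papers on the list (e.g., ML-Leaks) take as a starting point: an overfitted model tends to be more confident on its training samples than on unseen ones, so an attacker can guess whether a sample was in the training set by thresholding the model's top predicted probability. The synthetic dataset, the model choice, and the 0.9 threshold are illustrative assumptions, not taken from any of the listed papers.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic classification task; half the data trains the target model,
# the other half plays the role of non-member samples.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

# Deliberately overfit the target model to exaggerate the membership signal.
target = RandomForestClassifier(n_estimators=50, random_state=0)
target.fit(X_mem, y_mem)

# Attack: call a sample a "member" if the model's top class probability
# exceeds a threshold (0.9 is an arbitrary illustrative choice; the papers
# tune this, e.g., with shadow models).
conf_mem = target.predict_proba(X_mem).max(axis=1)
conf_non = target.predict_proba(X_non).max(axis=1)
guesses = np.concatenate([conf_mem, conf_non]) > 0.9
truth = np.concatenate([np.ones(len(conf_mem)), np.zeros(len(conf_non))])
print(f"membership inference accuracy: {(guesses == truth).mean():.2f}")

On this toy setup the attack typically does noticeably better than the 50% random-guess baseline, which is exactly the kind of leakage the papers below quantify and defend against.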
List of Papers
1. ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
2. Membership Inference Attacks Against Machine Learning Models
3. Information Leakage in Embedding Models
4. Overlearning Reveals Sensitive Attributes
5. Auditing Data Provenance in Text-Generation Models
6. Exploiting Unintended Feature Leakage in Collaborative Learning
7. Privacy Risks of Securing Machine Learning Models against Adversarial Examples
8. Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning
9. Machine Learning with Membership Privacy using Adversarial Regularization
10. Membership Leakage in Label-Only Exposures
11. Dataset Inference: Ownership Resolution in Machine Learning
12. Extracting Training Data from Large Language Models
13. The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks
14. Practical Blind Membership Inference Attack via Differential Comparisons
15. MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
16. Auditing Differentially Private Machine Learning: How Private is Private SGD?
17. Inference Attacks Against Graph Neural Networks
18. Quantifying and Mitigating Privacy Risks of Contrastive Learning
19. When Machine Unlearning Jeopardizes Privacy
20. Node-Level Membership Inference Attacks Against Graph Neural Networks
21. GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models
22. Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning
23. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
24. LOGAN: Evaluating Privacy Leakage of Generative Models Using Generative Adversarial Networks
25. Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting
26. The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks
27. Quantifying Privacy Leakage in Graph Embedding
28. Membership Inference Attack on Graph Neural Networks
29. Membership Inference Attacks Against Recommender Systems
30. Deep Learning with Differential Privacy