News
Currently, no news is available.
Differential Privacy: Mathematical Foundations and Applications in Machine Learning
Course Description
As machine learning (ML) becomes increasingly prevalent in sensitive areas such as healthcare and finance, it is crucial to ensure privacy. This seminar is centered around the mathematical framework of differential privacy, the current gold standard for privacy protection. In simple terms, differential privacy ensures that the outcome of a data analysis remains roughly the same when a single data point in the underlying private dataset changes.
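Formally (this is the standard (ε, δ)-DP definition from Dwork and Roth, which Topic 1 covers in detail), a randomized mechanism M is (ε, δ)-differentially private if, for all datasets D and D′ that differ in a single record and for all measurable sets of outcomes S,

\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S] + \delta,

where δ = 0 recovers the original pure ε-DP guarantee.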
Throughout the seminar, we will delve into the core principles of differential privacy, dissecting its precise definition and exploring the relaxations that allow for adaptable privacy guarantees. We will also cover the practical side, examining the mechanisms and strategies used to achieve differentially private ML. Furthermore, we will study different mechanisms for privacy accounting, a critical tool for quantifying and validating the level of privacy that a differentially private mechanism provides. In the concluding part of the seminar, we will take a closer look at the practical application of differential privacy to state-of-the-art foundation models, such as large language models. Understanding how to apply differential privacy to these cutting-edge models is crucial for addressing privacy concerns in advanced ML systems.
Requirements: This seminar is open to senior Bachelor's, Master's, and doctoral students. Ideally, students should have a solid mathematical background from the foundational lectures, and at least a basic understanding of deep learning.
Each student will present a topic during the seminar hours in the form of an oral presentation. In addition, each student will read the relevant papers for the other students’ presentations, and hand in a seminar paper at the end of the semester.
Time and Location
Time: Thursdays, 12 PM - 2 PM
Location: CISPA, Stuhlsatzenhaus 5, 66123 Saarbrücken. Usual location: Conference Room 0.07 (only on 16.11.2023: C0 - 3.21, due to a booking conflict)
First Meeting: November 2nd
ATTENTION: On January 25th, we are meeting in C0, Room 0.01!
Timeline
2.11.: Introduction and Presentation of Topics. Afterwards: Students submit their topic preferences by 5.11., 8 PM
9.11.: No seminar (topic assignments will be sent out via email by 7.11. at the latest)
Afterwards: Weekly Meetings
Topics and List of Papers
Reading List per Topic
Every student is supposed to read all the papers ahead of the respective presentations to be able to actively participate in the discussions.
(1) Differential Privacy: Background and Mathematics (Xu, Yuelin)
- Dwork et al., Differential Privacy (ε-DP)
- Dwork and Roth, The Algorithmic Foundations of Differential Privacy (Chapter 2: basic terms, including (ε, δ)-DP; Chapter 3.3: the Laplace mechanism, illustrated in the short sketch below)
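To make the Laplace mechanism from Topic 1 concrete, here is a minimal illustrative sketch in Python/NumPy; the function name laplace_mechanism and the toy counting query are our own choices for illustration, not code from the assigned readings:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise of scale sensitivity / epsilon,
    which yields epsilon-DP for a query with the given L1 sensitivity."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privately release a counting query. Its sensitivity is 1,
# because adding or removing one person changes the count by at most 1.
ages = np.array([23, 35, 41, 29, 52, 60, 31])
true_count = np.sum(ages > 30)
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(true_count, private_count)
```

The noise scale sensitivity/ε is exactly the scale required by the analysis in Chapter 3.3 of Dwork and Roth to obtain ε-DP.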
(2) Differentially Private Stochastic Gradient Descent (DP-SGD) and its Privacy Analysis (Ansar, Ayesha)
- Dwork and Roth, The Algorithmic Foundations of Differential Privacy (Appendix A, the Gaussian Mechanism)
- Abadi et al., Deep Learning with Differential Privacy (algorithm, privacy amplification by subsampling, moments accountant; a minimal sketch of one DP-SGD step follows below)
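As a companion to Topic 2, the following is a minimal, framework-free sketch of a single DP-SGD step (per-example gradient clipping followed by Gaussian noise), loosely following the algorithm of Abadi et al.; the function names and the toy linear-model gradients are illustrative assumptions rather than code from the paper:

```python
import numpy as np

def per_example_gradients(w, X, y):
    # Toy example: squared-error gradients of a linear model,
    # one gradient vector per training example (shape: n x d).
    residuals = X @ w - y
    return residuals[:, None] * X

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD step: clip each per-example gradient to L2 norm clip_norm,
    sum, add Gaussian noise with std noise_multiplier * clip_norm,
    average over the batch, then take a gradient step."""
    rng = rng or np.random.default_rng()
    grads = per_example_gradients(w, X, y)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads / np.maximum(1.0, norms / clip_norm)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    noisy_grad = (clipped.sum(axis=0) + noise) / len(X)
    return w - lr * noisy_grad
```

The noise multiplier, the sampling rate, and the number of steps are then fed into the moments accountant (covered in the same session) to obtain the overall (ε, δ) guarantee.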
(3) Rényi Differential Privacy and its Application to Subsampled Gaussian Mechanisms (Sanyal, Aniket)
- Mironov, Rényi Differential Privacy
- Wang et al., Subsampled Rényi Differential Privacy and Analytical Moments Accountant
- Mironov et al., Rényi Differential Privacy of the Sampled Gaussian Mechanism (especially emphasize the connection to DP-SGD; the standard RDP-to-(ε, δ)-DP conversion is recalled below)
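As background for Topic 3, two facts from Mironov's paper are worth having in mind: a mechanism M is (α, ε)-RDP if, for all neighbouring datasets D and D′, the Rényi divergence of order α satisfies

D_{\alpha}\big(M(D) \,\|\, M(D')\big) \;\le\; \varepsilon,

and any (α, ε)-RDP mechanism is also (\varepsilon + \log(1/\delta)/(\alpha - 1), δ)-DP for every δ ∈ (0, 1), which is the conversion used when reporting DP-SGD guarantees in the usual (ε, δ) form.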
(4) The Private Aggregation of Teacher Ensembles (PATE) for Machine Learning with Privacy Guarantees (Meintz, Michel)
- Papernot et al., Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data (algorithm, with particular emphasis on Section 3.3, Data-dependent Privacy Analysis)
- Papernot et al., Scalable Private Learning with PATE (special emphasis on the Confident-GNMax algorithm and its benefits for privacy)
(5) Heterogeneous/Individualized Differential Privacy (Kulkarni, Nupur)
- Boenisch et al., Individualized PATE: Differentially Private Machine Learning with Individual Privacy Guarantees (especially highlight algorithmic differences and resulting privacy implications in contrast to PATE)
- Boenisch et al., Have it your way: Individualized Privacy Assignment for DP-SGD (especially highlight algorithmic differences and resulting privacy implications in contrast to DPSGD)
- Feldman and Zrnic, Individual Privacy Accounting via a Rényi Filter
(6) Privacy Auditing with Black-box Access (Grace Bella, Djouka Maka)
- Jagielski et al., Auditing Differentially Private Machine Learning: How Private is Private SGD?
- Tramer et al., Debugging Differential Privacy: A Case Study for Privacy Auditing
- Nasr et al., Tight Auditing of Differentially Private Machine Learning
(7) Differential Privacy for Large Language Models (Chen, Zeyuan)
- Li et al., Large Language Models Can Be Strong Differentially Private Learners
- Yu et al., Differentially Private Fine-tuning of Language Models
(8) Memorization (Borisov, Daniel)
- Feldman, Does Learning Require Memorization? A Short Tale About a Long Tail
(9) Privacy Attacks (Muller, Léonie)
- Shokri et al., Membership Inference Attacks Against Machine Learning Models
- Carlini et al., Membership Inference Attacks from First Principles
- Carlini et al., Extracting Training Data from Diffusion Models
- Debenedetti et al., Privacy Side Channels in Machine Learning Systems
Peer Groups
Peer Group | Name |
1 | Ansar, Ayesha |
1 | Meintz, Michel |
2 | Borisov, Daniel |
2 | Grace Bella, Djouka Maka |
3 | Xu, Yuelin |
3 | Muller, Léonie |
4 | Chen, Zeyuan |
4 | Sanyal, Aniket |
4 | Kulkarni, Nupur |