
Differential Privacy in the Era of Foundation Models

Abstract:

In recent years, foundation models such as GPT, LLaMA, DALL-E, and Stable Diffusion have transformed the field of machine learning, particularly in large-scale tasks like natural language processing and computer vision. Trained on vast datasets, these models can transfer their learned knowledge to a wide range of applications, making them remarkably powerful and versatile. However, this reliance on massive amounts of training data also raises significant privacy concerns when sensitive data is involved.

This seminar will explore how differential privacy (DP), the leading standard for privacy protection, can be applied to foundation models to mitigate these risks. DP ensures that the presence or absence of any individual data point in a model's training data has only a bounded effect on the model's outputs, providing a privacy safeguard even in the most data-intensive models. We will cover the fundamentals of both DP and foundation models, study how they intersect, and explore strategies for integrating privacy guarantees into these cutting-edge systems. Key topics will include the theory behind DP, practical privacy-preserving mechanisms, and case studies of DP implementations in advanced foundation models.
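To make the idea concrete: one widely used mechanism for training models with DP guarantees is differentially private stochastic gradient descent (DP-SGD), which bounds each example's influence by clipping per-example gradients and then adds calibrated Gaussian noise to their average. The following is a minimal plain-Python sketch of one such update step; the function name, parameters, and values are illustrative only and are not taken from the seminar materials.

```python
import math
import random

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr):
    """One illustrative DP-SGD update on a list of parameters.

    Each per-example gradient is clipped to L2 norm at most `clip_norm`
    (bounding the sensitivity of the averaged gradient), then Gaussian
    noise proportional to clip_norm * noise_multiplier is added.
    """
    # Clip each per-example gradient to bound any single example's influence
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        factor = min(1.0, clip_norm / max(norm, 1e-12))
        clipped.append([x * factor for x in g])

    # Average the clipped gradients and add calibrated Gaussian noise
    n = len(clipped)
    noisy_avg = [
        sum(g[i] for g in clipped) / n
        + random.gauss(0.0, clip_norm * noise_multiplier) / n
        for i in range(len(params))
    ]

    # Standard gradient-descent update with the noisy average
    return [p - lr * g for p, g in zip(params, noisy_avg)]
```

The privacy guarantee comes from the combination of clipping (which caps sensitivity) and noise (which masks any one example's contribution); production implementations additionally track the cumulative privacy loss across training steps.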

Requirements and Deliverables:

This seminar is open to senior Bachelor's, Master's, and doctoral students. Ideally, students should have a solid mathematical background from the base lectures and a strong interest in deep learning.

Each student will present a topic during the seminar hours in the form of an oral presentation. In addition, each student will read the relevant papers for the other students’ presentations, and hand in a seminar paper at the end of the semester.

Time:

The seminar will take place on Wednesdays, 4 PM to 6 PM, in the CISPA building. The exact room will be announced here soon.
