News
Assignment 3 (Written on 23.06.25 by Adam Dziedzic)
Hello Everyone, Assignment 3 has been released. Please find the instructions and the example code in this GitHub repository: https://github.com/sprintml/tml_2025_task3 With kind regards, Adam and Franziska
Points for the Midterm have been released (Written on 11.06.25 by Franziska Boenisch)
Dear everyone, The midterms are graded, and you should be able to see your points on your personal page. We will offer exam viewing both this week and next week at the end of the weekly lecture in the lecture hall. Looking forward to seeing you there, Franziska and Adam
Assignment 2 (Written on 10.06.25 by Adam Dziedzic)
Hello Everyone, Assignment 2 has been released. Please find the instructions and the example code in this GitHub repository: https://github.com/sprintml/tml_2025_task2 All dates on the main page have been updated accordingly. With kind regards, Adam and Franziska
Points for Assignment 1 are in! (Written on 10.06.25 by Franziska Boenisch)
Hey everyone, Assignment 1 has been reviewed, and you can access both your points and personal feedback on your personal status page here on CMS. If you have individual questions, please feel free to talk to us directly after the lecture tomorrow. Kind regards, The TML Team
June 18th: Both Lectures on Collaborative Learning (Written on 09.06.25 by Franziska Boenisch)
Hello everyone, Since we pushed the Robustness lecture to next week to reduce the preparation load before the midterm, we will offer both Collaborative Learning lectures on June 18th. Please submit your questions for them beforehand. All dates on the main page have been updated accordingly. Kind regards
Exam Pre-Registration ONLY for Students of Image Processing and Computer Vision (Written on 09.06.25 by Franziska Boenisch)
Dear everyone, This message is only relevant for you if you are enrolled in Image Processing and Computer Vision and your exam in that lecture overlaps with our exam on the 30th of July. After multiple discussions, the fairest solution we came up with, which also causes minimal impact on the other students, is to allow you to write the same exam as everyone else on July 30th, but from 5:30-7PM. We will book an extra room for you. To ensure that we book a room of the right size, you will have to complete a registration ("Registrations" -> "Exam at 5:30PM for People with Overlapping Exam of Image Processing and Computer Vision") by June 16th. We will only provide this option to people who will take the Image Processing and Computer Vision exam on that day and who are registered on time. Kind regards, Franziska and Adam
Trustworthy Machine Learning
Organization
Lecturers: Adam Dziedzic and Franziska Boenisch
Time: Wednesdays from 14:00 to 16:00.
Location: CISPA Building C0 (Stuhlsatzenhaus 5, 66123 Saarbrücken), Lecture Hall, Ground Floor (Room 0.05)
Starting Date: April 16th, 2025
Description: The deployment of machine learning applications in real-world systems necessitates methods to ensure their trustworthiness. This course explores the different aspects of trustworthy machine learning, including Privacy, Collaborative Learning, Model Confidentiality, Robustness, Fairness and Bias, Explainability, Security, and Governance.
Learning Objectives: The objective of this course is to provide attendees with a comprehensive understanding of trustworthy machine learning, covering key aspects such as privacy, robustness, fairness, explainability, security, and governance. Participants will gain practical skills in identifying and mitigating risks associated with machine learning models, including privacy attacks, model theft, bias, and adversarial threats. By the end of the course, attendees are expected to have enhanced their knowledge of cutting-edge defense strategies, developed practical skills in securing machine learning systems, and deepened their understanding of the ethical and societal implications of deploying these models in real-world scenarios.
What do you have to do weekly?
This is a flipped classroom lecture. What does this mean?
You have to watch the lecture videos a week before the lecture. For example, if we discuss "Lecture 1, Privacy" in class on April 30th, you should watch the video on April 23rd. If you then have questions regarding the lecture, you can post them in the forum. If you want your question to be considered for an answer in the in-person session (in this example, on April 30th), you should ask it by the Friday before, i.e., in this example by April 25th. Questions regarding the course content will not be answered individually by the instructors in the forum; they will be answered during the in-person lecture hours. The in-person lecture hours are not recorded. Hence, if you have questions, you need to post them on time and attend the Q&A session.
Additionally, we will post ungraded theoretical questions regarding the individual lectures. They give you an impression of what the midterm and exam will look like. We highly recommend solving them. Solutions can also be discussed in the in-person Q&A session.
What are our topics?
Privacy: We will analyze the landscape of privacy attacks against machine learning models and study the means to prevent these attacks.
Collaborative Learning: We will analyze the risks to trustworthy machine learning that arise in collaborative machine learning setups and look into their mitigations.
Model Confidentiality: We will see that machine learning models can easily be stolen through different methods, such as directly copying the model or its private training data, or extracting a model exposed via a public or private API. We will analyze different attack strategies for stealing models and the state-of-the-art defense methods.
Robustness: We will learn about different facets of robustness, such as robustness to out-of-distribution samples, natural noise present in the input data, and adversarial examples, where attackers introduce imperceptible changes to the inputs of ML models to fool their predictions.
Fairness and Bias: We will scrutinize the behavior of ML models on different subgroups of the training data. We will assess the models' responses to inputs with different attributes, uncover potential causes of unfair or biased model responses, and discuss current mitigations.
Explainability: We will address the challenges that arise from machine learning models’ black-box behavior and look into techniques to explain predictive behavior.
Security and Governance: Machine learning applications are usually not isolated but integrated into larger systems within a society, with its respective norms and values. We will study the security risks that can arise from deploying machine learning applications and how governance approaches can mitigate these risks.
Throughout the course, we will discuss outstanding challenges and future research directions to make machine learning more robust, private, and trustworthy.
Assignments (with tentative dates):
The course entails 4 practical graded assignments based on implementing the concepts studied in the lecture.
Assignments must be submitted in groups of two. If you do not have a partner yet, you can use the forum to find one.
- Privacy: Implement a membership inference attack and achieve the highest attack success. We give you a model and a list of data points; your task is to determine for each point whether it is a member of the training set (yes/no). A minimal sketch follows this list. --> Due 28.5.
- Model extraction: We will offer a machine-learning model over an API, and you will extract its behavior. You upload a PyTorch model, and we check how close its predictions are to those of the victim model. A sketch follows this list. --> Due 25.6.
- Robustness: Train a model that withstands as many of the adversarial examples we generate as possible. You submit a PyTorch model, we load it, and we run the attack. The goal is a model with the highest adversarial and clean accuracy. A sketch follows this list. --> Due 9.7.
- Fairness: Train a classifier with the highest demographic parity. You submit a PyTorch model, and we assess how fair it is. A sketch follows this list. --> Due 23.7.
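To make the privacy task concrete, a common baseline is a loss-threshold membership inference attack: training members tend to have lower loss under the target model than non-members. The following is a minimal sketch under that assumption; `model`, `points`, and `threshold` are hypothetical placeholders, and the actual interface is defined in the task repository.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def loss_threshold_mia(model, points, threshold=0.5):
    """Predict membership (1 = member) for each (x, y) pair based on the
    model's cross-entropy loss: low loss suggests a training member.
    `model`, `points`, and `threshold` are illustrative placeholders."""
    model.eval()
    predictions = []
    for x, y in points:
        logits = model(x.unsqueeze(0))
        loss = F.cross_entropy(logits, torch.tensor([y]))
        predictions.append(int(loss.item() < threshold))
    return predictions
```

In practice, the threshold is usually calibrated on held-out data or replaced by a learned attack model, which is where the competition for the highest attack success happens.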
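For the model extraction task, a standard approach is to query the victim and train a local student model to imitate the returned outputs (distillation). A minimal sketch, assuming a hypothetical `query_victim` function that returns a probability vector per input; the real API is described in the task repository.

```python
import torch
import torch.nn.functional as F

def extract(student, optimizer, loader, query_victim, epochs=5):
    """Train `student` to imitate the victim's outputs on queried inputs.
    `query_victim(x)` is a hypothetical stand-in for the task's API and is
    assumed to return one probability vector per input."""
    student.train()
    for _ in range(epochs):
        for x in loader:  # unlabeled query inputs
            with torch.no_grad():
                victim_probs = query_victim(x)
            student_log_probs = F.log_softmax(student(x), dim=1)
            # KL divergence between victim and student output distributions
            loss = F.kl_div(student_log_probs, victim_probs,
                            reduction="batchmean")
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student
```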
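For the robustness task, a common baseline is adversarial training: craft adversarial examples on the fly and include them in the training loss. A minimal sketch using FGSM (a single signed-gradient step); the attack we run for grading is not specified here, and all names below are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=8 / 255):
    """Generate FGSM adversarial examples: one signed-gradient step,
    clamped back to the valid [0, 1] input range."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

def adversarial_training_epoch(model, loader, optimizer):
    """Train on a 50/50 mix of clean and adversarial batches, trading off
    clean and adversarial accuracy."""
    model.train()
    for x, y in loader:
        x_adv = fgsm(model, x, y)
        loss = 0.5 * F.cross_entropy(model(x), y) \
             + 0.5 * F.cross_entropy(model(x_adv), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Stronger multi-step attacks (e.g., PGD) during training generally yield more robust models, at higher training cost.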
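For the fairness task, demographic parity asks that the rate of positive predictions be equal across groups defined by a sensitive attribute. A minimal sketch of how such a gap can be measured; the exact grading metric is defined by the task, and the tensors below are hypothetical.

```python
import torch

def demographic_parity_gap(preds, groups):
    """Absolute gap in positive-prediction rates between two groups
    (binary sensitive attribute). A gap of 0 means perfect demographic
    parity. `preds` are binary predictions as a 0/1 tensor."""
    rate_g0 = preds[groups == 0].float().mean()
    rate_g1 = preds[groups == 1].float().mean()
    return (rate_g0 - rate_g1).abs().item()

# Hypothetical usage:
# preds  = torch.tensor([1, 0, 1, 1, 0, 1])
# groups = torch.tensor([0, 0, 0, 1, 1, 1])
# demographic_parity_gap(preds, groups)  # 0.0 in this toy case
```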
Submission of Assignments:
Please submit the assignments via CMS as a ZIP file. We only need the code and the README/report; models and other artifacts should not be uploaded.
Exam and Grading
Exam format: written.
Tentative Lecture Dates
The videos for each lecture are linked at the respective lecture date. To access the full playlist, visit: https://www.youtube.com/playlist?list=PLNfU-a7sxIwvS7dhnOPdFtvhdNcrnufEW
- Overview, Administration & Intro (16.4.)
- Model stealing and defenses (28.5., supervised learning (SL) and self-supervised learning (SSL))
- Midterm Exam on 4.6. (4th of June), 2PM-2:45PM. We will first write the exam and then cover questions about the robustness lecture, which will not be part of the midterm. Location: HS I in E 2.5
- Adversarial Machine Learning / Robustness (11.6.)
- Fairness and bias (25.6.)
- Explainability (2.7.)
- Security and Governance (9.7.)
- Summary & Open Questions (9.7.)
- Final Exam on 30.7. (30th of July) from 16:00 to 18:00. Location: HS I in E 2.5
Do I have the right qualifications for the lecture?
Students are required to have successfully completed a basic lecture on machine learning.
Additionally, you need hands-on coding experience with Python for machine learning, preferably in PyTorch (alternatively TensorFlow). All assignments will require this type of coding. If you have never coded in PyTorch, expect a steep learning curve.