News

Midterm Exam

Written on 15.05.26 by Adam Dziedzic

Dear Students,

This is to confirm that the midterm exam will take place in E1 3 - HS 001 on 3.6. (3rd of June), 4:00-5:00 PM.

With kind regards,

The TML Team

Video from today's tutorial

Written on 13.05.26 by Adam Dziedzic

Dear Students,

We uploaded the video from today's tutorial: https://youtu.be/8QuR8bHfM1E

With kind regards,

The TML Team

Next week: Robustness + Presentations by Best 3 Teams from Assignment 1

Written on 13.05.26 by Adam Dziedzic

Dear Students,

Please watch the lecture on Robustness before next week's session (May 20th) and post your questions on the forum beforehand.

Next week, we would also like the 3 best teams from the Assignment 1 scoreboard to give short talks (up to 5 minutes) on their solutions. You can gain additional bonus points (up to 2 points per assignment) for this presentation.

With kind regards,

The TML Team

Assignment 2 : Stolen Model Detection

Written on 11.05.26 (last change on 11.05.26) by Nima Dindarsafa

Dear students,

Assignment 2 is released. You can find the task description in the Materials section on CMS. The deadline for this assignment is 26.05.2026, 23:59. Please read the task description and the following comments carefully:

  • The assignment is due on 26.05.2026 at 23:59. Both the CMS submissions (code and report) and the API submission to the leaderboard will close at that time. Please keep the deadline in mind and do not postpone your submission to the last moment.
  • No submissions via email will be accepted under any circumstances. To avoid issues, make an early submission to CMS and update it later.
  • Please note that for Assignment 2 and all remaining assignments, you must submit your report as well as your code on CMS. Refer to the task description for more information.
  • API keys and cluster credentials have changed for some students due to team changes. Please refer to the Personal Status page on CMS to find your new credentials, and use those keys for the remaining assignments.

Best of luck with your assignment.

TML team

Room change

Written on 06.05.26 by Adam Dziedzic

Dear Students,

Please note the following room changes:

  • On 01.07.2026 and 29.07.2026, the class will take place in HS002, Building E1.3 instead of room 0.05, Building C0.
  • We will also need to use a different room on 03.06.2026. We will inform you of the new room as soon as it has been booked.

With kind regards,

TML team

Room for today: May 6th

Written on 06.05.26 by Adam Dziedzic

Dear Students,

Today's lecture is in building E1.3 in room HS 002 (due to another event in building C0).

With kind regards,

TML team

Questions on model stealing and defenses, both SL and SSL

Written on 06.05.26 by Adam Dziedzic

Dear Students,

Please post your questions on model stealing and defenses (for both SL and SSL) on our forum. We will hold the lecture today (May 6th) as planned, and if any questions remain afterwards, we will also answer them next week on May 13th.

With kind regards,

TML team

 

API keys and cluster credentials

Written on 21.04.26 by Nima Dindarsafa

Dear students,

For teams with two members, we have placed the API keys (for submissions to the evaluation system) and cluster credentials (for computation jobs) on their Personal Status page.

If you do not already have a teammate, we suggest forming a team before the deadline on 29.04. After forming a team, you will receive the aforementioned credentials so you can start working on Assignment 1 (due: 06.05). To have enough time to work on your solutions and enter your results on the leaderboard, we recommend forming teams as soon as possible. If you form a team after this announcement, please let the tutors know via email so that you can get access as soon as possible.

We will have our first tutorial on Wednesday, 22.04, where we will discuss how to set up the cluster and submit your solutions to the leaderboard.

Best of luck,

TML team

Assignment 1: Membership Inference

Written on 20.04.26 by Adam Dziedzic

Dear Students,

The first assignment was released here: https://cms.cispa.saarland/tml2026/dl/9/Assignment1-MembershipInference.pdf 

Note: the assignments must be completed in pairs. API keys will only be generated for teams of two (if you do not have a partner, you will not receive an API key and will not be able to submit solutions for the assignments).

Have a good week!

 

 

Team Signup has Opened

Written on 10.04.26 by Franziska Boenisch

Dear Everyone,

The signup for team pairing has opened. You have until the submission of the first assignment to sign up with a partner. As noted during the intro lecture, single-participant submissions cannot be considered for grading. I made a thread in the "Off-Topic" section of the forum where you can look for an assignment partner if you have not found one yet.

See you next week,

Franziska and Adam


Trustworthy Machine Learning

 

Organization

Lecturers: Adam Dziedzic and Franziska Boenisch

Tutors: Nima DindarSafa and Maitri Vignesh Shah

Time: Wednesdays from 16:00 to 18:00.

Location: CISPA Building C0 (Stuhlsatzenhaus 5, 66123 Saarbrücken), Lecture Hall, Ground Floor (Room 0.05)

Credits: This course is worth 9 ECTS.

Starting Date: April 7th

Description: The deployment of machine learning applications in real-world systems necessitates methods to ensure their trustworthiness. This course explores the different aspects of trustworthy machine learning, including Privacy, Collaborative Learning, Model Confidentiality, Robustness, Fairness and Bias, Explainability, Security, and Governance.

Learning Objectives: The objective of this course is to provide attendees with a comprehensive understanding of trustworthy machine learning, covering key aspects such as privacy, robustness, fairness, explainability, security, and governance. Participants will gain practical skills in identifying and mitigating risks associated with machine learning models, including privacy attacks, model theft, bias, and adversarial threats. By the end of the course, attendees are expected to have enhanced their knowledge of cutting-edge defense strategies, developed practical skills in securing machine learning systems, and deepened their understanding of the ethical and societal implications of deploying these models in real-world scenarios.

 

What do you have to do weekly?

This is a flipped classroom lecture. What does this mean?

You have to watch the lecture videos a week before the lecture. You can then ask your questions in the respective thread in the Forum on CMS. Questions regarding the course content will not be answered individually by the instructors in the forum; they will be answered during the in-person lecture hours. The in-person lecture hours are not recorded. Hence, if you have questions, you need to post them on time and attend the Q&A session.

Asking questions is not mandatory, but you can gain up to 1 bonus point per session for every *good* question that you post in the forum. To claim your bonus point, you then need to attend the class in person. In total, you can earn up to 10 bonus points over the semester, which count as a 10% bonus added to your final points for the course.

Additionally, we will post ungraded theoretical questions on the individual lectures before the tutorial sessions. They give you an impression of what the midterm and final exam will look like. We highly suggest solving them; solutions can also be discussed in the in-person Q&A session.

 

What are our topics?

Privacy: We will analyze the landscape of privacy attacks against machine learning models and study the means to prevent these attacks.

Model Confidentiality: We will see that machine learning models can be easily stolen through different methods, such as simply copying the model or its private training data, or extracting a model exposed via a public or private API. We will analyze different attack strategies for stealing models and the state-of-the-art defense methods.

Robustness: We will learn about different facets of robustness, such as robustness to out-of-distribution samples, natural noise present in the input data, or adversarial examples, where attackers introduce imperceptible changes to the inputs of ML models to fool their predictions.
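To make "imperceptible changes" concrete, here is a toy sketch of the classic fast-gradient-sign (FGSM) perturbation, x' = x - eps * sign(grad), applied to a two-dimensional linear model. The weights and epsilon below are invented for illustration and are unrelated to the course's actual models or assignments.

```python
import math

# Toy linear "model": score(x) = w . x, predicted positive if score > 0.
# The weights are illustrative only.
w = [2.0, -1.0]

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x))

# For a linear score, the gradient with respect to the input is just w,
# so an FGSM-style step that pushes the score down is x' = x - eps * sign(w).
def fgsm_step(x, eps):
    return [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

x = [1.0, 1.0]                 # score(x) = 1.0 -> classified positive
x_adv = fgsm_step(x, eps=0.6)  # each coordinate moves by at most 0.6
print(score(x), score(x_adv))  # the small perturbation flips the score's sign
```

Even though each input coordinate changes by at most 0.6, the prediction flips, which is the core phenomenon adversarial robustness tries to prevent.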

Data Provenance: In the era of generative ML where data is generated and ingested by models, it becomes increasingly important to understand where data comes from, and how it shapes the models. We will address topics of training data identification and watermarking.

Collaborative Learning: We will analyze the risks to trustworthy machine learning that arise in collaborative machine learning setups and look into their mitigations.

Fairness and Bias: We will scrutinize the behavior of ML models on different subgroups of the training data. We will assess the models’ responses to inputs with different attributes and will uncover the potential causes of unfair or biased model responses and current mitigations.

Explainability: We will address the challenges that arise from machine learning models’ black-box behavior and look into techniques to explain predictive behavior.

 

Throughout the course, we will discuss outstanding challenges and future research directions to make machine learning more robust, private, and trustworthy.

 

Assignments (with tentative dates):

The course entails 4 practical graded assignments based on implementing the concepts studied during the lecture.

Assignments need to be handed in groups of two. If you do not have a partner, yet, you can use the forum to find one.

  1. Privacy: Implement a membership inference attack and achieve the highest attack success. We give you a model and a list of data points; your task is to determine, for each point, yes/no, meaning member/non-member. --> Due 6.5.

  2. Model extraction: We will offer a machine learning model over an API, and you will extract its behavior. You upload a PyTorch model, and we check how close the uploaded model's predictions are to the victim model's. --> Due 27.5.

  3. Robustness: Train a model that is as robust as possible against the adversarial examples we will generate. You submit a PyTorch model; we load it and run the attack. The goal is to create a model with the highest adversarial and clean accuracy. --> Due 17.6.

  4. Fairness: Train a classifier with the highest demographic parity. You submit a PyTorch model, and we assess how fair it is. --> Due 8.7.
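To give a flavor of the first assignment's setting, a classic baseline is the loss-threshold membership inference attack: points the model fits well (low loss) are guessed to be training members. Everything below, including the function names, probabilities, and threshold, is illustrative and is not the assignment's actual API or solution.

```python
import math

def cross_entropy(probs, label):
    """Per-sample cross-entropy loss from predicted class probabilities."""
    return -math.log(max(probs[label], 1e-12))

def membership_guess(losses, threshold):
    """Guess 1 (member) if the sample's loss is below the threshold, else 0."""
    return [1 if loss < threshold else 0 for loss in losses]

# Toy data: two confidently predicted points and one uncertain one.
samples = [([0.95, 0.03, 0.02], 0),   # low loss -> likely member
           ([0.90, 0.05, 0.05], 0),   # low loss -> likely member
           ([0.40, 0.35, 0.25], 1)]   # high loss -> likely non-member
losses = [cross_entropy(probs, label) for probs, label in samples]
print(membership_guess(losses, threshold=0.5))  # [1, 1, 0]
```

Real attacks from the lecture are more refined (e.g., per-sample calibration or shadow models), but this thresholding idea is the usual starting point.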

 

Submission of Assignments:

Please submit the assignments via CMS as a ZIP file. We only need the code and the README + report; models and other artifacts should not be uploaded. Submissions that are not uploaded on CMS by the submission time will not be considered. If you upload larger folders, start a few minutes before the deadline, as the upload may take some time. You can also upload an initial submission and then overwrite it. We will not accept "the CMS upload took too long/failed" as an excuse for assignments that are not received; they will be graded with 0 points.
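One way to keep model checkpoints out of the archive is to build the ZIP programmatically. The sketch below uses Python's standard zipfile module; all file and folder names are made up for illustration and are not prescribed by the course.

```python
# Sketch: build a submission ZIP containing only code and the README/report,
# deliberately skipping model checkpoints and other large artifacts.
import zipfile
from pathlib import Path

EXCLUDED_SUFFIXES = {".pt", ".pth", ".ckpt"}  # model/artifact files to skip

def build_submission(folder, archive="submission.zip"):
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in Path(folder).rglob("*"):
            if path.is_file() and path.suffix not in EXCLUDED_SUFFIXES:
                zf.write(path, path.relative_to(folder))
    return archive

# Toy usage: create a dummy project and archive it.
Path("submission/src").mkdir(parents=True, exist_ok=True)
Path("submission/src/solution.py").write_text('print("hello")\n')
Path("submission/README.md").write_text("# Report\n")
Path("submission/model.pt").write_bytes(b"weights")  # will be excluded

archive = build_submission("submission")
print(sorted(zipfile.ZipFile(archive).namelist()))  # model.pt is absent
```

Listing the archive contents before uploading is a cheap way to catch an accidentally bundled checkpoint.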

 

Exam and Grading

Exam format: written.

Grading: Grading is based on the assignments (40% total: 10% per assignment), the written midterm exam (20%, in class), and the final exam (40%). The final grade is computed over all components, which means you might still pass even if you fail, for example, the midterm. Additionally, you can earn up to a 10% bonus by asking good questions and being present in the lectures.
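As a worked example of this weighting (with made-up scores, purely for illustration):

```python
# Hypothetical worked example of the grading weights above.
# Each component is a fraction in [0, 1]; bonus is in percentage points.
def final_score(assignments, midterm, final_exam, bonus=0.0):
    total = 0.40 * assignments + 0.20 * midterm + 0.40 * final_exam
    return round(100.0 * total + bonus, 2)

# A student who fails the midterm (40%) can still reach a passing total:
print(final_score(assignments=0.70, midterm=0.40, final_exam=0.85, bonus=6.0))  # 76.0
```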
 
Important: The course does not offer a re-exam!
Note: The exams (midterm and final) are both closed-book. No tools or other help are allowed. If there are calculations to be done, the numbers will be easy enough. There will be roughly 1 point to achieve per minute: the midterm takes 45 minutes, the final takes 90 minutes.
 
 

Tentative Lecture Dates

The videos for each lecture are linked at the respective lecture date. To access the full playlist, visit: https://www.youtube.com/playlist?list=PLNfU-a7sxIwvS7dhnOPdFtvhdNcrnufEW 

  1. Overview on the Course, Administration, Intro, and Questions about Privacy I (8.4., Franziska and Adam)

  2. Questions on Privacy II (15.4., Franziska and Adam)

  3. Tutorial on Assignment 1, Coding in Python, Submitting your Solutions to our API (22.4., Nima and Maitri)

  4. Tutorial on Theoretical Exercises for the Topic Block Privacy (29.4., Nima and Maitri)

  5. Questions on Model stealing and defenses, both SL and SSL  (6.05., Franziska and Adam)

  6. Questions on Model Stealing and Defenses (Continue) and Tutorial on Assignment 2 (13.5., Nima and Maitri)

  7. Questions on Robustness and Tutorial on Theoretical Exercises for the Topic Block Model Stealing (20.5., Nima and Maitri)

  8. Midterm Exam in E1 3 - HS 001 on 3.6. (3rd of June), 4:00-5:00 PM. We will first write the exam and then have a second part.

  9. Adversarial Machine Learning / Robustness (10.6.)

  10. Collaborative learning (17.6.a, 17.6.b)

  11. Fairness and bias (24.6.)

  12. Explainability (1.7., HS I in E 2.5. ATTENTION: different location than usual!)

  13. Security and Governance (8.7.)

  14. Summary & Open Questions (8.7.)

  15. Final Exam on 29.7. (29th of July) from 16.00 - 18.00. Location: HS I in E 2.5

 

Do I have the right qualifications for the lecture?

Students are required to have successfully completed a basic lecture on machine learning.

Additionally, you need hands-on coding experience with Python for machine learning, preferably in PyTorch (alternatively TensorFlow). All assignments will require this type of coding. If you have never coded in PyTorch, expect a steep learning curve.

 

 
