News

Currently, no news is available.

Trustworthy Machine Learning

Time: Wednesdays from 4:00 pm to 5:30 pm.

Location: CISPA Building C0 (Stuhlsatzenhaus 5, 66123 Saarbrücken), Lecture Hall, Ground Floor (Room 0.05)

Starting Date: April 24, 2024

Description: The deployment of machine learning applications in real-world systems necessitates methods to ensure their trustworthiness. This course explores the different aspects of trustworthy machine learning, including Privacy, Collaborative Learning, Model Confidentiality, Robustness, Fairness and Bias, Explainability, Security, and Governance.

Privacy: We will analyze the landscape of privacy attacks against machine learning models and study the means to prevent these attacks.
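
To make one common defense concrete, here is a small sketch of the core step of DP-SGD (differentially private SGD): per-example gradients are clipped and Gaussian noise is added before the model update. The tensor layout, helper name, and hyperparameter values are illustrative assumptions, not a reference implementation from the course.

```python
# Hedged sketch of a DP-SGD-style update step (assumed shapes and names).
import torch

def privatize_gradients(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    """per_example_grads: assumed tensor of shape [batch, num_params]."""
    # Clip each example's gradient norm to bound any single point's influence.
    norms = per_example_grads.norm(dim=1, keepdim=True)
    clipped = per_example_grads * (clip_norm / norms).clamp(max=1.0)
    # Sum, add Gaussian noise calibrated to the clipping bound, then average.
    summed = clipped.sum(dim=0)
    noise = torch.normal(0.0, noise_multiplier * clip_norm, summed.shape)
    return (summed + noise) / per_example_grads.shape[0]
```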

Collaborative Learning: We will analyze the risks to trustworthy machine learning that arise in collaborative machine learning setups and look into their mitigations.
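
For context, here is a tiny sketch of federated averaging, the canonical collaborative-learning scheme in which a server averages the model parameters it receives from clients. Equal client weighting and the helper name are assumptions for illustration.

```python
# Minimal federated-averaging sketch: average client models parameter-wise.
import torch

def fedavg(client_state_dicts):
    """client_state_dicts: assumed list of state_dicts with identical keys."""
    averaged = {}
    for key in client_state_dicts[0]:
        # Stack the clients' tensors for this parameter and take the mean.
        stacked = torch.stack([sd[key].float() for sd in client_state_dicts])
        averaged[key] = stacked.mean(dim=0)
    return averaged
```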

Model Confidentiality: We will see that machine learning models can be easily stolen through different methods, such as direct copying of the model or its private training data, or extraction of a model exposed via a public or private API. We will analyze different attack strategies to steal models and the state-of-the-art defense methods.
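
To illustrate the extraction-via-API setting, the following hedged sketch distills a victim model's predictions into a local surrogate. The stand-in victim, both architectures, and the random query distribution are assumptions; a real attack would query a remote API instead.

```python
# Sketch of a model-extraction attack: query the victim, distill into a surrogate.
import torch
import torch.nn as nn
import torch.nn.functional as F

victim = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))     # stand-in victim
surrogate = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # attacker's copy
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for _ in range(100):                     # assumed query budget: 100 batches
    queries = torch.rand(64, 1, 28, 28)  # attacker-chosen query inputs
    with torch.no_grad():
        soft_labels = F.softmax(victim(queries), dim=1)  # the "API" response
    # Distill the victim's soft labels into the surrogate
    # (PyTorch >= 1.10 accepts probability targets in cross_entropy).
    loss = F.cross_entropy(surrogate(queries), soft_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```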

Robustness: We will learn about different facets of robustness, such as robustness to out-of-distribution samples, to natural noise present in the input data, and to adversarial examples, where attackers introduce imperceptible changes to the inputs of ML models to fool their predictions.
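
To make the adversarial-example threat concrete, here is a minimal sketch of the fast gradient sign method (FGSM): one signed gradient step that increases the model's loss while keeping the perturbation inside a small epsilon-ball. The model, inputs, labels, and epsilon value are assumptions.

```python
# Minimal FGSM sketch: perturb the input along the sign of the loss gradient.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed gradient step, then clamp back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```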

Fairness and Bias: We will scrutinize the behavior of ML models on different subgroups of the training data. We will assess the models’ responses to inputs with different attributes and will uncover the potential causes of unfair or biased model responses and current mitigations.
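
As one concrete metric, the small sketch below measures the demographic parity gap: the absolute difference in positive-prediction rates between two subgroups. The tensor names and the binary-attribute setting are assumptions.

```python
# Demographic parity gap for binary predictions and a binary sensitive attribute.
import torch

def demographic_parity_gap(preds: torch.Tensor, group: torch.Tensor) -> float:
    rate_0 = preds[group == 0].float().mean()  # positive-prediction rate, group 0
    rate_1 = preds[group == 1].float().mean()  # positive-prediction rate, group 1
    return (rate_0 - rate_1).abs().item()      # 0.0 means perfect parity
```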

Explainability: We will address the challenges that arise from machine learning models’ black-box behavior and look into techniques to explain predictive behavior.
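
As a simple example of such a technique, the following sketch computes an input-gradient saliency map, attributing a prediction to the input features with the largest gradient magnitude. The model and input tensor are assumptions.

```python
# Input-gradient saliency: how sensitive is the top-class score to each input?
import torch

def saliency(model, x):
    x = x.clone().detach().requires_grad_(True)
    scores = model(x)                          # assumed shape [batch, classes]
    scores.max(dim=1).values.sum().backward()  # top-class score per sample
    return x.grad.abs()                        # per-feature attribution map
```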

Security and Governance: Machine learning applications are usually not isolated but integrated into larger systems within a society, with its respective norms and values. We will study the security risks that can arise from deploying machine learning applications and how governance approaches can mitigate these risks.

Throughout the course, we will discuss outstanding challenges and future research directions to make machine learning more robust, private, and trustworthy.

Organization: This course can accommodate up to 40 students. Students will be admitted based on their prior courses and experience (to ensure a sufficient machine learning background). Given the combination of theoretical lectures and practical assignments that involve training machine learning models and implementing attacks and defenses, the workload is above that of an average course.

Structure: The course will be given in an inverted-classroom style. The lecturers, Adam Dziedzic and Franziska Boenisch, will prepare videos with the material for the next class in advance. The in-person tutorial sessions on Wednesdays, 4 pm to 6 pm, serve to discuss the most difficult concepts of each lecture. Every student is required to post two questions online at least two days before each session.

Assignments: The course includes four graded practical assignments based on implementing the concepts studied in the lectures:

  1. Privacy: Implement a membership inference attack and achieve the highest attack success. Given a model and a list of data points, your task is to determine, for each point, whether it is a member of the training set or not (a minimal baseline sketch follows this list).

  2. Model extraction: We will offer a machine learning model over an API, and you will extract its behavior. You upload a PyTorch model, and we check how close its predictions are to those of the victim model.

  3. Robustness: Train a model that is as robust as possible against the adversarial examples we will generate. You submit a PyTorch model; we load it and run the attack. The goal is a model with the highest adversarial and clean accuracy.

  4. Fairness: Train a classifier that achieves the highest demographic parity. You submit a PyTorch model, and we assess how fair it is.
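
As referenced in Assignment 1, below is a minimal loss-threshold membership-inference baseline: points on which the model's loss is low are guessed to be training members. The model, data tensors, and threshold value are assumptions, and a competitive submission would need a stronger attack (e.g., shadow models).

```python
# Loss-threshold membership inference: low loss suggests a training member.
import torch
import torch.nn.functional as F

@torch.no_grad()
def infer_membership(model, points, labels, threshold=0.5):
    losses = F.cross_entropy(model(points), labels, reduction="none")
    return losses < threshold  # True -> guess "member", False -> "non-member"
```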

Exam format: written.

Requirements: The course presumes a basic understanding of machine learning. Students are required to have successfully completed a basic lecture on machine learning.

Grading: Grading will be based on the assignments (40% total: 10% per assignment), the questions posted before each flipped-classroom meeting (10%), the written midterm exam (10%, in class), and the final exam (40%). Through outstanding results in all the assignments (top 3 submissions), students can earn bonus points in the form of a half-grade improvement on the final exam.
 
Objectives: The objective of this course is to provide attendees with a comprehensive understanding of trustworthy machine learning, covering key aspects such as privacy, robustness, fairness, explainability, security, and governance. Participants will gain practical skills in identifying and mitigating risks associated with machine learning models, including privacy attacks, model theft, bias, and adversarial threats. By the end of the course, attendees are expected to have enhanced their knowledge of cutting-edge defense strategies, developed practical skills in securing machine learning systems, and deepened their understanding of the ethical and societal implications of deploying these models in real-world scenarios.
 
What lectures will we offer? (tentative dates)
  1. Overview, Administration & Intro (17.4.)

  2. Privacy (24.4. and 8.5.; the 8.5. session is remote: https://cispa-de.zoom-x.de/j/68320215592?pwd=MGwxc1hlT01QZ1ZwaXU5eGFtYVg3QT09)

  3. Model stealing and defenses (15.5.: supervised learning (SL); 22.5.: self-supervised learning (SSL))

  4. Adversarial Machine Learning / Robustness (29.5.) 

  5. Midterm Exam (5.6., in class)

  6. Collaborative learning (5.6., 12.6.)

  7. Fairness and bias (19.6.)

  8. Explainability (10.7.) 

  9. Security and Governance (17.7.)

  10. Summary & Open Questions (24.7., remote: https://cispa-de.zoom-x.de/j/66878276009?pwd=dWFYV0xyMTVKYlVhcEpyUHZaSm1UZz09)

  11. Final Exam on 31.7. from 4:00 pm to 6:00 pm. Location: Lecture Hall 002 in building E1 3 on the UdS campus.

 
Assignments (tentative dates):
  • Privacy: Implement a membership inference attack and achieve the highest attack success. --> Due 29.5.

  • Model stealing: We will offer a machine learning model over an API, and the students extract the model behavior. --> Due 12.6.

  • Robustness: Train a model that is as robust as possible against the adversarial examples we will generate. --> Due 3.7.

  • Explainability: Provide explanations for the predictions of neural networks. --> Due 31.7.

 

Submission of Assignments:

1. Create a GitHub repository.

2. If it is private, add Adam and Franziska (GitHub usernames: adam-dziedzic and fraboeni) as collaborators.

3. For your final submission, create a tag (click on "tags", then "create new release", and create the release).

4. Submit the link to the tagged version as the online test in CMS under Assignment X, where X is the number of the current assignment.
