News

Currently, no news is available.

Generally speaking, ML Robustness concerns how machine learning models should behave when the training and testing distributions are not identical, which can arise in any of the following situations:

  • The underlying data collection procedure is corrupted by human labeling errors or measurement noise.
  • Test-time inputs are manipulated by malicious users, i.e., adversarial examples (see the sketch after this list).
  • Training data are manipulated by adversaries, i.e., poisoning and backdoor attacks.
  • Distribution shifts may exist whenever the model is deployed in a new environment.
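
To make the second scenario concrete, below is a minimal sketch of one standard way to craft an adversarial example, the fast gradient sign method (FGSM) of Goodfellow et al. It assumes a trained PyTorch classifier; the names (model, fgsm_example, epsilon) are illustrative and not part of the course material.

    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, y, epsilon=0.03):
        """Craft an adversarial example within an L-infinity ball of
        radius epsilon around input x, given its true label y."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)  # loss on the true label
        loss.backward()
        # Step in the direction that maximally increases the loss,
        # then clip back to the valid pixel range [0, 1].
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()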

In this advanced lecture, you will study topics in robust statistics, adversarial machine learning, and out-of-distribution generalization. The course assumes prior knowledge of machine learning and optimization.

Instructor: Xiao Zhang (xiao.zhang@cispa.de). My office is Room 3.12, CISPA Main Building, Stuhlsatzenhaus 5.

Meeting Time: 14:00 - 16:00 every Thursday

Meeting Room: Bernd Therre Lecture Hall (0.05), CISPA Main Building (C0), Stuhlsatzenhaus 5.

Registration: You need to register for the course on the CISPA CMS. Registration opens on October 01, 2023. To receive a grade, you must also register for the exam on LSF.

Stay tuned for more details about the course syllabus!
