News

Submission of final report and presentation

Written on 11.03.23 by Krikamol Muandet

The deadlines are 14.03.2023 and 15.03.2023.

The presentation template is now available

Written on 19.12.22 by Krikamol Muandet

You can download the presentation template here.

The birth of OOD seminar

Written on 01.12.22 by Krikamol Muandet

The CMS page of the OOD seminar has been set up.

Topics in Out-of-Distribution (OOD) Generalization

The ability to acquire knowledge through learning and adapt it quickly to new environments is the hallmark of intelligence. Despite the recent successes of machine learning (ML) models such as deep neural networks (DNNs), Transformers, GPT-3, and DALL-E, these models still lack the ability to generalize beyond their training distributions. Traditionally, most ML algorithms were developed under the independently and identically distributed (i.i.d.) assumption, i.e., that test data come from the same distribution as the training data. As these models are increasingly trained and deployed across heterogeneous and potentially massive environments, the i.i.d. assumption is almost always violated in practice. This problem significantly limits the scope of real-world applications of machine learning.
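The failure mode described above can be made concrete with a toy sketch (my own illustration, not taken from the seminar materials): a linear model trained under the i.i.d. assumption latches onto a spuriously correlated feature, so its accuracy collapses toward chance once that correlation breaks at test time.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Training environment: labels in {-1, +1}.
y_tr = rng.integers(0, 2, n) * 2 - 1
# Core feature: weakly predictive (high noise), but stable across environments.
x_core_tr = y_tr + rng.normal(0, 2.0, n)
# Spurious feature: almost perfectly aligned with the label during training...
x_sp_tr = y_tr + rng.normal(0, 0.1, n)
X_tr = np.stack([x_core_tr, x_sp_tr], axis=1)

# Test environment: the spurious correlation breaks (covariate shift).
y_te = rng.integers(0, 2, n) * 2 - 1
x_core_te = y_te + rng.normal(0, 2.0, n)
x_sp_te = rng.normal(0, 1.0, n)  # now independent of the label
X_te = np.stack([x_core_te, x_sp_te], axis=1)

# Fit a linear model by least squares; it puts almost all of its weight
# on the low-noise spurious feature.
w, *_ = np.linalg.lstsq(X_tr, y_tr.astype(float), rcond=None)

acc_tr = (np.sign(X_tr @ w) == y_tr).mean()
acc_te = (np.sign(X_te @ w) == y_te).mean()
print(f"in-distribution accuracy:     {acc_tr:.2f}")
print(f"out-of-distribution accuracy: {acc_te:.2f}")
```

In-distribution accuracy is near perfect, while out-of-distribution accuracy falls to roughly chance level; learning the stable (invariant) feature instead is exactly the kind of problem studied under invariance, causality, and domain generalization below.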

In this seminar, we will explore the research frontier that aims to push machine learning beyond the i.i.d. setting. We will study state-of-the-art theories and algorithms that enable machine learning models to generalize out of distribution (OOD). Topics of interest include deep learning, causality, meta-learning, reinforcement learning, domain adaptation, domain generalization, and robustness.
 

Format

As the field of OOD generalization is still young, the goal of this seminar is to investigate state-of-the-art theories and algorithms by exploring several areas related to OOD generalization, namely,

  1. Area 1: Deep Architectures, Data Augmentation, and Implicit Biases 
  2. Area 2: Invariant Representations and Causality
  3. Area 3: Meta-Learning
  4. Area 4: Domain Generalization
  5. Area 5: OOD Detection and Test-Time Adaptation
  6. Area 6: Adversarial Training
  7. Area 7: Benchmark Datasets

Each student picks one topic from the areas above and then selects 1-2 papers to study in detail, including the related literature. After studying the paper(s), the student submits an initial report and presentation for feedback. Based on the feedback, the student revises the presentation and delivers it to the rest of the class. Finally, the student submits the final report and presentation.

As part of the seminar, there will also be special invited talks by external experts on topics related to OOD generalization.
 

Tentative Schedule

  • Topic Assignment (18 November 2022)
  • Submit initial report and presentation (Deadline: January 15th, 2023)
  • Receive feedback on the initial report and presentation (January 2023)
  • Student presentation (February 2023)
  • Submit the final report and presentation (March 2023)
     

Deliverables

Students who participate in this seminar are expected to deliver

  1. Report
    • A summary of the topic of your choice (1-2 papers)
    • A template is available here.
  2. Presentation
  • 30-minute talk + 15 minutes for Q&A
    • Your classmates are your target audience
    • A template is available here.
       

Special Invited Talks

  • Cian Eastwood (Edinburgh) on 23/11/2022


    Title: Shift happens: how can we best prepare?

    Abstract: In the real world, the conditions under which a model is developed usually differ from those in which it is deployed. Thus, to be useful in practice, machine learning systems must be developed with such condition/distribution shifts in mind. In this talk, I will discuss several ways in which we have sought to prepare models for distribution shift under the umbrella terms of domain adaptation, domain generalization, and causal representation learning.

  • TBA

Contact

Dr. Krikamol Muandet (muandet@cispa.de)
Room 2.01, CISPA-0 building, Stuhlsatzenhaus 5

 
