News
There is currently no news.
Trustworthy Machine Learning
Machine learning has made great advances over the past years, and many techniques have found their way into applications. This leads to an increasing demand for techniques that not only perform well but are also "trustworthy".
Trustworthiness includes:
- Interpretability of the prediction
- Robustness against changes to the input, whether they occur naturally or with malicious intent
- Privacy-preserving machine learning (e.g., when dealing with sensitive data such as in health applications)
- Fairness
- ...
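To make the robustness bullet concrete, below is a minimal, hypothetical sketch of an adversarial perturbation (in the spirit of the fast gradient sign method) against a toy linear classifier. All weights and numbers are made up for illustration and are not taken from any course material; for a linear model, the gradient of the score with respect to the input is simply the weight vector.

```python
# Toy adversarial-example sketch: a tiny, sign-of-gradient perturbation
# flips the prediction of a linear classifier. All values are illustrative.

def predict(w, b, x):
    """Linear score; positive => class 1, negative => class 0."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm_perturb(w, x, y, eps):
    """Step the input against the score gradient for the true label y.
    For a linear model, the input gradient of the score is just w."""
    sign = 1 if y == 0 else -1  # push the score toward the wrong class
    return [xi + sign * eps * (1 if wi > 0 else -1 if wi < 0 else 0)
            for wi, xi in zip(w, x)]

w, b = [0.5, -0.3, 0.8], 0.0
x = [0.2, 0.1, 0.1]                      # scored positive: class 1
x_adv = fgsm_perturb(w, x, y=1, eps=0.2)

print(predict(w, b, x) > 0)    # original input is classified as class 1
print(predict(w, b, x_adv) > 0)  # a small perturbation flips the prediction
```

The same idea scales to deep networks, where the input gradient is obtained by backpropagation instead of being the weight vector itself; the first-round Adversarial Examples paper below studies how robust such models really are.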
Description
As a proseminar's primary purpose is to learn presentation skills, the seminar will feature two presentations by each student. Since presentation and writing skills are highly interlinked, a very short report (at most 2 pages) also has to be handed in for each presentation.
In the first half of the semester, we will have presentations of two topics each week. After each presentation, fellow students and lecturers will provide feedback on how to improve the presentation. This general feedback must then be taken into account for the second half of the semester, where again each student will present.
Grading
The *first presentation and report* will count towards 30% of the overall grade; the *second presentation and report* will count towards 70%. Attendance at the proseminar meetings is mandatory. At most one session may be missed; beyond that, you need to bring a doctor's note to excuse your absence.
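The weighting above amounts to a simple weighted average. As a made-up illustration (assuming the German grade scale, where 1.0 is best):

```python
# Illustrative grading computation: the final grade is a 30/70 weighted
# average of the two rounds. The grades below are invented example values.

def final_grade(first, second):
    return 0.3 * first + 0.7 * second

print(final_grade(2.0, 1.3))  # weighted final grade for the example values
```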
Logistics
The meetings take place on Thursdays, 14:00 to 16:00. All meetings will be held virtually via Zoom. Here are the details for joining the virtual meetings.
https://zoom.us/j/94291345987?pwd=SHVOb2dPVEdWYmh4OTkzWG1rQ2c3Zz09
Meeting ID: 942 9134 5987
Password: 7K?7MS
Schedule
May 7th | Kick off Meeting and topic overview (slides, video) |
May 14th | How to present and write (slides, video) |
May 21st | Holiday |
May 28th | Analysing and dissecting writing and presentations (paper1, video1, paper2, video2, seminar video) |
June 4th | no seminar |
June 11th | Holiday |
June 18th | (first round) Interpretability, Adversarial Examples, DeepFakes, Model Stealing |
June 25th | (first round) Uncertainty, Privacy, Fairness, Causality |
July 2nd | no seminar |
July 9th | (second round) Interpretability, Adversarial Examples, DeepFakes, Model Stealing |
July 16th | (second round) Uncertainty, Privacy, Fairness, Causality |
First Round Papers (in random order)
- Uncertainty:
  Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
  Yarin Gal, Zoubin Ghahramani, ICML 2016
- Model Stealing:
  Stealing Machine Learning Models via Prediction APIs
  Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, Thomas Ristenpart, USENIX Security 2016
- Interpretability:
  Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
  Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, Fernanda B. Viégas, Rory Sayres, ICML 2018
- Adversarial Examples:
  Towards Evaluating the Robustness of Neural Networks
  Nicholas Carlini, David Wagner, IEEE S&P 2017
- DeepFakes:
  Attributing Fake Images to GANs: Learning and Analyzing GAN Fingerprints
  Ning Yu, Larry Davis, Mario Fritz, ICCV 2019
- Privacy:
  Deep Learning with Differential Privacy
  Martín Abadi, Andy Chu, Ian J. Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, Li Zhang, CCS 2016
- Fairness:
  Fairness Constraints: A Flexible Approach for Fair Classification
  Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez-Rodriguez, Krishna P. Gummadi, JMLR 2019 (previously AISTATS 2017)
- Causality:
  Discovering Causal Signals in Images
  David Lopez-Paz, Robert Nishihara, Soumith Chintala, Bernhard Schölkopf, Léon Bottou, CVPR 2017