Adversarial Machine Learning

Kathrin Grosse

News

Currently, no news is available.

Security Aspects of Machine Learning and Data Mining

Lecture Type: Seminar (weekly meetings)

Timeslot: Friday, 16:00, room 3.21

Instructor: Michael Backes

Advisors: Kathrin Grosse, Yang Zhang

Contact: kathrin.grosse@cispa.saarland

Number of students: 10-13

Registration

Please use the Meta-Seminar page to register.

Description

Machine Learning (ML) has become almost omnipresent as our society collects more and more data. At the same time, ML can be targeted by someone who tries to influence a decision in their favor. In this seminar we will answer the following questions: Can an ML-based decision be influenced? To what extent? Is adversarial learning always malicious? Why are ML algorithms vulnerable? Which algorithms are less vulnerable than others? How can we defend an algorithm? How can we formalize security in the context of data analysis? Is security orthogonal to accuracy and overfitting, or related to them? The first question is made concrete by the small sketch after the next paragraph.

The goal of this seminar is to relate concepts of Adversarial Machine Learning to the basic concepts of ML. The first two meetings are lectures reviewing important background material from ML. Still, we will not be able to introduce every algorithm touched on in the seminar in detail, and you will be required to read up on topics you do not know. Having attended either the Machine Learning or the Data Mining core lecture should be sufficient to cover what is needed.
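As a taste of the seminar's subject, here is a minimal sketch (not part of the seminar material) of how a classifier's decision can be flipped by a small input change. It assumes a toy linear classifier with illustrative, randomly chosen weights and applies a fast-gradient-style perturbation; all names and values in it are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" linear classifier; the weights here are random placeholders.
w = rng.normal(size=20)
b = 0.1

def predict(x):
    return int(w @ x + b > 0)  # class 0 or 1

x = rng.normal(size=20)        # a benign input
score = w @ x + b

# Fast-gradient-style step: for a linear model the gradient of the score
# with respect to x is simply w, so moving each feature along sign(w)
# changes the score as fast as possible per unit of l-infinity perturbation.
eps = abs(score) / np.sum(np.abs(w)) + 1e-3  # just enough to cross the boundary
x_adv = x - np.sign(score) * eps * np.sign(w)

print("original prediction:   ", predict(x))
print("adversarial prediction:", predict(x_adv))  # flipped
print("change per feature:    ", round(eps, 4))   # small relative to feature scale

Nothing in this snippet is specific to linear models in spirit: the seminar covers analogous attacks on neural networks, support vector machines, and probabilistic models, as well as mitigations.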

Tentative Schedule

Comment: Several slots are still free. We might use them either to investigate some topics in more detail or to provide more background knowledge. We will decide this together in the first meeting.

Date   Topic                             #Presentations   Papers
13.4   Introduction I                    0                slides
20.4   Introduction II                   0                slides
4.5    Data Generation and GANs          1                paper 1, paper 2
18.5   Attacks on Neural Networks        1                paper 1, paper 2
-      Mitigations for Neural Networks   1                paper 1, (paper 3), paper 2
8.6    Support Vector Machines           1                paper 1, paper 2
15.6   Probabilistic Models              1                paper 1, paper 2
22.6   Transferability of Attacks        1                paper 1, paper 2
29.6   Poisoning                         1                paper 1, paper 2
-      Theory                            1                paper 1
-      Theory                            1                paper 1
6.7    Other Attacks on ML               1                paper 1, paper 2
-      tbd                               1                tbd
-      tbd                               1                tbd


Assignment and Grading

Students are required to give one 60-minute presentation plus 10 minutes of discussion (each presentation covers two papers) and to write a summary of one of the meetings (8-10 pages; not the meeting of their own presentation). In addition, participation in class and interaction during feedback will be graded.
