News

Grades are out

Written on 09.02.21 by Yang Zhang

Dear all,

The grades are out; you can check them on LSF.

Thanks again for participating throughout the semester, and good luck with your other exams!

Yang

Schedule change

Written on 10.12.20 by Yang Zhang

Dear all,

The seminar next week will have two presenters.

The last seminar will happen on January 6th.

Many thanks and stay safe!

Yang

Paper assignment and the order of presentations

Written on 05.11.20 by Yang Zou

Dear all,

We have finalized the paper assignment and the order of presentations. Please log in to the CMS to check the specific dates and paper assignments.

Thanks.

Papers and Slides are online

Written on 04.11.20 by Yang Zhang

Hi guys,

All the papers are online. Please make your three choices and send them to yang.zou@cispa.de ASAP.

If you would like to volunteer to be one of the first two speakers next week, please also mention that in the email.

In addition, my slides can be found under Information → Material in this CMS.

Best and stay healthy!

Yang

Data Privacy

The development of ICT has resulted in an unprecedented amount of available data. Big data, on the one hand, brings many benefits to society; on the other hand, it raises serious concerns about people's privacy. In this seminar, students will learn about, summarize, and present state-of-the-art scientific papers in data privacy. Topics include social network privacy, machine learning privacy, and biomedical data privacy. The seminar is organized as a reading group. Every week, one student will present her/his assigned papers on a certain topic, followed by a group discussion. All students are required to read the papers carefully and prepare a list of questions for discussion. Each student will also write a summary of her/his assigned papers, providing a general overview of the field.

Logistics

Time: Wednesdays, 1–3 pm

Venue: Zoom

Instructors

Yang Zhang (zhang@cispa.saarland)

Zhikun Zhang (zhikun.zhang@cispa.saarland)

TAs

Min Chen (min.chen@cispa.saarland)

Xinlei He (xinlei.he@cispa.saarland)

Yang Zou (yang.zou@cispa.saarland)

Papers

  1. Exploiting Unintended Feature Leakage in Collaborative Learning (FL) (https://ieeexplore.ieee.org/abstract/document/8835269)
  2. Humpty Dumpty- Controlling Word Meanings via Corpus Poisoning (NLP) (https://www.cs.cornell.edu/~shmat/shmat_oak20.pdf)
  3. Stealing Links from Graph Neural Networks (GNN) (https://arxiv.org/abs/2005.02131)
  4. Information Leakage in Embedding Models (Embedding) (https://arxiv.org/abs/2004.00053)
  5. Deep Leakage from Gradients (Gradient) (https://papers.nips.cc/paper/9617-deep-leakage-from-gradients)
  6. ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models (Membership inference) (https://arxiv.org/abs/1806.01246)
  7. Property Inference Attacks on Fully Connected Neural Networks using Permutation Invariant Representations (Property inference) (https://dl.acm.org/doi/10.1145/3243734.3243834)
  8. Knockoff Nets: Stealing Functionality of Black-Box Models (Model Stealing) (https://openaccess.thecvf.com/content_CVPR_2019/papers/Orekondy_Knockoff_Nets_Stealing_Functionality_of_Black-Box_Models_CVPR_2019_paper.pdf)
  9. Blind Backdoors in Deep Learning Models (Backdoor attack) (https://arxiv.org/abs/2005.03823)
  10. Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks (Backdoor defense) (https://people.cs.uchicago.edu/~ravenben/publications/pdf/backdoor-sp19.pdf)
  11. Overlearning Reveals Sensitive Attributes (Overlearning) (https://www.cs.cornell.edu/~shmat/shmat_iclr20.pdf)
  12. Adversarial Reprogramming of Neural Networks (Reprogramming) (https://arxiv.org/abs/1806.11146)
  13. walk2friends: Inferring Social Links from Mobility Profiles (for privacy) (https://dl.acm.org/doi/10.1145/3133956.3133972)
  14. Deep Learning with Differential Privacy (DP) (https://dl.acm.org/doi/10.1145/2976749.2978318)
  15. Evaluating Differentially Private Machine Learning in Practice (DP) (https://www.usenix.org/system/files/sec19-jayaraman.pdf)

Backup (if none of the 15 papers above appeals to you, you can also choose from these backup papers):

  1. Differential Privacy Has Disparate Impact on Model Accuracy (https://omidpoursaeed.github.io/pdf/Differential_privacy.pdf)
  2. Local Model Poisoning Attacks to Byzantine-Robust Federated Learning (https://www.usenix.org/system/files/sec20summer_fang_prepub.pdf)
  3. AttriInfer: Inferring User Attributes in Online Social Networks Using Markov Random Fields (http://papers.www2017.com.au.s3-website-ap-southeast-2.amazonaws.com/proceedings/p1561.pdf)
  4. Towards Deep Learning Models Resistant to Adversarial Attacks (https://arxiv.org/abs/1706.06083)
  5. Machine learning models that remember too much (https://www.cs.cornell.edu/~shmat/shmat_ccs17.pdf)
  6. Adversarial Neural Network Inversion via Auxiliary Knowledge Alignment (https://dl.acm.org/doi/10.1145/3319535.3354261)
  7. Privacy Risks of Securing Machine Learning Models against Adversarial Examples (https://www.comp.nus.edu.sg/~reza/files/Shokri-CCS2019.pdf)
  8. Latent Backdoor Attacks on Deep Neural Networks (http://people.cs.uchicago.edu/~ravenben/publications/pdf/pbackdoor-ccs19.pdf)
  9. Model-Reuse Attacks on Deep Learning Systems (https://dl.acm.org/doi/pdf/10.1145/3243734.3243757)
  10. Differentially-private Federated Neural Architecture Search (https://ishikasingh.github.io/files/fl_icml2020workshop_FNAS.p

Paper Assignment

The current paper assignment and order:

  • 11.11.2020
    • Philipp Zimmermann: Information Leakage in Embedding Models
    • Dhruv Sunil Sharma: Latent Backdoor Attacks on Deep Neural Networks
  • 18.11.2020
    • Mirco Ferdinand Klein: Deep Leakage from Gradients
    • Luca Pasqual Bläsius: Blind Backdoors in Deep Learning Models
  • 25.11.2020
    • Leonard Niemann: Knockoff Nets: Stealing Functionality of Black-Box Models
  • 02.12.2020
    • Franziska Drießler: Evaluating Differentially Private Machine Learning in Practice
    • Mahmoud Fawzi: AttriInfer: Inferring User Attributes in Online Social Networks Using Markov Random Fields
  • 09.12.2020
    • Julian Augustin: Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
    • Lisa Hoffmann: Adversarial Reprogramming of Neural Networks
  • 16.12.2020
    • Lukas Strobel: walk2friends: Inferring Social Links from Mobility Profiles
    • Benjamin Blank: ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
  • 06.01.2021
    • Koushik Chowdhury: Machine learning models that remember too much
