News

Final results are out

Written on 10.02.23 by Yang Zhang

Dear all,

The final results of the seminar are available on LSF.

Best,

Yang

Seminar evaluation

Written on 31.01.23 by Yang Zhang

Dear all,

The evaluation for the seminar has started.

Please use the link below.

https://qualis.uni-saarland.de/eva/?l=1854&p=61k08y

Please do so before February 10th; after that, we can enter your scores in LSF.

I'm sorry for the delay, but I only received the evaluation link from the Qualis team today.

Again, I wish everyone good luck with the exams.

Best,

Yang

Schedule of presentations

Written on 09.11.22 by Xinyue Shen

Dear all,

Based on your responses, we have arranged the presentation schedule (see the end of this message).
Every Monday from 2 pm to 4 pm, two presenters will each introduce their preferred paper.

See you next week. :)

Best,
Xinyue

-----------------------------------------------------------------
14.11:
1. Sai Pravallika Tummala, Stealing Machine Learning Models via Prediction APIs
2. Yukun Jiang, Overlearning Reveals Sensitive Attributes

21.11:
3. Shantanu Kumar Rahut, You are who you know and how you behave: Attribute inference attacks via users' social friends and behaviors
4. Janifa Jahan Hossain, Deep Learning with Differential Privacy

28.11:
5. Angelin Mary Jose, Auditing Data Provenance in Text-Generation Models
6. Sarah Breckner, Extracting Training Data from Large Language Models

05.12:
7. Lotfy Abdel Khaliq, Membership Leakage in Label-Only Exposures
8. Tulika Nayak, ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models

12.12:
9. Aatir Imran, Stealing Hyperparameters in Machine Learning
10. Pranav Subhash Shetty, Stealing Links from Graph Neural Networks

Paper assignment

Written on 07.11.22 by Yang Zhang

Dear all,

Please send your paper preferences (3 papers, ranked from high to low) to xinyue.shen@cispa.de by the end of tomorrow. If you plan to present next Monday, please also indicate this in your email.

The assignment will be ready by 1 pm on Wednesday!

Best,

Yang

Privacy of Machine Learning

Machine learning has witnessed tremendous progress during the past decade, and data is key to this success. However, in many cases machine learning models are trained on sensitive data, e.g., biomedical records, and such data can be leaked from the trained models. In this seminar, we will cover the latest research papers in this direction.
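To make this kind of leakage concrete, below is a minimal, self-contained sketch of a confidence-thresholding membership inference attack, the theme of several papers in the list, assuming Python with numpy and scikit-learn. The synthetic data, the random-forest target model, and the threshold tau are illustrative assumptions, not taken from any listed paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for sensitive training data (members) and unseen
# data drawn from the same distribution (non-members).
members = rng.normal(size=(200, 10))
member_labels = (members[:, 0] > 0).astype(int)
non_members = rng.normal(size=(200, 10))
non_member_labels = (non_members[:, 0] > 0).astype(int)

# Target model trained only on the members; unconstrained random forests
# tend to assign near-perfect confidence to their own training points.
target = RandomForestClassifier(n_estimators=50, random_state=0)
target.fit(members, member_labels)

def true_label_confidence(model, x, y):
    # Probability the model assigns to the true label of each point.
    probs = model.predict_proba(x)
    return probs[np.arange(len(y)), y]

conf_in = true_label_confidence(target, members, member_labels)
conf_out = true_label_confidence(target, non_members, non_member_labels)
print("mean confidence on members:    ", conf_in.mean())
print("mean confidence on non-members:", conf_out.mean())

# Threshold attack: guess "member" whenever confidence exceeds tau.
# tau = 0.9 is an arbitrary illustrative choice, not a tuned value.
tau = 0.9
attack_acc = ((conf_in > tau).mean() + (conf_out <= tau).mean()) / 2
print("membership inference accuracy:", attack_acc)

The gap between the model's confidence on its own training points and on unseen points is the leakage signal such attacks exploit; the seminar papers study far more refined versions of this idea.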


Logistics:

Time: Monday, 2 pm to 4 pm

Location: Zoom

TAs:

- Xinyue Shen (xinyue.shen@cispa.de)

- Boyang Zhang (boyang.zhang@cispa.de)

- Zeyang Sha (zeyang.sha@cispa.de)

List of Papers:

  1. Membership Leakage in Label-Only Exposures
  2. GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models
  3. Membership Inference Attacks by Exploiting Loss Trajectory
  4. Dataset Inference: Ownership Resolution in Machine Learning
  5. Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning
  6. Extracting Training Data from Large Language Models
  7. Reconstructing Training Data with Informed Adversaries
  8. You are who you know and how you behave: Attribute inference attacks via users' social friends and behaviors
  9. Overlearning Reveals Sensitive Attributes
  10. ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models
  11. Quantifying and Mitigating Privacy Risks of Contrastive Learning
  12. Model Stealing Attacks Against Inductive Graph Neural Networks
  13. Stealing Links from Graph Neural Networks
  14. Stealing Hyperparameters in Machine Learning
  15. Stealing Machine Learning Models via Prediction APIs
  16. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
  17. The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks
  18. Exploiting Explanations for Model Inversion Attacks
  19. Deep Learning with Differential Privacy
  20. Auditing Data Provenance in Text-Generation Models
  21. UnGANable: Defending Against GAN-based Face Manipulation
  22. SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders