News

Presentation Schedule

Written on 25.10.24 by Yixin Wu

Dear all,

Based on your responses, we have arranged the presentation schedule.

Starting on November 5th, we will meet every Tuesday from 4 pm to 5 pm, with two presenters introducing their preferred papers each week.

05.11.2024

Wasif Khizar, Privacy Backdoors: Stealing Data with Corrupted Pretrained Models

Hafsa Zubair, Understanding Data Importance in Machine Learning Attacks: Does Valuable Data Pose Greater Harm?

12.11.2024

Neshmia Hafeez, Quantifying Privacy Risks of Prompts in Visual Prompt Learning

Yashodhara Thoyakkat, PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps

19.11.2024

Keerat Singh Khurana, SeqMIA: Sequential-Metric Based Membership Inference Attack

Renard Hadikusumo, Students Parrot Their Teachers: Membership Inference on Model Distillation

26.11.2024

Niklas Lohmann, Detecting Pretraining Data from Large Language Models

Mohamed Salman, Scalable Extraction of Training Data from (Production) Language Models

03.12.2024

Shraman Jain, Membership Inference Attacks Against In-Context Learning

Nick Nagel, PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action

10.12.2024

Roshan Pavadarayan Ravichandran, ProPILE: Probing Privacy Leakage in Large Language Models

Anjali Sankar Eswaramangalath, Transferable Embedding Inversion Attack: Uncovering Privacy Risks in Text Embeddings without Model Queries

17.12.2024

Max Thomys, “I Don’t Know If We’re Doing Good. I Don’t Know If We’re Doing Bad”: Investigating How Practitioners Scope, Motivate, and Conduct Privacy Work When Developing AI Products

Thota Vijay Kumar, Exploring Privacy and Incentives Considerations in mHealth Technology Adoption: A Case Study of COVID-19 Contact Tracing Apps

07.01.2025

Yavor Ivanov, ERASER: Machine Unlearning in MLaaS via an Inference Serving-Aware Approach

Best,
Yixin

Paper Assignment

Written on 23.10.24 by Yixin Wu

Dear all,

The paper list is available on the main page of this seminar.

Please send your paper preferences (three papers, ranked from highest to lowest) to yixin.wu@cispa.de by Thursday.

The assignment will be ready by 11 am on Friday.

Best,

Yixin
