News

Results are out

Written on 07.08.24 by Xinyue Shen

Dear all,

The final results are now posted on LSF.

Best,

Xinyue

Presentation schedule is online

Written on 03.05.24 (last change on 24.05.24) by Xinyue Shen

Dear all,


Based on your responses, we have arranged a schedule for the presentations (see the end of this message).

Presentations start on 8 May. Every Wednesday from 4 pm to 5 pm, two presenters will introduce their preferred papers.

See you on the 8th of May :)


Best,
Xinyue


08.05.2024:
1. Chi Cui, “It’s Stressful Having All These Phones”: Investigating Sex Workers’ Safety Goals, Risks, and Practices Online
2. Moritz Leonhard Hübner, TUBERAIDER: Attributing Coordinated Hate Attacks on YouTube Videos to their Source Communities
 

15.05.2024:
3. Venkata Udhay Kiran Pabbathi Sathyanarayana, Aunties, Strangers, and the FBI: Online Privacy Concerns and Experiences of Muslim-American Women
4. Leonard Nicolas Tran, Why So Toxic? Measuring and Triggering Toxic Behavior in Open-Domain Chatbots


22.05.2024: 
5. Lena Miriam Pelz, “Dummy Grandpa, Do You Know Anything?”: Identifying and Characterizing Ad Hominem Fallacy Usage in the Wild
6. Gleb Rostanin, You Only Prompt Once: On the Capabilities of Prompt Learning on Large Language Models to Tackle Toxic Content
 

29.05.2024:
7. Amirhossein Saemi, Zoom Out and Observe: News Environment Perception for Fake News Detection
8. Junaed Tariq, Understanding and Detecting Hateful Content Using Contrastive Learning

 

05.06.2024: 
9. Abhishek Ganesh Shinde, RIATIG: Reliable and Imperceptible Adversarial Text-To-Image Generation With Natural Prompts
10. Pavan Raviteja Upadhyayula, Misinformation: Susceptibility, Spread, and Interventions to Immunize the Public


12.06.2024: 
11. Carlos Zender Fernandez, Machine-Made Media: Monitoring the Mobilization of Machine-Generated Articles on Misinformation and Mainstream News Websites
12. Justin Steuer, Understanding the Use of Images to Spread COVID-19 Misinformation on Twitter


19.06.2024:
13. Raj Piyushbhai Sheth, DeepPhish: Understanding User Trust Towards Artificially Generated Profiles in Online Social Networks
14. Kamila Szewczyk, Can LLM-Generated Misinformation Be Detected?


26.06.2024:
15. Niklas Lohmann, On the Risk of Misinformation Pollution with Large Language Models
16. Rustam Nurullayev, DISARM: Detecting the Victims Targeted by Harmful Memes

Paper list is online

Written on 30.04.24 by Xinyue Shen

Dear all,

The paper list is online. Please select three papers (ranked by preference) and send your choices to Xinyue Shen (xinyue.shen@cispa.de) by 10 am on 02.05.2024.

Note that papers will be assigned on a first-come, first-served basis.

The assignments will be announced at 2 pm on 02.05.2024.

Best,

Xinyue


Paper list

  1. Moderating New Waves of Online Hate with Chain-of-Thought Reasoning in Large Language Models

  2. Aunties, Strangers, and the FBI: Online Privacy Concerns and Experiences of Muslim-American Women

  3. “It’s Stressful Having All These Phones”: Investigating Sex Workers’ Safety Goals, Risks, and Practices Online

  4. TUBERAIDER: Attributing Coordinated Hate Attacks on YouTube Videos to their Source Communities

  5. On Xing Tian and the Perseverance of Anti-China Sentiment Online

  6. “Dummy Grandpa, Do You Know Anything?”: Identifying and Characterizing Ad Hominem Fallacy Usage in the Wild

  7. On the Evolution of (Hateful) Memes by Means of Multimodal Contrastive Learning

  8. Understanding and Detecting Hateful Content Using Contrastive Learning

  9. Moderating Illicit Online Image Promotion for Unsafe User-Generated Content Games Using Large Vision-Language Models

  10. DISARM: Detecting the Victims Targeted by Harmful Memes

  11. Machine-Made Media: Monitoring the Mobilization of Machine-Generated Articles on Misinformation and Mainstream News Websites

  12. DeepPhish: Understanding User Trust Towards Artificially Generated Profiles in Online Social Networks

  13. Open-Domain, Content-Based, Multi-Modal Fact-Checking of Out-Of-Context Images via Online Resources

  14. Understanding the Use of Images to Spread COVID-19 Misinformation on Twitter

  15. Misinformation: Susceptibility, Spread, and Interventions to Immunize the Public

  16. Why So Toxic? Measuring and Triggering Toxic Behavior in Open-Domain Chatbots

  17. Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned

  18. Toxicity in ChatGPT: Analyzing Persona-assigned Language Models

  19. A Holistic Approach to Undesired Content Detection in the Real World

  20. You Only Prompt Once: On the Capabilities of Prompt Learning on Large Language Models to Tackle Toxic Content

  21. Unsafe Diffusion: On the Generation of Unsafe Images and Hateful Memes From Text-To-Image Models

  22. Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models

  23. SneakyPrompt: Jailbreaking Text-to-image Generative Models

  24. On the Proactive Generation of Unsafe Images From Text-to-Image Models Using Benign Prompts

  25. RIATIG: Reliable and Imperceptible Adversarial Text-To-Image Generation With Natural Prompts

  26. Synthetic Lies: Understanding AI-Generated Misinformation and Evaluating Algorithmic and Human Solutions

  27. Bad Actor, Good Advisor: Exploring the Role of Large Language Models in Fake News Detection

  28. Can LLM-Generated Misinformation Be Detected?

  29. On the Risk of Misinformation Pollution With Large Language Models

  30. Zoom Out and Observe: News Environment Perception for Fake News Detection
