News

Paper list is online

Written on 30.04.2024 16:15 by Xinyue Shen

Dear all,

The paper list is online. Please select three papers (ranked by preference) and send your choices to Xinyue Shen (xinyue.shen@cispa.de) by 10 am on 02.05.2024.

Note that papers will be assigned on a first-come, first-served basis.

The assignments will be announced at 2 pm on 02.05.2024.

Best,

Xinyue


Paper list

  1. Moderating New Waves of Online Hate with Chain-of-Thought Reasoning in Large Language Models

  2. Aunties, Strangers, and the FBI: Online Privacy Concerns and Experiences of Muslim-American Women

  3. “It’s Stressful Having All These Phones”: Investigating Sex Workers’ Safety Goals, Risks, and Practices Online

  4. TUBERAIDER: Attributing Coordinated Hate Attacks on YouTube Videos to their Source Communities

  5. On Xing Tian and the Perseverance of Anti-China Sentiment Online

  6. “Dummy Grandpa, Do You Know Anything?”: Identifying and Characterizing Ad Hominem Fallacy Usage in the Wild

  7. On the Evolution of (Hateful) Memes by Means of Multimodal Contrastive Learning

  8. Understanding and Detecting Hateful Content Using Contrastive Learning

  9. Moderating Illicit Online Image Promotion for Unsafe User-Generated Content Games Using Large Vision-Language Models

  10. DISARM: Detecting the Victims Targeted by Harmful Memes

  11. Machine-Made Media: Monitoring the Mobilization of Machine-Generated Articles on Misinformation and Mainstream News Websites

  12. DeepPhish: Understanding User Trust Towards Artificially Generated Profiles in Online Social Networks

  13. Open-Domain, Content-Based, Multi-Modal Fact-Checking of Out-Of-Context Images via Online Resources

  14. Understanding the Use of Images to Spread COVID-19 Misinformation on Twitter

  15. Misinformation: Susceptibility, Spread, and Interventions to Immunize the Public

  16. Why So Toxic? Measuring and Triggering Toxic Behavior in Open-Domain Chatbots

  17. Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned

  18. Toxicity in ChatGPT: Analyzing Persona-assigned Language Models

  19. A Holistic Approach to Undesired Content Detection in the Real World

  20. You Only Prompt Once: On the Capabilities of Prompt Learning on Large Language Models to Tackle Toxic Content

  21. Unsafe Diffusion: On the Generation of Unsafe Images and Hateful Memes From Text-To-Image Models

  22. Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models

  23. SneakyPrompt: Jailbreaking Text-to-image Generative Models

  24. On the Proactive Generation of Unsafe Images From Text-to-Image Models Using Benign Prompts

  25. RIATIG: Reliable and Imperceptible Adversarial Text-To-Image Generation With Natural Prompts

  26. Synthetic Lies: Understanding AI-Generated Misinformation and Evaluating Algorithmic and Human Solutions

  27. Bad Actor, Good Advisor: Exploring the Role of Large Language Models in Fake News Detection

  28. Can LLM-Generated Misinformation Be Detected?

  29. On the Risk of Misinformation Pollution With Large Language Models

  30. Zoom Out and Observe: News Environment Perception for Fake News Detection
