Paper list is online
Written on 30.04.2024 16:15 by Xinyue Shen
Dear all,
The paper list is online. Please select three papers (ranked by preference) and send your choices to Xinyue Shen (xinyue.shen@cispa.de) by 10 am on 02.05.2024.
Note that papers will be assigned on a first-come, first-served basis.
The assignments will be announced at 2 pm on 02.05.2024.
Best,
Xinyue
Paper list
- Moderating New Waves of Online Hate with Chain-of-Thought Reasoning in Large Language Models
- Aunties, Strangers, and the FBI: Online Privacy Concerns and Experiences of Muslim-American Women
- “It’s Stressful Having All These Phones”: Investigating Sex Workers’ Safety Goals, Risks, and Practices Online
- TUBERAIDER: Attributing Coordinated Hate Attacks on YouTube Videos to their Source Communities
- On Xing Tian and the Perseverance of Anti-China Sentiment Online
- “Dummy Grandpa, Do You Know Anything?”: Identifying and Characterizing Ad Hominem Fallacy Usage in the Wild
- On the Evolution of (Hateful) Memes by Means of Multimodal Contrastive Learning
- Understanding and Detecting Hateful Content Using Contrastive Learning
- Moderating Illicit Online Image Promotion for Unsafe User-Generated Content Games Using Large Vision-Language Models
- DISARM: Detecting the Victims Targeted by Harmful Memes
- Machine-Made Media: Monitoring the Mobilization of Machine-Generated Articles on Misinformation and Mainstream News Websites
- DeepPhish: Understanding User Trust Towards Artificially Generated Profiles in Online Social Networks
- Open-Domain, Content-Based, Multi-Modal Fact-Checking of Out-Of-Context Images via Online Resources
- Understanding the Use of Images to Spread COVID-19 Misinformation on Twitter
- Misinformation: Susceptibility, Spread, and Interventions to Immunize the Public
- Why So Toxic? Measuring and Triggering Toxic Behavior in Open-Domain Chatbots
- Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned
- Toxicity in ChatGPT: Analyzing Persona-assigned Language Models
- A Holistic Approach to Undesired Content Detection in the Real World
- You Only Prompt Once: On the Capabilities of Prompt Learning on Large Language Models to Tackle Toxic Content
- Unsafe Diffusion: On the Generation of Unsafe Images and Hateful Memes From Text-To-Image Models
- Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models
- SneakyPrompt: Jailbreaking Text-to-image Generative Models
- On the Proactive Generation of Unsafe Images From Text-to-Image Models Using Benign Prompts
- RIATIG: Reliable and Imperceptible Adversarial Text-To-Image Generation With Natural Prompts
- Synthetic Lies: Understanding AI-Generated Misinformation and Evaluating Algorithmic and Human Solutions
- Bad Actor, Good Advisor: Exploring the Role of Large Language Models in Fake News Detection
- Can LLM-Generated Misinformation Be Detected?
- On the Risk of Misinformation Pollution With Large Language Models
- Zoom Out and Observe: News Environment Perception for Fake News Detection