News
Schedule of presentation is online
Written on 03.05.2024 00:37 by Xinyue Shen
Dear all,
Based on your responses, we have arranged the presentation schedule (see the end of this message).
The presentations start on 8th May. Every Wednesday from 4 pm to 5 pm, two presenters will introduce their preferred papers.
See you on the 8th of May :)
Best,
Xinyue
08.05.2024:
1. Chi Cui, “It’s Stressful Having All These Phones”: Investigating Sex Workers’ Safety Goals, Risks, and Practices Online
2. Moritz Leonhard Hübner, TUBERAIDER: Attributing Coordinated Hate Attacks on YouTube Videos to their Source Communities
15.05.2024:
3. Venkata Udhay Kiran Pabbathi Sathyanarayana, Aunties, Strangers, and the FBI: Online Privacy Concerns and Experiences of Muslim-American Women
4. Leonard Nicolas Tran, Why So Toxic? Measuring and Triggering Toxic Behavior in Open-Domain Chatbots
22.05.2024:
5. Lena Miriam Pelz, “Dummy Grandpa, Do You Know Anything?”: Identifying and Characterizing Ad Hominem Fallacy Usage in the Wild
6. Gleb Rostanin, You Only Prompt Once: On the Capabilities of Prompt Learning on Large Language Models to Tackle Toxic Content
29.05.2024:
7. Amirhossein Saemi, Zoom Out and Observe: News Environment Perception for Fake News Detection
8. Junaed Tariq, Understanding and Detecting Hateful Content Using Contrastive Learning
05.06.2024:
9. Abhishek Ganesh Shinde, RIATIG: Reliable and Imperceptible Adversarial Text-To-Image Generation With Natural Prompts
10. Pavan Raviteja Upadhyayula, Misinformation: Susceptibility, Spread, and Interventions to Immunize the Public
12.06.2024:
11. Carlos Zender Fernandez, Machine-Made Media: Monitoring the Mobilization of Machine-Generated Articles on Misinformation and Mainstream News Websites
12. Justin Steuer, Understanding the Use of Images to Spread COVID-19 Misinformation on Twitter
19.06.2024:
13. Raj Piyushbhai Sheth, DeepPhish: Understanding User Trust Towards Artificially Generated Profiles in Online Social Networks
14. Kamila Szewczyk, Can LLM-Generated Misinformation Be Detected?
26.06.2024:
15. Niklas Lohmann, On the Risk of Misinformation Pollution with Large Language Models
16. Rustam Nurullayev, DISARM: Detecting the Victims Targeted by Harmful Memes