News
Presentation Schedule
Written on 27.10.25 by Ziqing Yang

Dear all,

Starting from November 4th, every Tuesday from 2 pm to 3 pm, two presenters will introduce their preferred papers.

04.11.2025
11.11.2025

Best,
Paper List Available
Written on 23.10.25 by Ziqing Yang

Dear all,

The paper list is online. Please select three papers (ranked by preference) and send them to Ziqing Yang (ziqing.yang@cispa.de) by 24.10.2025. Note that the assignment will be based on the first-come, first-served principle. The assignment will be announced at 11 am on 27.10.2025.

Best,
Ziqing
Paper List
AI Safety
As AI systems become increasingly powerful and integrated into critical aspects of society, ensuring that they behave safely and reliably is more important than ever. AI Safety is the interdisciplinary field focused on minimizing risks associated with AI, from algorithmic bias and system failures to the long-term challenges posed by advanced autonomous agents.
In this seminar, we will explore the key technical, ethical, and societal issues in AI safety. Topics include value alignment, robustness, and the governance of powerful AI systems. By the end of the seminar, students will have a foundational understanding of how to assess and mitigate risks, design safer AI systems, and contribute to responsible AI development.
Logistics:
Time: Tuesday, 2 pm - 4 pm
Location: TBD
TAs:
- Ziqing Yang (ziqing.yang@cispa.de)
- Yihan Ma
- Bo Shao
