The rapid progress in artificial intelligence and machine learning has led to the deployment of AI-based systems in many areas of modern life, such as manufacturing, transportation, and healthcare. However, serious concerns about the safety and trustworthiness of such systems remain, due to the lack of assurance regarding their behavior. To address this problem, significant effort in formal methods has in recent years been dedicated to developing rigorous techniques for the design of safe AI-based systems.
In this seminar, we will read and discuss research papers that present the latest results in this area. We will cover a range of topics, including the formal specification and verification of correctness properties of AI components of autonomous systems, and the design of reinforcement learning agents that respect safety constraints.
Each participant will give a presentation on an assigned paper, followed by a group discussion. All students are expected to read each paper carefully and to participate actively in the discussions. To facilitate the discussions, each participant is required to submit two questions about each presented paper to its presenter in advance of the presentation. In addition, each student will write a seminar paper on the topic assigned to them.
When: Thursday 16:15-17:45
Where: E9.1 (CISPA Building), meeting room 3.21
Please see the Timetable (i.e., the presentation schedule) for the exact dates of the meetings.
4 May, 16:15: Kick-off meeting; Location: E9.1 (CISPA Building), meeting room 0.01