News
There are currently no news items.
Opportunities and Risks of Large Language Models and Foundation Models
The advent of Large Language Models (e.g. ChatGPT, GitHub Copilot) and other foundation models (e.g. Stable Diffusion, CLIP) has changed, and will continue to change, the way AI/ML applications are developed and deployed. For example, the behavior and functionality of Large Language Models can be changed entirely by prompting the model, which can be understood as re-programming in plain English.
On the one hand, these models show unprecedented performance and can often be adapted to new tasks with little effort. Large language models like ChatGPT in particular have the potential to change how we implement and deploy functionality.
On the other hand, these models raise several questions related to safety, security, and general aspects of trustworthiness that urgently need to be addressed if they are to meet our high expectations for future AI systems.
Therefore, this seminar will investigate aspects of trustworthiness, security, safety, privacy, robustness, and intellectual property.
This seminar is offered in the context of ELSA - the European Lighthouse on Secure and Safe AI: https://elsa-ai.eu
Requirements: Solid understanding of machine learning and deep learning.
Kickoff:
Material:
- F2.1, F2.2, F2.3, ... : chapters of Foundational Challenges in Assuring Alignment and Safety of Large Language Models: https://llm-safety-challenges.github.io/challenges_llms.pdf
- A1: AI Deception: https://www.cell.com/action/showPdf?pii=S2666-3899%2824%2900103-X
- Individual papers on security:
- S1: "Not What You've Signed Up For" (indirect prompt injection): https://arxiv.org/abs/2302.12173
- S2: "Design Patterns for Securing LLM Agents against Prompt Injections": https://arxiv.org/abs/2506.08837
- S3: CaMeL ("Defeating Prompt Injections by Design"): https://arxiv.org/abs/2503.18813
Timeline:
- April 27th: topics available
- April 30th: enter your preferences
- May 29th: start of presentations
- The deadline for registering officially with the examination office/LSF should also be May 29th - please check.
