News

Currently, no news is available.

Opportunities and Risks of Large Language Models and Foundation Models

The advent of large language models (e.g., ChatGPT) and other foundation models (e.g., Stable Diffusion) has changed, and will continue to change, the way AI/ML applications are developed and deployed.

On the one hand, these models show unprecedented performance and can often be adapted to new tasks with little effort. In particular, large language models such as ChatGPT have the potential to change the way we implement and deploy functionality.

On the other hand, these models raise several questions related to safety, security, and trustworthiness in general that urgently need to be addressed if future AI systems are to meet our high expectations.

Therefore, this seminar will investigate aspects of trustworthiness, security, safety, privacy, robustness, and intellectual property.

This seminar is offered in the context of ELSA - the European Lighthouse on Secure and Safe AI: https://elsa-ai.eu


Date and time are yet to be confirmed. Tentatively, the seminar is scheduled for Tuesdays, 8:30 am to 10:00 am.


Literature and Resources:

Foundational Challenges in Assuring Alignment and Safety of Large Language Models

OWASP Top 10 for Large Language Model Applications

MITRE ATLAS Matrix

NIST AI 100-2 E2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

Not What You've Signed Up For: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection
