News

Next Seminar on 14.08.2024

Written on 08.08.2024 10:41 by Xinyi Xu

Dear All,


The next seminars will take place on 2024-08-14 at 14:00 (Session A) and 14:00 (Session B).


Session A: (14:00 - 14:30, 14:30 - 15:00, 15:00 - 15:30)

Linda Müller, Subrat Dutta, Tobias Lorig

https://cispa-de.zoom.us/j/96786205841?pwd=M3FOQ3dSczRabDNLb3F1czVXVUpvdz09

Meeting-ID: 967 8620 5841

Password: BT!u5=

 

Session B: (14:00 - 14:30, 14:30 - 15:00, 15:00 - 15:30)

 

Devansh Srivastav, Girija Bangalore Mohan, Luca Nimsgern

https://cispa-de.zoom-x.de/j/66136901453?pwd=YVBSZU9peUpvUlk4bWp3MDR4cGlUUT09

Meeting-ID: 661 3690 1453

Password: sxHhzA004}

 

Session A

14:00 - 14:30

Speaker: Linda Müller

Type of Talk: Bachelor Intro

Advisor: Michael Schwarz, Ruiyi Zhang

Title: Implementing Page Coloring in the Linux Kernel for x86

Research Area: RA3: Threat Detection and Defenses

Abstract: Side channels leak information through unintended means; for example, the speed of a memory access reveals whether the accessed memory content was recently accessed. The Prime+Probe attack leverages such a cache-based side channel by continuously evicting a victim's memory from the cache and measuring the required time. To mitigate Prime+Probe attacks, each process's pages should map to different cache sets, so-called "colors". In this thesis, we will implement page coloring in the Linux kernel to defend against Prime+Probe attacks that originate from user space and target user space.
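As a rough illustration of the page-coloring idea (not part of the thesis), the following Python sketch computes which group of cache sets a 4 KiB page can map to. All cache parameters are hypothetical example values, and real last-level caches may additionally hash addresses across slices.

# Illustrative page-coloring arithmetic; all parameters are example values.
PAGE_SIZE = 4096                             # 4 KiB pages
LINE_SIZE = 64                               # cache line size in bytes
NUM_SETS = 2048                              # number of cache sets (hypothetical)
LINES_PER_PAGE = PAGE_SIZE // LINE_SIZE      # 64 cache lines per page
NUM_COLORS = NUM_SETS // LINES_PER_PAGE      # 2048 / 64 = 32 colors

def cache_set(phys_addr: int) -> int:
    """Set index of the cache line containing phys_addr."""
    return (phys_addr // LINE_SIZE) % NUM_SETS

def page_color(phys_addr: int) -> int:
    """Color of the page containing phys_addr: the set-index bits above the
    page offset, which the kernel controls by choosing the page frame."""
    return (phys_addr // PAGE_SIZE) % NUM_COLORS

# Pages of different colors occupy disjoint groups of cache sets, so an
# attacker's eviction set cannot reach a victim page of another color.
assert page_color(0x0000) != page_color(0x1000)
assert cache_set(0x0000) != cache_set(0x1000)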

 

14:30 - 15:00

 

Speaker: Subrat Dutta

Type of Talk: Master Intro

Advisor: Mario Fritz, Xiao Zhang

Title: Stealthy Targeted Adversarial Patch Attacks through Perceptibility-Aware Optimization

Research Area: RA3: Threat Detection and Defenses

Abstract: Adversarial patch attacks, where the adversary may modify only a small localised area of the input image, have recently attracted considerable attention due to their potential to be transformed into physical-world attacks. Existing methods either fail to produce visually imperceptible patches or cannot achieve satisfactory performance under targeted attack scenarios. To bridge this gap, we propose a novel adversarial patch attack based on perceptibility-aware optimization schemes, achieving strong targeted attack performance while maintaining the invisibility of the attached patch. In particular, we propose a two-step method: in the first step, we search for a proper location for patch placement by leveraging class localization and sensitivity maps, balancing the susceptibility of the patch location with respect to both the victim model's prediction and human perception. Second, we observe that the update rules currently employed for patch optimization do not consider perceptibility, which results in highly salient patches. We therefore believe that major improvements can be made to the optimization process so that it favours patch imperceptibility while achieving state-of-the-art attack efficacy. We believe that integrating imperceptibility into both the objective function and the update rule can improve the current state of imperceptibility by a significant margin.
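As a rough, self-contained sketch of what a perceptibility-aware patch update could look like (illustrative only, not the method of the thesis), one can add a visibility penalty to the targeted loss. The model, the patch location (x, y), and the weighting factor lam below are hypothetical assumptions.

# Minimal PyTorch sketch of one perceptibility-aware patch update step.
import torch
import torch.nn.functional as F

def patch_update_step(model, image, patch, x, y, target_class, lr=0.01, lam=0.1):
    patch = patch.clone().detach().requires_grad_(True)

    # Paste the patch onto the image at the chosen location.
    patched = image.clone()
    h, w = patch.shape[-2:]
    patched[..., y:y + h, x:x + w] = patch

    logits = model(patched.unsqueeze(0))

    # Targeted attack term: push the prediction towards target_class ...
    attack_loss = F.cross_entropy(logits, torch.tensor([target_class]))
    # ... plus a perceptibility term: keep the patch close to the covered region.
    visibility = F.mse_loss(patch, image[..., y:y + h, x:x + w])
    loss = attack_loss + lam * visibility

    loss.backward()
    with torch.no_grad():
        patch = (patch - lr * patch.grad).clamp(0, 1)  # stay a valid image
    return patch.detach()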

 

15:00 - 15:30

 

Speaker: Tobias Lorig

Advisor: Mario Fritz, Hossein Hajipour

Research Area: RA3: Threat Detection and Defenses

Abstract: In recent years, the art of software engineering has been transformed by the accelerating development of Large Language Models. The emergence of ChatGPT, GitHub's Copilot, and now Devin, the first autonomous LLM-driven software engineer, further increases the presence of AI-generated code in software. The convenience and perceived intelligence of such tools can be alluring for programming novices and software engineers alike, possibly leading to the neglect of best practices such as code reviews and the eventual introduction of insecure code. We aim to analyze the current state of publicly accessible automated code-generation frameworks coupled with popular Large Language Models. By employing static analysis, we will evaluate the security of programming projects created by these frameworks based on common weaknesses listed in the CWE (Common Weakness Enumeration). Finally, we will investigate the effectiveness of prompt engineering and other approaches in improving the security of generated code by directly comparing the rates of common CWEs.
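As an illustrative example of the kind of measurement involved (not the thesis's actual pipeline), the Python sketch below runs the Bandit static analyzer over a directory of generated code and tallies findings by CWE. The directory name is a hypothetical placeholder, and the issue_cwe field assumes a recent Bandit release.

# Hedged sketch: scan generated code with Bandit and count findings per CWE.
import json
import subprocess
from collections import Counter

def scan_generated_code(path="generated_projects"):
    # -r: recurse, -f json: machine-readable output; Bandit exits non-zero
    # when it finds issues, so the return code is not checked here.
    proc = subprocess.run(
        ["bandit", "-r", path, "-f", "json"],
        capture_output=True, text=True,
    )
    report = json.loads(proc.stdout)

    by_cwe = Counter()
    for issue in report.get("results", []):
        cwe = issue.get("issue_cwe", {}).get("id", "unknown")
        by_cwe[f"CWE-{cwe}"] += 1
    return by_cwe

if __name__ == "__main__":
    for cwe, count in scan_generated_code().most_common():
        print(cwe, count)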

 

Session B

 

14:00 - 14:30

Speaker: Devansh Srivastav

Type of Talk: Master Intro

Advisor: Xiao Zhang

Title: Jailbreak Strategies for Base and Defended Large Language Models from a Red Team Perspective

Research Area: RA1: Trustworthy Information Processing

Abstract: The adoption of Large Language Models (LLMs) has significantly enhanced natural language processing across various domains, yet their susceptibility to jailbreak attacks remains a critical concern. Jailbreak attacks exploit weaknesses to bypass safety mechanisms, posing risks such as misinformation and privacy breaches. While existing studies often target vulnerabilities in base models, the proposed research focuses on evaluating both base and defended LLMs against sophisticated jailbreak techniques. Using techniques such as Multilingual Prompting, Instruction Manipulation, Zero-shot Chain of Thought (CoT), Chaining and Agentic methods, and Retrieval-Augmented Generation (RAG), the study aims to comprehensively assess current defense mechanisms. By adopting a red team perspective, this research seeks to identify potential weaknesses in defended models and provide insights for developing more robust defense strategies, ensuring safe and secure LLM applications.
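To make one of the listed techniques concrete, the Python sketch below shows a minimal multilingual-prompting check. Both translate and query_model are hypothetical placeholders, and the thesis's actual harness and refusal detection will differ.

# Hedged sketch: measure how often a model answers a prompt instead of
# refusing when the prompt is translated into different languages.
REFUSAL_MARKERS = ["i can't", "i cannot", "i'm sorry", "cannot help"]

def is_refusal(response: str) -> bool:
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def multilingual_jailbreak_rate(prompt, languages, translate, query_model):
    """Fraction of languages in which the model answered instead of refusing."""
    bypassed = 0
    for lang in languages:
        response = query_model(translate(prompt, target_lang=lang))
        if not is_refusal(response):
            bypassed += 1
    return bypassed / len(languages)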

 

14:30 - 15:00

 

Speaker: Girija Bangalore Mohan

Type of Talk: Master Intro

Advisor: Mridula Singh

Title: Physical World Sensor Attack on LiDAR-camera-based Perception in Autonomous Driving

Research Area: RA4: Secure Mobile and Autonomous Systems

Abstract: Autonomous Vehicles (AVs) rely on sensors such as cameras and LiDAR to perceive their surroundings and make informed decisions regarding path planning and vehicle control. Understanding the vulnerabilities in these perception systems is crucial for ensuring road safety and building robust AV systems. While cameras have traditionally been used for perception, they are susceptible to spoofing attacks. Hence, AVs are increasingly adopting LiDARs, which have an advantage over other sensors due to their ability to create detailed 3D maps, providing precise distance and depth information for all surrounding objects and free space, and which have also become reasonably affordable. However, researchers continue to study the vulnerability of LiDARs and explore new ways to attack them. Due to the way LiDAR works, environments containing mirrors are challenging for it to handle, and existing research has not yet explored this as a potential attack vector. In this research, we will exploit the property of light reflection to design and model a physical-world attack on LiDAR and camera sensors. We will demonstrate the effectiveness of our attack against state-of-the-art AV obstacle detectors such as PointPillars. Additionally, we will evaluate the impact of these attacks on driving decisions using industry-grade autonomous driving simulators (LGSVL or CARLA) and propose defense strategies to mitigate such attacks. By shedding light on these vulnerabilities and proposing defense mechanisms, this research contributes to the development of more resilient AV perception systems, ultimately enhancing road safety in autonomous driving environments.

 

15:00 - 15:30

 

Speaker: Luca Nimsgern

Type of Talk: Bachelor Intro

Advisor: Lucjan Hanzlik

Title: Multi-party signatures on FIDO tokens

Research Area: RA1: Trustworthy Information Processing

Abstract: Consisting of the W3C Web Authentication (WebAuthn) standard and the FIDO Client to Authenticator Protocol (CTAP), FIDO2 introduces a standard for strong authentication on the web. In this thesis, we will implement multi-party signatures on FIDO keys. As the name suggests, in multi-party signatures the private key used to sign a message is distributed among multiple parties. The idea is that each FIDO key holds its own share of the private key, so that a certain number of FIDO keys (which we can specify beforehand) is needed to produce a valid signature. After the implementation phase, we will evaluate the performance and security of this approach and compare it with the common single-key approach.
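As a rough illustration of the underlying t-of-n idea (threshold signatures on FIDO tokens require considerably more machinery than this), the Python sketch below splits a toy signing key with Shamir secret sharing so that any t of n shares reconstruct it. The prime and the parameters are toy values chosen for illustration only.

# Hedged sketch: t-of-n Shamir secret sharing over a prime field.
import secrets

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a toy example

def split_secret(secret: int, n: int, t: int):
    """Split `secret` into n shares so that any t of them recover it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial at x
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def recover_secret(shares):
    """Lagrange interpolation at 0 to reconstruct the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

if __name__ == "__main__":
    key = secrets.randbelow(PRIME)        # stand-in for a signing key
    shares = split_secret(key, n=5, t=3)  # e.g. 5 FIDO tokens, any 3 suffice
    assert recover_secret(shares[:3]) == key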

 
