
Next Seminar on 04.12.2024

Written on 27.11.2024 21:39 by Xinyi Xu

Dear All,


The next seminars will take place in parallel on 2024-12-04 at 14:00 (Session A and Session B).


Session A: (14:00 - 14:30, 14:30 - 15:00, 15:00 - 15:30)

Lea Jeanette Vorndran, Julian Augustin, Daniel Erceg

https://cispa-de.zoom.us/j/96786205841?pwd=M3FOQ3dSczRabDNLb3F1czVXVUpvdz09

Meeting-ID: 967 8620 5841

Password: BT!u5=

 

Session B: (14:00 - 14:30, 14:30 - 15:00, 15:00 - 15:30)

 

Linda Müller, Simon Pietsch, Sree Harsha Nelaturu

https://cispa-de.zoom-x.de/j/66136901453?pwd=YVBSZU9peUpvUlk4bWp3MDR4cGlUUT09

Meeting-ID: 661 3690 1453

Password: sxHhzA004}

 

Session A

14:00 - 14:30

Speaker: Lea Jeanette Vorndran

Type of Talk: Bachelor Intro

Advisor: Ben Stock

Title: Measuring Rewritable Third-Party Code

Research Area: RA5: Empirical and Behavioural Security

Abstract: Cross-Site Scripting (XSS) attacks happen when an attacker can inject and run their code in an otherwise benign or trusted website, which can allow them to steal sensitive user data. Even though this is a long-known issue, it still occurs frequently on today's web. To mitigate it, the Content Security Policy (CSP) was introduced. However, configuring a secure CSP without breaking functionality can be very challenging, especially if a website uses third-party code that hinders a secure CSP. Nowadays, many websites rely on third-party code to add functionality or ads to their own site. If the third-party code is incompatible with a secure CSP due to the use of sinks such as eval, innerHTML, or document.write, the developer has to choose between security and functionality. In this work, we explore how much third-party code actually needs these sinks and how many scripts could be rewritten so that they no longer hinder the use of a secure CSP.
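As a rough illustration of the kind of measurement the abstract describes, a script can be flagged when its source text uses one of the CSP-incompatible sinks. This is a hypothetical sketch with an illustrative sink list and regex patterns, not the methodology of the thesis:

```python
import re

# Assumed sink list and patterns; real analyses would parse the JavaScript
# rather than match text, and cover more sinks.
SINK_PATTERNS = {
    "eval": re.compile(r"\beval\s*\("),
    "innerHTML": re.compile(r"\.innerHTML\s*="),
    "document.write": re.compile(r"\bdocument\.write\s*\("),
}

def find_sinks(js_source: str) -> list[str]:
    """Return the names of CSP-incompatible sinks that appear in the script text."""
    return [name for name, pat in SINK_PATTERNS.items() if pat.search(js_source)]
```

A script for which `find_sinks` returns an empty list is a candidate that would not, by this crude criterion, hinder a secure CSP.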

 

14:30 - 15:00

 

Speaker: Julian Augustin

Type of Talk: Bachelor Final

Advisor: Andreas Zeller

Title: Hierarchical Delta Debugging and DDSet on Context-Sensitive Inputs

Research Area: RA4: Secure Mobile and Autonomous Systems

Abstract: Fuzzing is a widely adopted technique that is used to identify inputs that trigger bugs in software systems. However, analyzing and fixing these bugs often requires isolating the specific part of the failure-inducing input that causes the malfunction. Due to the complexity and unreadability of many such inputs, it is crucial to minimize their size while retaining the bug-triggering characteristics. Delta Debugging (DD) is an established algorithm designed to reduce the input size without losing the error-triggering properties. However, traditional delta debugging struggles with context-sensitive data, where issues such as incorrect length fields or checksum mismatches can cause the debugging process to fail before the actual bug is encountered. To address these challenges, we leverage FormatFuzzer, a framework capable of fuzzing and handling context-sensitive inputs, to implement a refined variant of delta debugging known as Hierarchical Delta Debugging (HDD). By integrating FormatFuzzer’s mutation functions, HDD achieves better precision and resilience when minimizing structured data inputs, preserving the semantics of context-sensitive fields. Recent advancements in the field have led to Delta Debugging for Input Sets (DDSet), which extends the concept beyond individual inputs. Instead of merely reducing a single error-inducing input, DDSet can generate a grammar that captures the structure of multiple inputs responsible for the same error. This grammar helps to systematically identify the subset of inputs affected by the bug, providing a comprehensive understanding of the fault domain. This capability is particularly useful when a bug fix only addresses a specific hardcoded input and fails to generalize to the broader set of faulty inputs. In this thesis, we implement the key functionalities of DDSet for context-sensitive data using FormatFuzzer. 
The generated grammars can guide developers in creating additional test inputs, verifying the robustness of bug fixes, and ensuring that program patches are effective across all relevant inputs, thus improving overall software reliability.
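For readers unfamiliar with the baseline algorithm the abstract builds on, here is a minimal complement-only sketch of classic Delta Debugging (ddmin). It is a textbook illustration, not the HDD or DDSet implementation of the thesis, and it treats the input as a flat sequence, which is exactly what breaks down for context-sensitive data:

```python
def ddmin(data, fails):
    """Shrink `data` to a 1-minimal list while `fails(data)` stays True."""
    n = 2  # number of chunks to split into
    while len(data) >= 2:
        chunk = len(data) // n
        reduced = False
        for i in range(n):
            # Try the complement: remove the i-th chunk.
            trial = data[:i * chunk] + data[(i + 1) * chunk:]
            if trial and fails(trial):
                data, n, reduced = trial, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(data):
                break  # already at single-element granularity
            n = min(n * 2, len(data))
    return data
```

For example, if a failure needs both `"c"` and `"f"` to be present, `ddmin(list("abcdefgh"), lambda d: "c" in d and "f" in d)` reduces the input to `["c", "f"]`. On a checksummed binary format, however, almost every `trial` would be rejected by the parser before the bug is reached, which motivates the FormatFuzzer-backed variant described above.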

 

15:00 - 15:30

 

Speaker: Daniel Erceg

Type of Talk: Bachelor Intro

Advisor: Nils Ole Tippenhauer

Title: Higher Level Function Classification using LLMs in Reverse Engineering

Research Area: RA5: Empirical and Behavioural Security

Abstract: Reverse engineering (RE) is a cornerstone of cybersecurity, enabling analysts to dissect and understand software with minimal documentation, particularly in malware analysis and vulnerability research. While advances in AI-supported RE have improved the recovery of low-level details such as function names, variable names, and type annotations, these techniques primarily focus on syntactic restoration. However, analysts require a deeper semantic understanding of binary structures, including the high-level roles of functions, such as memory management or cryptographic operations, to effectively navigate and prioritize complex binaries. This thesis explores the automation of semantic role inference for functions in stripped binaries, addressing key challenges like domain-specific differences, obfuscation, and contextual limits in large programs. Leveraging recent advancements in large language models (LLMs), the project aims to classify functions by their higher-level purpose, integrating contextual information from call relationships and structural analysis. By creating a structured dataset enriched with architectural and memory-layout details, the study develops a pipeline to infer function roles using LLMs and evaluates its effectiveness against source-code classifications. This work seeks to streamline RE tasks, enabling faster and more efficient analysis for cybersecurity professionals.

 

Session B

 

14:00 - 14:30

Speaker: Linda Müller

Type of Talk: Bachelor Final

Advisor: Michael Schwarz, Jan Reineke

Title: Implementation of Page Coloring in the Linux Kernel for x86

Research Area: RA3: Threat Detection and Defenses

Abstract: Side channels leak information through unintended means; for example, the speed of a memory access reveals whether the accessed memory was recently used. The Prime+Probe attack leverages such a cache-based side channel by continuously evicting a victim's memory from the cache and measuring the required time. To mitigate Prime+Probe attacks, each process's pages should map to different cache sets, so-called "page colors". In this thesis, we present our proof-of-concept implementation of page coloring in the Linux kernel against eviction-based cache side-channel attacks that originate from user space and target user space. We show that our kernel is secure against those attacks. However, on our kernel, the out-of-memory killer terminated 14 of our 24 tests. Moreover, our kernel is on average 84.81 ± 317.29 (n=75) times slower than a kernel compiled with the default x86 kernel configuration and on average 85.96 ± 321.60 (n=75) times slower than a kernel compiled with the same configuration as ours. Thus, although our kernel is secure, its functionality and performance overheads preclude widespread use.
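The notion of a "page color" above can be made concrete with a small sketch: a page's color is given by the cache-set-index bits that lie above the page offset, so two pages of different colors can never contend for the same cache sets. The cache geometry below (64-byte lines, 8192 sets, 4 KiB pages) is an assumed example, not the configuration evaluated in the thesis:

```python
# Assumed cache geometry for illustration only.
LINE_BITS = 6    # 64-byte cache lines
SET_BITS = 13    # 8192 cache sets
PAGE_BITS = 12   # 4 KiB pages

def page_color(phys_addr: int) -> int:
    """Color = set-index bits of the physical address above the page offset."""
    set_index = (phys_addr >> LINE_BITS) & ((1 << SET_BITS) - 1)
    return set_index >> (PAGE_BITS - LINE_BITS)
```

With these parameters there are 2^(6+13-12) = 128 colors, and consecutive physical pages cycle through them, which is why a coloring allocator fragments memory and can trigger the out-of-memory behavior reported above.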

 

14:30 - 15:00

 

Speaker: Simon Pietsch

Type of Talk: Bachelor Intro

Advisor: Sebastian Stich, Anton Rodomanov

Title: Combining a Relaxed Smoothness Assumption with Structural Nonconvexity

Research Area: RA1: Trustworthy Information Processing

Abstract: Training neural networks using gradient-based optimization is highly successful in practice, yet this success remains challenging to explain theoretically. Traditional convergence guarantees in optimization rely on assumptions such as convexity and L-smoothness, conditions that do not necessarily apply to the complex loss landscapes of neural networks. To address this gap, two new research directions have emerged: relaxing smoothness assumptions and exploring alternatives to convexity. While each of these approaches has been studied individually, their combination remains largely unexplored. This thesis aims to bridge this gap by providing convergence proofs under a framework that integrates these two types of relaxations. Through this work, we aim to contribute to a deeper understanding of the mathematical principles behind the successful training of neural networks.
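For context, the classical smoothness condition the abstract refers to, together with one widely studied relaxation of it, can be written as follows. Which specific relaxed assumption the thesis adopts is not stated in the abstract; the (L_0, L_1)-condition below is a common example:

```latex
% Classical L-smoothness:
\|\nabla f(x) - \nabla f(y)\| \le L \,\|x - y\| \quad \forall x, y.

% A common relaxation, (L_0, L_1)-smoothness (for twice-differentiable f):
\|\nabla^2 f(x)\| \le L_0 + L_1 \,\|\nabla f(x)\|.
```

The relaxed condition lets the local curvature grow with the gradient norm, which better matches empirically observed loss landscapes of neural networks than a single global constant L.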

 

15:00 - 15:30

 

Speaker: Sree Harsha Nelaturu

Type of Talk: Master Intro

Advisor: Rebekka Burkholz

Title: Accelerating Sparse Optimization

Research Area: RA1: Trustworthy Information Processing

Abstract: There is increasing interest in compressing models without compromising the performance of the underlying deep neural network. One such paradigm is pruning, which removes or deactivates parameters in a network based on a criterion such as magnitude. State-of-the-art methods for training sparse neural networks currently require multiple prune-retrain cycles, which are time-consuming and computationally expensive. Similar challenges also arise in methods that sparsify continuously or at initialization. In this work, we will explore optimization strategies to improve conditioning and integrate techniques to reduce both the wall-clock time and the overall number of training steps required to train sparse neural networks.
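The magnitude criterion mentioned above can be illustrated with a minimal one-shot pruning sketch; the prune-retrain cycles the abstract describes would alternate a step like this with further training. This is a toy illustration on a flat weight list, not the method of the thesis:

```python
def magnitude_prune(weights: list[float], sparsity: float) -> list[float]:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    k = int(len(weights) * sparsity)  # number of weights to remove
    if k == 0:
        return list(weights)
    # Threshold = k-th smallest absolute value; everything at or below it is pruned.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

For example, pruning `[0.5, -0.1, 0.3, -0.4]` at 50% sparsity zeroes the two smallest-magnitude entries, giving `[0.5, 0.0, 0.0, -0.4]`.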

 
