Dear All,
The next seminar will take place on 08.10.2025, 14:00 - 16:00, in CISPA C0, Room 0.02 (Stuhlsatzenhaus 5, 66123 Saarbrücken). Presenters and their advisors are encouraged to present in person. We especially encourage other students and teachers to attend in person as well.
For presenters,
1. The room is booked from half an hour before the session, so you are encouraged to arrive a few minutes early to set up your poster.
2. For this session, you need to print the poster yourself. The poster should be 116x86 cm or 86x116 cm. You can use the poster printing service of Saarland University (https://www.uni-saarland.de/en/page/uds-card/functions/printing.html -> Posterdruck A0).
3. You will present your poster to smaller groups, and you are encouraged to roam around and ask questions about the other posters.
4. We encourage you to bring your laptop to present your demo; there will be small tables in the room where you can put your laptop.
Presenters: Peter Gastauer, Bushra Ashfaque (two posters), Malik Ali Haider Awan, Simran Kathpalia, Prerak Mittal
08.10.2025, 14:00 - 16:00, CISPA C0 (Stuhlsatzenhaus 5, 66123 Saarbrücken)
Presenter: Peter Gastauer
Type of Poster: Master Intro
Advisor: Swen Jacobs
Title: Compiling Distributed Algorithms in Pseudocode into Extended Threshold Automata
Research Area: RA3: Reliable Security Guarantees
Abstract: Extended threshold automata have proven effective in the automatic verification of fault-tolerant distributed algorithms. The first important step in verification, however, lies in the faithful translation of the algorithm into a threshold automaton. This step can be tedious and error-prone when done by hand and also requires a solid understanding of the model. To ensure correctness throughout the verification process, an accurate automatic translation is thus preferable. Earlier work proposed a computationally expensive translation from pseudocode into less expressive threshold automata, via receive threshold automata. This work improves on the state of the art by compiling a pseudocode representation of a distributed algorithm directly into an extended threshold automaton. This allows users to work with a commonly used format and avoids the need for an error-prone manual translation.
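To make the target of the translation concrete, here is a minimal sketch of what a threshold-guarded transition rule might look like. The rule format, field names, and guard evaluation are simplified illustrative assumptions, not the actual compiler's output format.

```python
from dataclasses import dataclass

# Hypothetical, simplified model of a threshold automaton rule: a transition
# between local states, guarded by a threshold condition over shared message
# counters (e.g. "at least t + 1 echo messages received").

@dataclass(frozen=True)
class Rule:
    src: str          # source location
    dst: str          # target location
    counter: str      # shared message counter the guard reads
    threshold: str    # symbolic threshold expression over parameters, e.g. "t + 1"

def guard_holds(rule, counters, params):
    """Evaluate the threshold guard under concrete parameter values."""
    bound = eval(rule.threshold, {}, dict(params))  # symbolic expr -> int
    return counters.get(rule.counter, 0) >= bound

# Example: a process moves from 'sent' to 'accept' once it has received
# more than t echoes, i.e. at least t + 1.
accept = Rule("sent", "accept", "echo", "t + 1")
```

Fault-tolerant algorithms in pseudocode typically phrase such steps as "upon receiving more than t <echo> messages"; the compiler's job is to map each such clause to a rule of this kind.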
Presenter: Bushra Ashfaque
Type of Poster: Master Intro
Advisor: Andreas Zeller, Max Eisele
Title: Automated Embedded Pentesting using Fandango and LLM Agents
Research Area: RA4: Threat Detection and Defenses
Abstract: While Large Language Model (LLM) agents have demonstrated autonomous capabilities in exploiting vulnerabilities in web applications [1], a significant research gap exists in applying these AI-driven methodologies to the unique challenges of embedded systems. These systems, critical to automotive and IoT domains, are characterized by hardware-specific interfaces, real-time constraints, and specialized protocols inaccessible to conventional AI pentesting tools. This thesis, conducted in collaboration with Robert Bosch GmbH, addresses this gap by designing, implementing, and evaluating a comprehensive framework and testbench architecture that bridges the divide between AI agents and embedded hardware, following established pentesting methodologies [2]. The core of this work is a modular testbench and a novel abstraction layer, the Model Context Protocol (MCP), enabling standardized communication with hardware interfaces like CAN and UART. We employ the Fandango fuzzing framework [3] as the primary engine for test generation and execution. By translating formal protocol specifications, such as ISO 14229-1 UDS (Unified Diagnostic Services) [4], into a stateful, executable grammar within a self-contained .fan file, we empower Fandango's engine to autonomously manage and validate complex, multi-step interactions. This is achieved by embedding Python ConnectParty classes directly within the grammar, allowing Fandango to orchestrate the entire test flow from generation to response validation. The methodology will be validated using an ESP32 microcontroller, where this framework will be used to pentest a sample UDS implementation and evaluate its security features, such as secure boot and flash encryption. 
The ultimate goal of this thesis is to create a complete, automated pentesting pipeline that takes system specifications and security goals as input, generates a formal test plan as a Fandango grammar, executes a comprehensive fuzzing campaign against the target hardware, and leverages an LLM to generate a final, structured vulnerability report from the factual test results. This research will deliver a novel, open-source architecture for AI-assisted embedded security and provide empirical insights into its effectiveness in identifying protocol violations and security flaws in real-world embedded systems.
References:
[1] R. Fang, R. Bindu, A. Gupta, Q. Zhan, and D. Kang. LLM Agents can Autonomously Exploit One-day Vulnerabilities. arXiv preprint arXiv:2404.08144, 2024.
[2] OWASP Foundation. OWASP Web Security Testing Guide v4.2, 2021. https://owasp.org/www-project-web-security-testing-guide/
[3] Fandango fuzzer documentation. https://fandango-fuzzer.github.io/
[4] ISO 14229-1: Unified Diagnostic Services (UDS). https://drive.google.com/file/d/1tZzNG2Dz3EGsmsdWdHP5Z98Z3YrD9BGT/view?usp=sharing
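To illustrate the kind of stateful, multi-step UDS interaction the grammar has to orchestrate, here is a minimal sketch of the SecurityAccess (service 0x27) seed/key exchange. Frame layouts follow ISO 14229-1; the ECU stub and all function names are illustrative stand-ins, not part of the actual Fandango setup.

```python
# Sketch of the UDS SecurityAccess exchange: request a seed (odd
# sub-function), then answer with a key (sub-function + 1). A positive
# UDS response echoes the request SID plus 0x40 as its first byte.

SECURITY_ACCESS = 0x27
POSITIVE_OFFSET = 0x40

def request_seed(level=0x01):
    """Step 1: ask the ECU for a seed at the given security level."""
    return bytes([SECURITY_ACCESS, level])

def send_key(key, level=0x01):
    """Step 2: answer with the computed key (sub-function = level + 1)."""
    return bytes([SECURITY_ACCESS, level + 1]) + key

def is_positive(request, response):
    """Positive response: first byte = request SID + 0x40."""
    return len(response) > 0 and response[0] == request[0] + POSITIVE_OFFSET

# Toy ECU stub standing in for the real CAN/UART transport. It returns a
# FIXED seed -- exactly the kind of weakness automated analysis should catch.
def ecu_stub(frame):
    if frame[:2] == bytes([SECURITY_ACCESS, 0x01]):
        return bytes([SECURITY_ACCESS + POSITIVE_OFFSET, 0x01, 0xDE, 0xAD])
    return bytes([0x7F, frame[0], 0x35])  # negative response, NRC 0x35: invalidKey

req = request_seed()
resp = ecu_stub(req)
seed = resp[2:] if is_positive(req, resp) else b""
```

In the thesis's framework, a grammar plays the role of this hand-written flow: it generates the request sequence, parses the response, and validates the state transitions autonomously.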
Presenter: Malik Ali Haider Awan
Type of Poster: Master Intro
Advisor: Rafael Dutra, Andreas Zeller
Title: Learning Format Constraints for Enhanced Fuzzing
Research Area: RA4: Threat Detection and Defenses
Abstract: This thesis proposes a learning-based enhancement to FormatFuzzer to automatically discover and integrate format constraints—such as magic numbers and chunk identifiers—from valid sample files. By incorporating these learned constraints either manually or dynamically during fuzzing, the approach aims to significantly increase the validity of generated inputs and improve fuzzing effectiveness.
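A minimal sketch of the constraint-learning idea: given several valid sample files, recover the byte positions that are constant across all of them (magic numbers, chunk identifiers). The function name and the integration with FormatFuzzer are hypothetical; the actual thesis approach may be considerably more sophisticated.

```python
# Learn positions whose byte value is identical in every sample file;
# these are candidate format constraints (magic bytes, fixed identifiers).

def learn_constant_bytes(samples, limit=16):
    """Return {offset: byte} for positions identical in every sample."""
    n = min(limit, min(len(s) for s in samples))
    constants = {}
    for i in range(n):
        if len({s[i] for s in samples}) == 1:
            constants[i] = samples[0][i]
    return constants

# Example with PNG-like headers: the 8-byte signature is shared across
# files, while later (length/type) bytes differ per file.
samples = [
    b"\x89PNG\r\n\x1a\n" + b"\x00\x00\x01\x00",
    b"\x89PNG\r\n\x1a\n" + b"\x00\x00\x02\x40",
]
magic = learn_constant_bytes(samples)
```

Constraints learned this way can then be fed back into generation so that fuzzed inputs keep the fixed bytes intact and pass the target parser's early validity checks.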
Presenter: Simran Kathpalia
Type of Poster: Master Intro
Advisor: Christian Rossow, Marcel Böhme
Title: Efficient Software-Based Memory Tagging
Research Area: RA4: Threat Detection and Defenses
Abstract: Memory safety vulnerabilities represent one of the most critical security challenges in modern software systems. Despite decades of research and deployment of various mitigation techniques, C and C++ programs remain susceptible to memory corruption attacks such as buffer overflows, use-after-free, and use of uninitialized memory. Memory tagging has emerged as a promising defense mechanism, enabling the detection of illegal memory operations at runtime by associating lightweight metadata, or “tags,” with both pointers and memory allocations. Hardware implementations, such as ARM’s Memory Tagging Extension (MTE), SPARC’s Application Data Integrity (ADI), and now Apple's recent Memory Integrity Enforcement (MIE), demonstrate comprehensive protection with minimal overhead (<5%). However, the dominant x86 architecture lacks native hardware support for memory tagging, motivating software-based solutions such as xTag and StickyTags. While software approaches can achieve broad memory safety coverage, they incur substantial runtime and memory overhead, limiting their practicality in real-world deployments. This thesis investigates how to make software-based memory tagging on x86 efficient without compromising security guarantees. The main focus is on identifying the dominant sources of overhead in current schemes. Based on the results, we intend to design a partial tagging scheme that reduces the performance overhead without undermining security. By combining selective tagging with complementary hardware defenses, it may be possible to approximate the strong protections of hardware memory tagging while significantly reducing performance costs. This research seeks to make software memory tagging on x86 both practical and efficient, bridging the gap between hardware-supported security guarantees and deployable software defenses.
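To illustrate the tagging mechanism itself, here is a toy simulation: a small tag travels in the pointer's unused top bits and must match the tag of the allocation it points into, so stale or out-of-bounds pointers are caught at access time. The tag width and placement mirror ARM MTE's top-byte convention, but the class and all names here are purely illustrative, not any of the schemes discussed above.

```python
# Toy pointer/memory tagging: each allocation gets a tag; the same tag is
# packed into the top byte of the returned pointer. Every access checks
# pointer tag against allocation tag, so use-after-free raises an error.

TAG_SHIFT = 56                # tags live in the top byte, as in ARM MTE
TAG_MASK = 0xFF << TAG_SHIFT

class TaggedHeap:
    def __init__(self):
        self.alloc_tags = {}  # base address -> (tag, size)
        self.next_base = 0x1000
        self.next_tag = 1

    def malloc(self, size):
        base, tag = self.next_base, self.next_tag
        self.alloc_tags[base] = (tag, size)
        self.next_base += size + 16           # keep allocations disjoint
        self.next_tag = (self.next_tag % 15) + 1
        return base | (tag << TAG_SHIFT)      # tagged pointer

    def free(self, ptr):
        self.alloc_tags.pop(ptr & ~TAG_MASK, None)

    def load(self, ptr):
        """Check the pointer tag against the allocation tag on every access."""
        base, ptr_tag = ptr & ~TAG_MASK, ptr >> TAG_SHIFT
        entry = self.alloc_tags.get(base)
        if entry is None or entry[0] != ptr_tag:
            raise MemoryError("tag mismatch: invalid or stale pointer")
        return 0  # placeholder for the loaded value

heap = TaggedHeap()
p = heap.malloc(32)
heap.load(p)   # ok: tags match
heap.free(p)   # any later load through p now faults
```

The cost question the thesis targets is visible even in this toy: the check in `load` runs on every memory access, which is exactly where software schemes pay their overhead and where selective (partial) tagging could help.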
Presenter: Prerak Mittal
Type of Poster: Master Intro
Advisor: Aleksei Stafeev, Giancarlo Pellegrino
Title: e-BOLA Screening: Backtracking Object Lineage In Web APIs To Detect Authorization Issues
Research Area: RA6: Empirical and Behavioural Security
Abstract: Modern web applications are increasingly architected around APIs, a design choice that, despite its benefits, often leads to severe authorization vulnerabilities like Broken Object Level Authorization (BOLA). Traditional methods for identifying BOLA flaws are constrained by their reliance on static documentation (e.g., OpenAPI) or manual penetration testing, rendering them unscalable and inadequate for dynamic application environments. Unlike generic fuzzing, robust BOLA detection requires a deep understanding of the logical connections between the data objects managed by the API. This thesis introduces a novel LLM-assisted black-box approach that automates the discovery of these relationships. By analyzing live traffic, our system reconstructs the object lineage, inferring dependencies and hierarchies between disparate API entities. This data model is then leveraged to generate test cases and fuzz the API to uncover hidden authorization flaws.
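The core BOLA check such test cases boil down to can be sketched as: replay a request observed for user A's object under user B's credentials, and flag the endpoint if the object is still returned. The in-memory endpoint below is a stand-in for live traffic against a real target; all names and the `enforce_authz` toggle are illustrative assumptions.

```python
# Toy API with an owner check that a BOLA-vulnerable deployment skips.

ORDERS = {101: {"owner": "alice"}, 202: {"owner": "bob"}}

def api_get_order(session_user, order_id, enforce_authz):
    """Toy GET /orders/{id} endpoint returning (status, body)."""
    order = ORDERS.get(order_id)
    if order is None:
        return 404, None
    if enforce_authz and order["owner"] != session_user:
        return 403, None
    return 200, order

def bola_probe(order_id, owner, other_user, enforce_authz):
    """True if another user can read an object they do not own."""
    status_owner, _ = api_get_order(owner, order_id, enforce_authz)
    status_other, _ = api_get_order(other_user, order_id, enforce_authz)
    return status_owner == 200 and status_other == 200

vulnerable = bola_probe(101, "alice", "bob", enforce_authz=False)
```

The hard part, and the thesis's contribution, is knowing which object IDs are meaningful to swap: that is what the reconstructed object lineage provides, instead of blindly mutating identifiers.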
Presenter: Bushra Ashfaque
Type of Poster: Master Intro
Advisor: Max Eisele, Andreas Zeller, Alexander Liggesmeyer
Title: Penetration Testing on Embedded Systems using Fandango Constraints
Research Area: RA4: Threat Detection and Defenses
Abstract: Connected devices increasingly rely on standardized protocols to enable remote maintenance, configuration, and updates. Across domains such as IoT, industrial control, and automotive, many authentication mechanisms depend on the unpredictability of random values such as seeds, nonces, or challenges. If these values are predictable or biased, attackers can bypass protections and gain unauthorized access. We propose a grammar-based fuzzing framework built on the Fandango engine that integrates NIST randomness tests into protocol testing. The framework generates valid diagnostic sequences, evaluates challenge values in real time, and adapts its strategy when weaknesses are detected. Our case study is the Unified Diagnostic Services (UDS) protocol SecurityAccess mechanism, where Electronic Control Units (ECUs) issue authentication seeds that must resist prediction. The results include a reusable fuzzing-and-analysis toolchain, empirical insights into seed unpredictability, and recommendations for robust random number generation. Beyond automotive security, this approach provides a general methodology for testing any protocol whose authentication relies on high-quality randomness.
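As a flavor of the randomness evaluation, here is a minimal version of the NIST SP 800-22 monobit (frequency) test applied to a stream of collected seeds: a strongly biased bit stream yields a p-value near zero, flagging a weak generator. The seed width and helper names are illustrative; a real campaign would run the full NIST battery, since the monobit test alone is easy to pass.

```python
import math

def monobit_p_value(bits):
    """NIST SP 800-22 frequency test: p = erfc(|S_n| / sqrt(2n)),
    where S_n sums the bits mapped to +1/-1."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

def seed_bits(seeds, width=16):
    """Flatten collected integer seeds into a flat bit sequence."""
    return [(seed >> i) & 1 for seed in seeds for i in range(width)]

# A constant all-ones seed (the classic ECU mistake) fails immediately;
# a balanced pattern passes this particular test.
weak = monobit_p_value(seed_bits([0xFFFF] * 8))
balanced = monobit_p_value(seed_bits([0xAAAA] * 8))
```

Feeding each freshly harvested seed into such tests in real time is what lets the fuzzer adapt its strategy the moment a statistical weakness appears.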