News

Next Seminar on 17.07.2024

Written on 11.07.2024 13:33 by Niklas Medinger

Dear All,


The next seminars take place on 17.07.2024 at 14:00, in two parallel sessions (Session A and Session B).


Session A: (14:00 - 15:30)

Lucas Layfield, Mihirraj Dixit, Ayushi Churiwala

https://cispa-de.zoom.us/j/96786205841?pwd=M3FOQ3dSczRabDNLb3F1czVXVUpvdz09

Meeting ID: 967 8620 5841
Passcode: BT!u5=

 

Session B: (14:00 - 15:30)

Tim Göttlicher, Felix Fierlings, Philipp Settegast

https://cispa-de.zoom-x.de/j/66136901453?pwd=YVBSZU9peUpvUlk4bWp3MDR4cGlUUT09

 

Session A:

14:00 - 14:30

Speaker: Lucas Layfield
Type of talk: Bachelor Final
Advisor: Prof. Dr. Michael Backes, Xaver Fabian
Title: Sharpening the Blade: Extending the Blade tool with functionality for Spectre-BTB
Research Area: RA1
Abstract:
The WebAssembly (Wasm) standard describes a low-level language that is executed inside a sandboxed execution environment.
Originally introduced to complement JavaScript in web browsers, it has since found numerous applications in other areas,
one of them being edge cloud computing. Whenever multiple programs from untrusted sources share the same hardware,
the possibility of Spectre-type attacks arises. These attacks leverage micro-architectural prediction mechanisms
to redirect control flow during speculative execution and thereby leak secret data.

The Blade tool of Vassena et al. aims to eliminate information leakage through Spectre-PHT attacks in constant-time Wasm code.
It is based on a formal model of a Wasm-like language that is capable of modelling the speculative execution of conditional branches.
A type system tracks the information flow of transient values to expressions that might leak them.
To protect programs against Spectre attacks, Blade cuts those information flows by inserting protection mechanisms.

In this thesis, we extend the Blade tool to also protect against Spectre-BTB attacks on indirect function calls.
We add indirect function calls to the language of the formal model and extend the semantics to model their speculative execution.
New rules for the type system track the information flows introduced by the speculative execution of indirect calls.
Finally, we implement those changes in the Blade tool and evaluate it on a set of vulnerable example programs.
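
For intuition, the flow-cutting idea can be pictured on a small def-use graph. The sketch below is a hypothetical simplification rather than the tool's actual implementation (Blade operates on a typed Wasm-like language and chooses a minimal set of cuts): it merely marks the flow edges that lie on a path from a transiently computed value to a potentially leaking expression, i.e. the flows a protection must interrupt.

```python
from collections import deque

def flows_to_cut(edges, transient_sources, sinks):
    """Return the flow edges on paths from transient sources to leaking
    sinks; placing a fence on such a flow cuts the leak. This is a naive
    over-approximation of a min-cut-style fence placement."""
    succ, pred = {}, {}
    for u, v in edges:
        succ.setdefault(u, []).append(v)
        pred.setdefault(v, []).append(u)

    def reach(starts, nbrs):
        seen, todo = set(starts), deque(starts)
        while todo:
            n = todo.popleft()
            for m in nbrs.get(n, []):
                if m not in seen:
                    seen.add(m)
                    todo.append(m)
        return seen

    fwd = reach(transient_sources, succ)  # tainted by a transient value
    bwd = reach(sinks, pred)              # can still reach a leaking sink
    return [(u, v) for u, v in edges if u in fwd and v in bwd]

# Toy example: x is loaded speculatively and flows into an index used by
# a second memory access -- the classic leaking pattern.
edges = [("x", "i"), ("i", "load2"), ("y", "load2")]
print(flows_to_cut(edges, transient_sources={"x"}, sinks={"load2"}))
# -> [('x', 'i'), ('i', 'load2')]: a fence anywhere on this path cuts the flow
```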

14:30 - 15:00

Speaker: Mihirraj Dixit
Type of talk: Master Final
Advisor: Mridula Singh, Wouter Lueks
Research Area: RA4

No further information given.
 

15:00 - 15:30

Speaker: Ayushi Churiwala

Type of talk: Master Final

Advisor: Prof. Andreas Zeller, Prof. Mario Fritz

Title: LLM-based Active Code Repair

Research Area: RA3: Threat Detection and Defenses

Abstract: Code generation through generative AI is an emerging field that involves predicting code or program structures from incomplete data sources, natural language descriptions, alternate programming languages, or execution logs, offering the potential to drastically decrease the developer's workload and invested time. Developers have long resorted to using code from various online platforms and modifying it for their purposes. However, with advances in generative AI, especially in Large Language Models (LLMs) like ChatGPT, they can now instruct the machine (in natural language) to generate code, making external code searches redundant.

OpenAI's language model, ChatGPT, has recently gained prominence for its ability to produce human-like responses to various natural-language and textual inputs, including those related to code generation. Nevertheless, the true effectiveness of ChatGPT in code generation remains uncertain, as it can produce logically questionable results, and its performance can be significantly affected by the selection of prompts. This raises important questions about seamlessly integrating the code generated by ChatGPT into the development process, given its potential to expedite coding workflows and automate code generation. In particular, there is currently no automated testing and improvement framework tailored specifically to code generation systems. To address these issues, this research proposes to analyze the code generated by ChatGPT by exploring various prompt types and identifying and repairing inconsistent outputs. Our goal is to actively investigate the model's ability to self-repair. We examine how adding additional I/O pairs to the prompt, along with appropriate feedback, affects the code generation and automatic self-repair capabilities of ChatGPT, all within an automated conversational approach.
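
The conversational repair loop described above can be sketched as follows; `query_llm`, the prompt wording, and the convention that the model defines a `solve` function are hypothetical placeholders, not the thesis's actual framework.

```python
def query_llm(messages):
    """Hypothetical wrapper around a chat-based LLM API (e.g. OpenAI's);
    returns the model's reply as a string of Python code."""
    raise NotImplementedError

def run_tests(code, io_pairs):
    """Execute generated code against input/output pairs and collect
    failing cases. exec() on untrusted model output is for illustration
    only -- sandbox it in practice."""
    failures = []
    for inp, expected in io_pairs:
        try:
            env = {}
            exec(code, env)          # expects the model to define solve()
            got = env["solve"](inp)
        except Exception as e:
            got = f"error: {e}"
        if got != expected:
            failures.append((inp, expected, got))
    return failures

def active_repair(task, io_pairs, max_rounds=3):
    """Generate code, test it, and feed failures back conversationally."""
    messages = [{"role": "user",
                 "content": f"{task}\nExamples: {io_pairs}"}]
    for _ in range(max_rounds):
        code = query_llm(messages)
        failures = run_tests(code, io_pairs)
        if not failures:
            return code              # all I/O pairs pass
        messages.append({"role": "assistant", "content": code})
        messages.append({"role": "user",
                         "content": f"These cases fail: {failures}. "
                                    "Please fix the code."})
    return None                      # no passing candidate within budget
```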

 

Session B:

14:00 - 14:30

Speaker: Tim Göttlicher
Type of talk: Master Intro
Advisor: Sebastian Brandt

Title: Locality in Graph Algorithms with Local and Global Memory
Research Area: RA1

Abstract:
The theory of distributed graph algorithms studies how large networks can agree on a global solution when communication is limited to a local radius. In this work, we compare locality in a model with globally shared memory (Online-LOCAL) to locality in a model with only local communication (S-LOCAL). In particular, we look at Sinkless Orientation, a simple but fundamental problem to which other distributed problems can be reduced. We present proof techniques for deriving bounds on the locality of this problem under the different communication models.
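
For context: in its common formulation, Sinkless Orientation asks for an orientation of a graph's edges in which no node of sufficiently high degree (typically at least 3) is a sink, i.e. every such node keeps at least one outgoing edge. Below is a minimal sketch of a checker for this condition; the graph and orientation are toy examples, not taken from the talk.

```python
def is_sinkless(n, oriented_edges, min_degree=3):
    """Check an orientation for sinklessness: every node whose degree is
    at least `min_degree` must have at least one outgoing edge."""
    out_deg = [0] * n
    deg = [0] * n
    for u, v in oriented_edges:   # edge oriented from u to v
        out_deg[u] += 1
        deg[u] += 1
        deg[v] += 1
    return all(out_deg[v] > 0 for v in range(n) if deg[v] >= min_degree)

# K4 (every node has degree 3) with an orientation in which no node
# is a sink.
k4 = [(0, 1), (1, 2), (2, 0), (3, 0), (1, 3), (2, 3)]
print(is_sinkless(4, k4))  # True
```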

14:30 - 15:00

Speaker: Felix Fierlings
Type of talk: Bachelor Intro
Advisor: Valentin Dallmeier
Title: Using end-to-end tests to generate network-based load tests
Abstract:
Load testing is an important part of ensuring the functionality of web servers. Usually, there is a trade-off between generating sufficient random load on the network layer and running realistic but resource-heavy end-to-end tests in parallel. We aim to find a middle ground by using the request flows of predefined end-to-end tests, run via Playwright, to generate realistic but resource-efficient request flows. We will present and compare different strategies for generating such request flows and evaluate whether they achieve the desired effect of generating sufficient load while behaving similarly to the original end-to-end tests.
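
As an illustration of the recording half of such an approach, here is a minimal sketch assuming Playwright's Python bindings; the URL, selectors, and test scenario are hypothetical, and the replay and load-generation strategies the talk compares are not shown.

```python
from playwright.sync_api import sync_playwright

def record_request_flow(run_e2e_test):
    """Run an end-to-end test once in a real browser and capture the
    network-level request flow it produces, so that the same flow can
    later be replayed cheaply (without a browser) as load."""
    flow = []
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.on("request", lambda req: flow.append(
            (req.method, req.url, req.post_data)))
        run_e2e_test(page)           # the predefined end-to-end test
        browser.close()
    return flow

# Hypothetical end-to-end test: log in and open the dashboard.
def login_test(page):
    page.goto("https://example.com/login")
    page.fill("#user", "alice")
    page.fill("#password", "secret")
    page.click("#submit")
    page.wait_for_url("**/dashboard")

for method, url, body in record_request_flow(login_test):
    print(method, url, body)
```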

15:00 - 15:30

Speaker: Philipp Settegast

Type of talk: Bachelor Intro

Advisor: Dr.-Ing. Ben Stock, Trung Tin Nguyen

Title: Did I really agree to it? - A large-scale study about sensitive information leakage in WebViews

Research Area: RA5

Abstract:

On the mobile platform, various options exist for presenting internet content to the user. Traditionally, browsers have served this need well: Google Chrome and Mozilla Firefox, for example, are fully developed mobile browsers that follow the latest security and privacy standards.

However, the mobile environment is not limited to browser apps. Software components like WebView, Custom Tabs, or Trusted Web Activity provide another option for integrating web content into the native app environment. This approach is growing in popularity because these components offer developers a great degree of flexibility. The reasons for this trend are many and often depend on the particular situation. To name just one example: there is no longer a need to develop an application from scratch for each operating system. Instead, developers create the web content once and embed it on each system using one of the methods above. This simplifies updating the functionality and keeps the user within the app environment even longer. Still, this flexibility comes at a cost. In the case of the WebView class, developers have to configure and implement the WebView properly; otherwise, they risk violating the privacy of their users. Such a violation stems from an incorrect implementation of the permission-handling process, whereby scripts of any website can access sensitive information without the user's consent.

In our thesis, we investigate permission-handling classes related to the android.webkit.WebView class and evaluate whether their implementation affects users' privacy. For this purpose, we have created a pipeline that first identifies all apps with an actively used WebView. We then automatically check these WebView apps for potentially dangerous configurations and implementations. Based on the analysis of 1 million apps, we will provide insights into the impact of the vulnerability described in our work.
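
One step of such a pipeline can be pictured as a crude source-level heuristic. The sketch below is a hypothetical simplification, not the thesis's actual analysis: it flags decompiled Java sources that combine a WebView with an onPermissionRequest handler granting whatever the page requests (the auto-grant pattern request.grant(request.getResources()) without an origin check).

```python
import re
from pathlib import Path

# Risky Android WebChromeClient override: onPermissionRequest() that
# grants all requested resources unconditionally.
GRANT_ALL = re.compile(
    r"onPermissionRequest[\s\S]{0,400}?"
    r"\.grant\(\s*\w+\.getResources\(\)\s*\)")

def flag_risky_webviews(decompiled_dir):
    """Scan decompiled Java sources for WebViews that auto-grant
    permission requests (an illustrative heuristic, not a sound or
    complete analysis)."""
    hits = []
    for src in Path(decompiled_dir).rglob("*.java"):
        text = src.read_text(errors="ignore")
        if "android.webkit.WebView" in text and GRANT_ALL.search(text):
            hits.append(src)
    return hits
```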
