News

Next Seminar on 22.05.2024

Written on 16.05.2024 11:16 by Niklas Medinger

Dear All,


The next seminars take place on 22.05.2024 at 14:00 (Session A) and 14:30 (Session B).


Session A: (14:00-15:30)

Ujjval Desai, Marco Schichtel, Prathvish Mithare

https://cispa-de.zoom.us/j/96786205841?pwd=M3FOQ3dSczRabDNLb3F1czVXVUpvdz09

Meeting-ID: 967 8620 5841
Passcode: BT!u5=

 

Session B: (14:30-15:30)

Tobias Risch, Vasili Nikolaev

https://cispa-de.zoom-x.de/j/66136901453?pwd=YVBSZU9peUpvUlk4bWp3MDR4cGlUUT09

 

Session A:

14:00 - 14:30

Speaker: Ujjval Desai

No information provided.

 

14:30 - 15:00

Speaker: Marco Schichtel
Type of talk: Master Intro
Advisor: Dr. Nils Ole Tippenhauer
Title: Fingerprinting Peripherals in Blackbox Firmware
Research Area: RA4: Secure Mobile and Autonomous Systems
Abstract:
With an increasing number of smart embedded devices in use, it becomes more and more relevant to analyze their functionality for potential security flaws. However, the firmware for these kinds of devices is generally either proprietary or not well documented, making it difficult to analyze security-relevant functions.
Our high-level goal is to enable analysis of such proprietary firmware, for example through rehosting. Yet one big roadblock for this is the handling of peripherals, which need to be simulated in order to ensure smooth execution of the firmware. It is not trivial to determine which types of peripherals are used by firmware without supplementary documentation or sources.
In this thesis, we develop a framework to analyze peripherals in blackbox firmware. Our strategy consists of clustering peripheral accesses for each peripheral configuration and then using symbolic execution to determine potential semantic connections between different peripheral configuration fields. This should yield a better understanding of how the peripheral works and could, for example, lead to more efficient peripheral models.
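A minimal sketch of the clustering step described in the abstract, assuming a recorded trace of memory-mapped I/O accesses is available as (program counter, address, value) tuples; the trace format, the 0x400-byte peripheral stride, and the helper names are illustrative assumptions, not part of the thesis.

```python
from collections import defaultdict

# Hypothetical MMIO trace: (program_counter, address, value) tuples recorded
# while executing the blackbox firmware, e.g. inside an emulator.
trace = [
    (0x08000120, 0x40013800, 0x0000008B),  # e.g. a control-register write
    (0x08000124, 0x40013804, 0x00000000),
    (0x08000200, 0x40020000, 0x00000001),  # e.g. an access to another peripheral
]

def cluster_by_peripheral(trace, stride=0x400):
    """Group MMIO accesses by assumed peripheral base address.

    Many MCUs place each peripheral in its own fixed-size MMIO window;
    the 0x400-byte stride used here is only an illustrative assumption.
    """
    clusters = defaultdict(list)
    for pc, addr, value in trace:
        base = addr & ~(stride - 1)   # candidate peripheral base address
        offset = addr - base          # register offset within that peripheral
        clusters[base].append((pc, offset, value))
    return clusters

if __name__ == "__main__":
    for base, accesses in cluster_by_peripheral(trace).items():
        print(f"peripheral @ {base:#010x}: {len(accesses)} accesses")
```

Each resulting cluster could then serve as the unit on which symbolic execution looks for semantic connections between configuration fields.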

 

15:00 - 15:30

Speaker: Prathvish Mithare
Type of talk: Master Intro
Advisor: Dr. Lea Schönherr
Title: Fooling Review Summarizers Using Adversarial Attacks
Research Area: RA1
Abstract: Powerful large language models (LLMs) such as GPT-3.5 and GPT-4 can perform a wide range of tasks, including sentiment analysis and text summarization. Trained on vast amounts of data, these models showcase an impressive ability to understand and generate human-like text. However, as their applications become more widespread, so does the recognition of their vulnerability to manipulation and adversarial attacks.

Previous studies have delved into adversarial attacks on LLMs in contexts such as sentiment analysis, with relatively limited exploration of adversarial attacks on text generation. Our primary objective is to investigate the potential impact of adversarial attacks on existing LLMs, specifically within the domain of text summarization.

Consider a scenario where these models are used to automatically generate summaries of product reviews, intended to aid potential buyers in their decision making. A seller with malicious intent strategically inserts adversarial reviews into the pool of authentic reviews. These malicious reviews are crafted to deceive the LLM, leading it to unintentionally highlight the misleading content while generating the summary. Consequently, the biased summary may present a skewed perspective that supports the harmful intentions of the seller, misleading the prospective buyer who relies on these summaries to make informed decisions and thus making them the victim in this situation.

In this study, we aim to execute this attack by focusing on identifying a suffix that, when added to a diverse set of reviews presented to a language model, prompts the model to generate a biased summary. Our primary objective is to maximize the likelihood of the model producing a biased response rather than an unbiased summary. This approach allows product sellers to insert adversarial reviews, thereby influencing the output of review summarization models. Consequently, it has the potential to manipulate the summarization process to favor the seller's desired outcome, such as promoting specific sentiments or biases.
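To make the attack idea concrete, here is a minimal, self-contained sketch of suffix search by random mutation. The `summarize` and `bias_score` functions are toy placeholders standing in for an actual LLM summarizer and a bias metric, and the whole loop is an illustrative assumption rather than the method of the thesis; in practice, stronger search strategies (e.g. gradient-guided token search) would replace the naive random search.

```python
import random
import string

def summarize(reviews):
    """Toy placeholder for an LLM summarizer: simply concatenates the reviews.
    In the actual attack this would be a call to the target model."""
    return " ".join(reviews)

def bias_score(summary, target_phrase="best product ever"):
    """Toy placeholder for a bias metric: counts occurrences of the phrase
    the attacker wants the summary to promote."""
    return summary.lower().count(target_phrase)

def find_adversarial_suffix(reviews, steps=200, suffix_len=12, seed=0):
    """Hill-climbing sketch: mutate a suffix appended to every injected
    review and keep the variant that maximizes the bias score."""
    rng = random.Random(seed)
    alphabet = string.ascii_lowercase + " "
    best = "".join(rng.choice(alphabet) for _ in range(suffix_len))
    best_score = bias_score(summarize([r + " " + best for r in reviews]))
    for _ in range(steps):
        cand = list(best)
        cand[rng.randrange(suffix_len)] = rng.choice(alphabet)
        cand = "".join(cand)
        score = bias_score(summarize([r + " " + cand for r in reviews]))
        if score >= best_score:
            best, best_score = cand, score
    return best, best_score

if __name__ == "__main__":
    reviews = ["Decent battery life.", "Shipping was slow but the screen is nice."]
    suffix, score = find_adversarial_suffix(reviews)
    print("suffix:", repr(suffix), "| bias score:", score)
```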

 

Session B:

14:30 - 15:00

Speaker: Tobias Risch
Type of talk: Master Intro
Advisor: Prof. Dr. Andreas Zeller
Title: Fuzzing X509 Certificates - A Tale of Chains and Circles
Research Area: RA1 Trustworthy Information Processing
Abstract:

With the use of x509 certificates for identification growing more and more popular, there has also been an increasing need to validate these certificates. This need led to the creation of multiple implementations for certificate checking. To ensure that these implementations work correctly, they need to be tested. As x509 certificates are highly complex structures, generating them for testing is quite a costly task.

Over the past two years, there have been multiple bachelor's theses on ways to automate the generation of certificates. However, they mostly covered the generation of single certificates directly signed by a root authority.

In this work, I will extend an existing approach for certificate generation with the features required to generate certificate chains. Furthermore, I will use the certificate chains generated with this extended technique to perform differential testing on different clients (command-line tools as well as web browsers).
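A minimal sketch of the differential-testing step for command-line clients, assuming generated chains are available as PEM files on disk. The file names and the commented-out second validator are illustrative assumptions; only the `openssl verify` invocation is a standard command, and any further clients would be added to the same table.

```python
import subprocess

# Illustrative file names: a generated leaf certificate, its intermediate
# chain, and the root CA, all in PEM format.
LEAF = "leaf.pem"
INTERMEDIATES = "intermediates.pem"
ROOT = "root.pem"

# Validators under test: the openssl invocation is standard; the second entry
# is a placeholder for any other command-line client to compare against.
VALIDATORS = {
    "openssl": ["openssl", "verify", "-CAfile", ROOT, "-untrusted", INTERMEDIATES, LEAF],
    # "other-client": ["some-validator", "--ca", ROOT, "--chain", INTERMEDIATES, LEAF],
}

def run_validators(validators):
    """Run every validator on the same chain and record accept/reject verdicts."""
    verdicts = {}
    for name, cmd in validators.items():
        result = subprocess.run(cmd, capture_output=True, text=True)
        verdicts[name] = (result.returncode == 0)
    return verdicts

if __name__ == "__main__":
    verdicts = run_validators(VALIDATORS)
    print(verdicts)
    if len(set(verdicts.values())) > 1:
        # A disagreement between clients is the interesting case for differential testing.
        print("Disagreement between validators: candidate bug found.")
```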

 

15:00 - 15:30

Speaker: Vasili Nikolaev

No information provided.
