News

Next Seminar on 02.09.2020

Written on 28.08.2020 00:01 by Stella Wohnig

Dear All,

The next seminar(s) take place on 02.09. at 14:00.

Please remember to upload your submission info on time (= 1 week in advance, so on Wednesday)! Also, if you have multiple advisors who need to attend your talk, please add all of them to your Calendly registration.

Session A:
Marius Smytzek - Florian Grün - Janis Schmitt

https://zoom.us/j/96357406051?pwd=bW52em9CYUNqeG1aeC9razZEb2tVQT09

Meeting ID: 963 5740 6051
Passcode: 5p?d!N


Session B:
Aurora Naska - Vladislav Skripniuk

https://zoom.us/j/95417204858?pwd=cmMrSGdKcVk3NFpIZ285bHlka2hOQT09

Meeting ID: 954 1720 4858
Passcode: 3a*grG



Session A:

14:00-14:30 

Speaker: Marius Smytzek
Type of talk: Master Thesis intro talk
Advisor: Rahul Gopinath
Title: Mutation-Based Impact Analysis

Abstract:
Evaluating today’s complex software applications requires considerable effort. Their functional correctness can be verified by testing the software to the extent that confidence in it is high enough to establish its overall correctness. Validating stated claims or derived conclusions is even more troublesome.

In this thesis, I propose a technique to determine the impact of single program components on the eventual result. I consider two program elements as the substantial implementation features: functions and constants. The approach utilizes dynamic mutations, activatable at runtime, to disable particular components. The deviation of the resulting outcome from the actual result indicates the impact of the disabled element. Awareness of the essential parts of a program provides a better understanding of the origins of its outcomes. Hence, the approach could guide a developer in enhancing or extending the program to improve the quality of the results. Moreover, the extracted information aids the validation of scientific applications with respect to the claims or conclusions researchers derive from them, which would reinforce the quality and correctness of their work.

I evaluate the approach on a diverse collection of interacting program components from a set of real-world programs and show that my approach works in practice.

 
14:30-15:00

Speaker: Florian Grün
Type of talk: Bachelor Thesis Intro Talk
Advisor: Rahul Gopinath
Title: ARM Processor Fuzzing via QEMU

Abstract:

Each processor manufacturer publishes documentation of all usable processor instructions
for their specific processor architectures. However, not all possible sequences
of instructions are well-defined in this documentation, so the result of their
execution is undefined. In this bachelor thesis, I will implement a tool capable
of testing a QEMU-emulated ARM architecture, and I will use it to investigate whether
any recognizable misbehavior appears when executing undefined sequences
of instructions.

To do so, I will use the grammar-based F1 prototype fuzzer in combination with a self-written
instruction grammar manually derived from an instruction set description. This will
generate a massive number of instruction inputs, which will then be
executed on the emulated ARM processor architecture. The target of this fuzzing
campaign is a QEMU-emulated ARM Cortex-A9 processor, which will be tested
in user as well as supervisor execution mode.
 

15:00-15:30

Speaker: Janis Schmitt
Type of talk: Bachelor Thesis Final Talk
Advisor: Dr.-Ing. Sven Bugiel
Title: Hardening Android Against UI Deception Attacks

Abstract:

Imagine you want to log in to your online banking app on your Android phone. You open
your banking app and tap the login button. You enter your credentials, press login,
and an error message informs you that the server could not be reached and asks you to
try again. What happened?
A malicious application on your phone detected that the login activity of your
banking app went into the foreground and launched a phishing attack by jumping in front
of the original banking app, showing you a fake login screen that is visually indistinguishable
from the real one. We propose a defence mechanism against this and other UI-based
deception attacks without major limitations in user experience.

Session B:

14:00-14:30

Speaker: Aurora Naska
Type of talk: Master thesis intro talk
Advisor: Cas Cremers
Title: Clone detection in secure messaging

Abstract: In this work, we will examine the relationship between the strong security guarantees offered by modern protocols in the event of compromise and the actual detection of such a compromise. After the compromise of one of the partners, there are two reasonable requirements of the protocol: to lock the attacker out and to detect its activity. Post-compromise security (PCS), a guarantee of "healing" the communication channel after the compromise, satisfies the first requirement, while clone detection satisfies the second. Motivated by the seemingly complementary nature of the two problems, we are interested in identifying exactly how these two properties, and the protocols that achieve them, are related to one another.

We therefore propose an analysis of the two aforementioned security properties and the respective protocols that achieve them, namely protocols that "heal" after a compromise and protocols that detect and appropriately flag the compromise. In the end, we want to show how these security properties are achievable in modern secure messaging protocols.

14:30-15:00

Speaker: Vladislav Skripniuk
Type of talk: Final talk
Advisor: Prof. Dr. Mario Fritz
Title: Black-box Watermarking for Generative Adversarial Networks

Abstract: As companies start using deep learning to provide value to their customers, the demand for solutions to protect the ownership of trained models becomes evident. Several watermarking approaches have been proposed for protecting discriminative models. However, rapid progress in the task of photorealistic image synthesis, boosted by Generative Adversarial Networks (GANs), raises an urgent need for extending protection to generative models.

We propose the first watermarking solution for GAN models. We leverage steganography techniques to watermark the GAN training dataset, transfer the watermark from the dataset to the GAN models, and then verify the watermark from generated images. In our experiments, we show that the hidden encoding characteristic of steganography allows preserving generation quality and supports watermark secrecy against steganalysis attacks. We validate that our watermark verification is robust over a wide range of image and model perturbations. Critically, our solution treats GAN models as an independent component: watermark embedding is agnostic to GAN details, and watermark verification relies only on access to the APIs of black-box GANs.

We further extend our watermarking applications to generated image detection and attribution, which delivers a practical potential to facilitate forensics against deep fakes and responsibility tracking of GAN misuse.

 

15:00-15:30
No talk due to cancellation - you may still register last-minute by mailing bamaseminar@cispa.saarland