News
Next Seminar on 7.06.2023
Written on 05.06.2023 12:50 by Niklas Medinger
Dear All,
The next seminars take place on 07.06.2023, with Session A running from 14:00 to 15:00 and Session B from 14:00 to 14:30.
Session A: (14:00-15:00)
Justin Steuer, HTMA Riyadh
https://cispa-de.zoom.us/j/96786205841?pwd=M3FOQ3dSczRabDNLb3F1czVXVUpvdz09
Meeting ID: 967 8620 5841
Passcode: BT!u5=
Session B: (14:00-14:30)
Thomas Boisvert-Bilodeau
https://cispa-de.zoom.us/j/69371224982?pwd=amFFbmVBcVhDeGg5Q2VacXh0M3pKQT09
Session A:
14:00 - 14:30
Speaker: Justin Steuer
Type of talk: Bachelor Intro
Advisor: Andreas Zeller
Title: Constraint-Aware Parsing
Research Area: RA5: Empirical and Behavioural Security
Abstract:
Parsing is an integral tool of software development for decomposing input and checking it for correctness.
However, parsers that rely solely on context-free grammars, while versatile, can only check input for syntactic validity and cannot verify context-sensitive properties.
ISLa, a declarative specification language for context-sensitive properties, enables users to specify context-sensitive constraints on top of a context-free grammar that
every valid string must satisfy. ISLa can not only produce valid inputs, but also check whether a given string fulfills all specified constraints.
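As a rough illustration of this checking workflow (not material from the talk), here is a minimal sketch assuming the isla-solver Python package and that its ISLaSolver class exposes a check method, as in recent releases; the toy grammar and constraint describe strings of the form "<length>:<payload>":

# Minimal sketch, assuming the isla-solver package (pip install
# isla-solver) and an ISLaSolver.check method; the grammar and
# constraint are toy examples, not the talk's subject program.
from isla.solver import ISLaSolver

# Context-free grammar (Fuzzing-Book-style dict of expansions).
grammar = {
    "<start>": ["<length>:<payload>"],
    "<length>": ["<digit>"],
    "<digit>": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"],
    "<payload>": ["<char>", "<char><payload>"],
    "<char>": ["a", "b", "c"],
}

# Context-sensitive constraint: the length field must match the
# payload's actual length.
constraint = "str.to.int(<length>) = str.len(<payload>)"

solver = ISLaSolver(grammar, constraint)
print(solver.check("3:abc"))  # True: syntactically and semantically valid
print(solver.check("2:abc"))  # False: parses, but violates the constraint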
While this feature works, its current implementation is suboptimal: it first runs the string through a parser for the context-free grammar
(verifying its syntactic correctness) and only afterwards checks its semantic correctness.
This can be quite inefficient when many inputs have to be verified, since each input must be fully parsed regardless of whether it fulfills the semantic requirements.
This talk introduces the concept of Constraint-Aware Parsing, which builds upon the Earley parser, a parser for context-free grammars,
and extends it into a so-called 'Constraint Parser' that verifies context-sensitive constraints alongside the traditional parsing process.
The general idea is that a parser that is itself aware of the context-sensitive properties valid input must conform to
can detect an invalid input much earlier than the current method, especially in the case of a syntactically correct but semantically invalid input.
Furthermore, such a parser can use constraints to resolve ambiguity while parsing, which can make parsing with ambiguous grammars
much more efficient than the standard Earley parser, which creates a parse forest to handle ambiguity.
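To make the early-rejection idea concrete, here is a toy sketch on the same "<length>:<payload>" format as above; it is a hand-rolled scanner, not the Earley-based Constraint Parser the talk proposes, and all names are hypothetical:

# Toy illustration only: contrasts parse-then-check with checking the
# constraint during parsing and aborting as soon as it is violated.

def parse_then_check(s: str) -> bool:
    """Baseline: parse the whole input, then check the constraint."""
    length_field, sep, payload = s.partition(":")
    if not sep or not length_field.isdigit():
        return False                          # syntax error
    return int(length_field) == len(payload)  # semantic check at the very end

def constraint_aware_parse(s: str) -> bool:
    """Constraint-aware: track the constraint while consuming input."""
    length_field, sep, rest = s.partition(":")
    if not sep or not length_field.isdigit():
        return False                          # syntax error
    expected = int(length_field)
    consumed = 0
    for _ in rest:
        consumed += 1
        if consumed > expected:               # constraint already violated:
            return False                      # stop without reading the rest
    return consumed == expected

assert parse_then_check("3:abc") and constraint_aware_parse("3:abc")
assert not parse_then_check("2:abc") and not constraint_aware_parse("2:abc")

On a long input with a small declared length, the constraint-aware variant stops after expected + 1 payload characters instead of scanning the entire string first.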
14:30 - 15:00
Speaker: HTMA Riyadh
Type of talk: Master Final
Advisor: Katharina Krombholz
Title: Authentication Usability in Virtual Reality (VR)
Research Area: RA5: Empirical and Behavioural Security
Abstract:
Virtual Reality (VR) offers an immersive 3D environment for social, entertainment, and
research applications that require authentication. To earn end users' confidence
and satisfaction, reliable authentication usability is a must. Although prior research
shows promising results regarding the security of authentication, it lacks usability
studies. In this thesis (N=40), we investigate the usability of the authentication process in
VR using (1) a 2D PIN, which is well established and frequently used in daily activities,
and (2) a gesture-based authentication method, which is relatively new to most
people but a natural way of interacting. We find that both the authentication type and the
user's experience status have an impact on usability. Our results show that gestures achieve a higher
usability score than a PIN. We also observe that performance improves when the interaction
mode is natural. This work contributes to a better understanding of authentication usability
in virtual reality and helps balance the trade-off between usability and security.
Session B:
14:00 - 14:30
Speaker: Thomas Boisvert-Bilodeau
Type of talk: Bachelor Intro
Advisor: Dr. Yang Zhang
Title: Understanding the relationship between backdoor attacks and membership inference attacks
Research Area: Trustworthy Information Processing
Abstract: In the domain of deep learning, there are proven risks associated with using third-party resources like datasets, training services, or pre-trained models. A backdoor attack can be employed to control the behavior of a neural network when it is presented with a trigger. Once trained, classifiers can also be vulnerable to a membership inference attack: if a model's outputs differ noticeably between inputs that were used in its training and inputs that are new, an attacker can infer whether a data point was part of the training dataset. This is an obvious privacy concern when datasets contain personal or sensitive information. While both of these attacks have been studied and refined, little is known about how one influences the other. This work explores the relationship between backdoor attacks and membership inference attacks.
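As a rough illustration of the membership inference idea described above (not the thesis's method), here is a minimal sketch of a confidence-threshold attack, assuming only black-box access to the victim model's softmax scores; the scores and threshold are illustrative placeholders:

# Minimal sketch of a confidence-threshold membership inference attack.
import numpy as np

def infer_membership(softmax_scores: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Flag an input as a training-set member when the model's top
    confidence exceeds the threshold: overfit models tend to be more
    confident on points they were trained on than on unseen ones."""
    return softmax_scores.max(axis=1) > threshold

# Two query results: one very confident (likely member), one not.
scores = np.array([
    [0.97, 0.02, 0.01],   # confident prediction -> inferred member
    [0.40, 0.35, 0.25],   # uncertain prediction -> inferred non-member
])
print(infer_membership(scores))  # [ True False]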