
Next Seminar on 6.1.2021

Written on 31.12.2020 13:14 by Stella Wohnig

Dear All,

The next seminar takes place on 6.1. at 14:00.

Session A:
Pavithra Krishna - David Butscher - Marc Katz 

https://cispa-de.zoom.us/j/96786205841?pwd=M3FOQ3dSczRabDNLb3F1czVXVUpvdz09

Meeting ID: 967 8620 5841
Passcode: BT!u5=
 

14:00-14:30 

Speaker: Pavithra Krishna
Type of talk: Master Intro talk
Advisor: Prof. Dr. Andreas Zeller
Supervisor: Dr. Rahul Gopinath
Title: Fuzzing the Extended Berkeley Packet Filter Verifier Using Grammars

Abstract:
The Linux kernel is used widely across Linux-based distributions. One of its features is BPF (Berkeley Packet Filter), introduced in the 1990s. BPF makes it possible for a user-space process with minimal privileges to provide a filter program directly to the kernel, which runs with the highest privileges; the kernel then delivers only those packets that match the filter to the user process. From an efficiency point of view, this saves both time and memory. This technology has since evolved in many directions into what is now called eBPF (Extended Berkeley Packet Filter). It supports the filtering of network packets and allows programs to be attached to tracepoints to gain information about processes and CPU resources. This mechanism also forms the basis of seccomp (Secure Computing) BPF, a provision to filter the system calls between user and kernel space. As a security mechanism, eBPF has a built-in static analyzer, the eBPF verifier, which acts as a doorman to ensure that user-supplied programs are safe to execute. Any flaw or vulnerability in the eBPF verifier therefore gives direct control over the kernel: it may allow leaking memory addresses, gaining read-write access to arbitrary locations, and more. A self-test framework is currently used for the security testing of the eBPF verifier. Past research has primarily combined this existing self-test framework with well-known mutation-based fuzzers, which indicates that there is room for other techniques to find deeper bugs. This thesis aims to find such deeper bugs using grammar-based fuzzing as the core technique. Additionally, the work attempts to optimize the overall fuzzing process by reusing fragments of existing program code when generating new inputs from the grammar.
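
As a rough illustration of the core technique (a sketch, not the thesis's implementation), the minimal Python program below derives random inputs from a grammar. The toy assembly-like grammar is a made-up placeholder; the thesis would use a grammar of real eBPF programs and feed the generated programs to the kernel's verifier. One benefit of the approach is that generated inputs are syntactically valid by construction, so the fuzzer can reach the verifier's deeper semantic checks instead of being rejected up front.

    import random

    # Toy grammar: nonterminals map to lists of possible expansions.
    # This is an illustrative, made-up assembly-like language, NOT the
    # actual eBPF program grammar the thesis targets.
    GRAMMAR = {
        "<prog>": [["<insn>", "; ", "<prog>"], ["exit"]],
        "<insn>": [["mov ", "<reg>", ", ", "<imm>"],
                   ["add ", "<reg>", ", ", "<reg>"]],
        "<reg>":  [["r0"], ["r1"], ["r2"]],
        "<imm>":  [["0"], ["1"], ["42"]],
    }

    def expand(symbol: str) -> str:
        """Recursively expand a symbol into one random derivation."""
        if symbol not in GRAMMAR:      # terminal: emit as-is
            return symbol
        expansion = random.choice(GRAMMAR[symbol])
        return "".join(expand(s) for s in expansion)

    if __name__ == "__main__":
        for _ in range(5):
            # Each line is one syntactically valid generated "program".
            print(expand("<prog>"))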

14:30-15:00

Speaker: David Butscher
Type of talk: Bachelor Intro
Advisor: Ben Stock
Title: Measuring the Impact of the Crawling Context on the Results of Web Scanners

Abstract:
Web scanners are essential tools for security researchers. They are used to measure the prevalence of security flaws across the web. Unfortunately, their efficiency suffers from multiple limitations. One limitation arises in the first phase - the crawling phase - in which the crawling module needs to explore as many resources of a domain as possible. The URLs it finds are the input for the attack modules in the second phase; hence the efficiency of the whole web scanner depends on the efficiency of the crawler module. This work aims to identify factors that influence the success of web scanners, especially in the crawling phase. To reach this goal, we will change the context of the crawling process and examine the results of the applied attack modules. We will examine three factors in detail:
1. IP address: changing the IP address could circumvent countermeasures that web application firewalls deploy against known attackers. Some sites may also be accessible only from particular locations.
2. User-Agent: site owners may deliver multiple versions of their site, customized for the end user's device.
3. Authentication: signed-up users may access pages that anonymous users will never be able to reach. Since authenticating in an automated fashion is not trivial, we will mainly focus on websites that offer single sign-on (SSO).

We will use a limited set of top domains listed by Tranco for our experiments. For our evaluation, we will first run the web scanner without modifications and then with one factor modified at a time. Afterward, we will count the number of detected security flaws. In this way, we can measure the direct influence of the changes to the crawling context.
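
To make the setup concrete, here is a minimal sketch (Python standard library only) of varying one context factor, the User-Agent, against the same URL. The URL and User-Agent strings are illustrative placeholders, not the actual scanner configuration.

    import urllib.request

    # Illustrative User-Agent strings for three crawling contexts.
    USER_AGENTS = [
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",                # desktop
        "Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X)",   # mobile
        "ExampleScanner/0.1",                                       # obvious bot
    ]

    def fetch(url: str, user_agent: str) -> bytes:
        """Fetch a URL while presenting the given client identity."""
        req = urllib.request.Request(url, headers={"User-Agent": user_agent})
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.read()

    if __name__ == "__main__":
        url = "https://example.com/"   # placeholder target
        for ua in USER_AGENTS:
            body = fetch(url, ua)
            # Differing response sizes hint at context-dependent content.
            print(f"{ua[:40]:<40} -> {len(body)} bytes")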

 
15:00-15:30

Speaker: Marc Katz
Type of talk: Bachelor Intro
Advisor: Ben Stock
Title: Nasty Tag Soup - How gracious HTML parsing helps Mallory

Abstract: HTML is one of the first technologies that came with the invention of the World Wide Web and has evolved over the last 30 years, sometimes in more than one direction. Today, with HTML5 as the current version, the standard itself acknowledges that faulty implementations are common and can even influence the specification. Because of this, and because of backward and forward compatibility, modern browsers are not strict about parsing HTML and oftentimes reveal huge differences in their parsing routines.
The goal of this bachelor’s thesis is to analyze the gray areas of gracious HTML parsing in different browsers, to evaluate the security implications that come with it, and to understand the consequences of stricter parsing of security-relevant HTML parts in the wild. In particular, we identify tags that are or become more dangerous when parsed in violation of their specification, and we derive recommendations for parsing routines. We will also use crawled web data to measure the frequency of misused HTML tags in the wild and thereby estimate the impact of stricter parsing on usability.
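
As a small illustration of what lenient ("tag soup") parsing means in practice, the sketch below feeds malformed markup to Python's standard html.parser, used here merely as a stand-in for a browser engine; real engines are the subject of the thesis and may recover from the same input quite differently. The markup sample is illustrative.

    from html.parser import HTMLParser

    class TagLogger(HTMLParser):
        """Print every tag the lenient parser manages to recover."""
        def handle_starttag(self, tag, attrs):
            print("start:", tag, attrs)
        def handle_endtag(self, tag):
            print("end:  ", tag)

    # Malformed markup: mis-nested tags and a stray quote in an attribute.
    soup = '<b><i>mis-nested</b></i> <img src="x onerror="alert(1)">'

    # The parser raises no error and still yields tags and attributes;
    # different engines may recover different trees from the same soup,
    # which is exactly the gray area studied here.
    TagLogger().feed(soup)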

 
