Reproducibility in Computer Security

A scientific paper consists of a constellation of artifacts beyond the document itself: software, hardware, evaluation data and documentation, raw survey results, mechanized proofs, models, test suites, benchmarks, etc. In some cases, the quality of these artifacts is as important as that of the paper itself. In this seminar, you will learn about the replicability and reproducibility of artifacts in computer security. The seminar is structured in two parts, similar to the Machine Learning Security seminar:

  1. You will write a survey paper on the main topic of your assigned paper.
  2. You will evaluate the code of two papers in the context of an artifact evaluation. 

You will work in a group of two students, i.e., both students will work together on both parts. The general schedule of the seminar follows the typical processes used at computer security conferences, and you will learn about the latest results published at prominent security venues. The seminar is also inspired by the ML Reproducibility Challenge. We will consider holding regular meetings to discuss the selected papers in detail; a final decision on the design of the seminar is still pending. The planned outline is as follows:

Survey

Your group of two students will be assigned a topic (related to your assigned paper), for which you will read current research papers and summarize them in a survey paper. The resulting survey paper will undergo a peer-review process similar to that at academic conferences, including review, rebuttal, and revision phases in which everyone must participate.

This includes:

  • Writing a survey paper on the main topic of the group's papers (max. 6 pages)
  • Reviewing two survey papers from other groups to provide feedback
  • Improving the original survey paper based on the feedback received during the review process


Artifact Evaluation

The goal of this phase is to replicate or reproduce the results presented in two recent computer security papers. Ideally, you will fully replicate the results (e.g., all the main results presented in the paper) and check that the results you obtain are consistent with the description in the paper. Note that this will require some effort, as you will need to familiarize yourself with the code and other artifacts published by the authors. If possible, you could also evaluate the research artifacts further, for example by performing additional experiments to study the limitations of the proposed techniques.
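
To make "checking consistency with the paper" concrete, here is a minimal Python sketch of such a check; the file name, metric names, and tolerance are hypothetical placeholders, not part of any specific artifact, and would need to be adapted to the papers you evaluate:

    # Minimal sketch of a reproducibility consistency check.
    # All paths, metric names, and tolerances are hypothetical placeholders;
    # adapt them to the artifact you are evaluating.
    import json

    # Values reported in the paper, transcribed by hand from its tables.
    reported = {"detection_rate": 0.94, "false_positive_rate": 0.03}

    # Values produced by re-running the authors' artifact locally.
    with open("results/reproduced_metrics.json") as f:
        reproduced = json.load(f)

    TOLERANCE = 0.02  # acceptable absolute deviation; choose per metric in practice

    for metric, paper_value in reported.items():
        our_value = reproduced[metric]
        delta = abs(our_value - paper_value)
        status = "consistent" if delta <= TOLERANCE else "DIVERGES"
        print(f"{metric}: paper={paper_value:.3f} ours={our_value:.3f} "
              f"delta={delta:.3f} -> {status}")

In practice, your report should also record the software versions, hardware, and random seeds you used, since small deviations are often explained by such environment differences.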

At the end of the semester, your group will submit a report that presents the evaluation results. We will provide a structure for this report; it should be about 8 pages long.

Important Dates

  • Kick-off meeting in the second week of the semester: April 19 at 14:15 in the CISPA main building, room 0.07
  • Group and paper assignments: April 25
  • Submission of first version of survey paper and first AE report: June 14
  • Submission of reviews: July 7
  • Submission of final version of survey paper: July 21
  • Submission of artifact evaluation report: July 21


List of Papers

We will focus on papers published at top computer security venues such as the IEEE Symposium on Security and Privacy, the USENIX Security Symposium, the ACM Conference on Computer and Communications Security (CCS), and the Network and Distributed System Security Symposium (NDSS). The theme of the seminar is automated security testing, but other topics are also possible. A list of suggested topics will be provided, but you can also pick a topic you are interested in.


Deliverables and Grading

  • Final survey paper (50 % of your final grade)
  • Reviews (10 % of your final grade)
  • Artifact evaluation report (40 % of your final grade)