
Dr. Philipp Wendler

Akademischer Rat


Software and Computational Systems Lab
Department of Computer Science
Ludwig-Maximilians-Universität München (LMU Munich)
Oettingenstraße 67
80538 Munich (Germany)

Office
Room F 008, Oettingenstr. 67
Office hours
Monday 10-11, please contact via e-mail
Phone
+49 89 2180-9181
E-Mail
lastname @ sosy.ifi.lmu.de
Personal Homepage
https://www.philippwendler.de
ORCID
0000-0002-5139-341X

GPG-Key

Please send me encrypted mails!
My GPG key: 0x31A9DE8C
Fingerprint: 19D5 A10B 2D97 88D8 7CBE E5ED 62C0 F78C 31A9 DE8C

Thesis Mentoring

Available topics
Gather verification tasks from test suites of compilers and verifiers [1]

The SV-Benchmarks repository is a collection of verification tasks and is used, for example, in competitions of verifiers and other tools. Compilers like GCC and Clang likely have test suites with C programs that would be suitable as verification tasks if adjusted to the expected format. This should be done systematically for at least these two compilers and potentially others, and the result should be submitted to SV-Benchmarks. If not enough potential tasks can be found, another source of tasks would be the regression-test suites of verifiers and testing tools.
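
As a rough illustration of the conversion step, the following sketch wraps C files from a compiler test suite into task-definition files in the style used by SV-Benchmarks. The directory names and the chosen property are placeholders, and the exact YAML schema (format version, required fields such as the data model) has to be checked against the SV-Benchmarks documentation.

    # Sketch: wrap C files from a compiler test suite into SV-Benchmarks-style
    # task-definition files. Directory names and the property are placeholders;
    # the exact schema must be checked against the SV-Benchmarks documentation.
    # Requires PyYAML.
    from pathlib import Path
    import yaml

    PROPERTY = "../properties/unreach-call.prp"  # example property

    def write_task_definition(c_file: Path, out_dir: Path) -> None:
        task = {
            "format_version": "2.0",
            "input_files": c_file.name,
            "properties": [
                # The expected verdict has to be determined manually or taken
                # from the test suite's own metadata; True is a placeholder.
                {"property_file": PROPERTY, "expected_verdict": True},
            ],
        }
        (out_dir / (c_file.stem + ".yml")).write_text(
            yaml.safe_dump(task, sort_keys=False)
        )

    out_dir = Path("converted-tasks")
    out_dir.mkdir(exist_ok=True)
    for c_file in Path("gcc-testsuite").rglob("*.c"):
        write_task_definition(c_file, out_dir)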

Consolidate pointer-aliasing analyses in CPAchecker [1]

CPAchecker has several different implementations of pointer-aliasing analyses, and the current state of what they support is unclear. As a consequence, they are rarely used, which causes wrong results in the presence of pointer aliasing. The goal of this thesis is to create a strong CPA for pointer aliasing (either by merging the existing CPAs or by rewriting them) that supports all necessary use cases (fast and imprecise as well as more precise) and to migrate all existing uses of the other pointer CPAs, such that the old code can be deleted (#358). Afterwards, it may be possible to add additional features (e.g., #742) or to extend other components of CPAchecker to make use of the new CPA (e.g., #527, #398).
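
To give an idea of the "fast and imprecise" end of the spectrum, the following toy sketch (plain Python, not CPAchecker code) shows an equality-based may-alias analysis in the style of Steensgaard's algorithm: pointers that are ever assigned to each other end up in one equivalence class, which is cheap to compute but over-approximates the real alias relation.

    # Toy equality-based may-alias analysis (Steensgaard-style), for
    # illustration only: near-linear running time, but imprecise because
    # every assignment merges whole equivalence classes.
    class UnionFind:
        def __init__(self):
            self.parent = {}

        def find(self, x):
            self.parent.setdefault(x, x)
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x

        def union(self, a, b):
            self.parent[self.find(a)] = self.find(b)

    uf = UnionFind()
    # Pointer copies of the form "lhs = rhs" from a hypothetical program.
    for lhs, rhs in [("p", "q"), ("q", "r"), ("s", "t")]:
        uf.union(lhs, rhs)

    def may_alias(a, b):
        return uf.find(a) == uf.find(b)

    print(may_alias("p", "r"))  # True (merged via q)
    print(may_alias("p", "s"))  # False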

Refinement Selection with Adjustable-Block Encoding [1, 2, 3]
Improvements to generated HTML tables [1]

For the results of benchmarking runs, BenchExec produces interactive HTML tables (example) with filtering of rows, plots, etc. These tables are implemented as a React app, and we have several open feature requests and maintenance tasks, such as upgrading or replacing the libraries that are used.

Timestamps in logs [1]

During benchmarking runs, BenchExec collects the output of the benchmarked tool into files. In some use cases, it would be beneficial if each produced line were prefixed with a timestamp. For this, we need to pipe the tool output through the BenchExec process instead of writing it directly (GitHub issue). While implementing this, it might be possible to solve some related issues as well.
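
A minimal sketch of the piping idea (not BenchExec's actual implementation, and the command line is a placeholder): the tool's output is read line by line through a pipe, and each line is written to the log file with a timestamp prefix.

    # Sketch: prefix each output line of a subprocess with a timestamp.
    import subprocess
    from datetime import datetime

    with open("tool.log", "w") as log:
        proc = subprocess.Popen(
            ["./some-tool", "--arg"],      # placeholder command
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            text=True,
        )
        for line in proc.stdout:
            stamp = datetime.now().isoformat(timespec="milliseconds")
            log.write(f"{stamp} {line}")
        proc.wait()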

Overlay handling with fuse-overlayfs [1]

For benchmarking, BenchExec creates a container with a file-system overlay on top of the system's file system. However, due to a kernel change this is no longer possible for the root file system. Using fuse-overlayfs as an alternative overlay implementation might solve this issue. The goal of this project is to implement this and to evaluate how well it works compared to the kernel-based overlayfs.
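
For reference, this is roughly how an overlay could be mounted with fuse-overlayfs from Python; the paths are placeholders, and the mount options mirror those of the kernel's overlayfs, which fuse-overlayfs also understands.

    # Sketch: mount and unmount an overlay with fuse-overlayfs
    # (placeholder paths; requires fuse-overlayfs and FUSE to be installed).
    import subprocess

    lower, upper, work, mnt = "/", "/tmp/upper", "/tmp/work", "/tmp/overlay"
    subprocess.run(
        ["fuse-overlayfs",
         "-o", f"lowerdir={lower},upperdir={upper},workdir={work}",
         mnt],
        check=True,
    )
    # ... run the benchmarked tool inside the overlay ...
    subprocess.run(["fusermount3", "-u", mnt], check=True)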

Currently assigned topics
Modern Tests for BenchExec [1]

BenchExec's tests are currently written using the test framework nose, which is not maintained anymore and has problems on new Python versions. The goal of this project is to switch the test framework used for BenchExec. As a first step, an appropriate framework needs to be chosen. This should be done systematically, based on criteria like ease of use, how common the framework is, whether it is well maintained, etc. Next, the framework needs to be integrated into BenchExec's project structure, and the existing tests need to be adjusted where necessary. This might also include restructuring the tests and improving their state in general. Filling gaps in our test coverage with more unit tests would also be highly welcome.
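
Purely as an illustration of what tests could look like after such a migration (pytest is only one candidate framework, and the function under test is made up for this example):

    # Hypothetical pytest-style test; the tested function is a stand-in
    # for real BenchExec code, not an actual BenchExec API.
    import pytest

    def parse_core_range(text):
        """Parse "1-3" into [1, 2, 3] and "5" into [5]."""
        start, _, end = text.partition("-")
        return list(range(int(start), int(end or start) + 1))

    @pytest.mark.parametrize(
        "text,expected",
        [("1-3", [1, 2, 3]), ("5", [5])],
    )
    def test_parse_core_range(text, expected):
        assert parse_core_range(text) == expected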

Find appropriate garbage collector(s) for CPAchecker via a systematic study [1]

A study should be conducted that evaluates options for garbage collectors when executing CPAchecker, as described in the linked issue. The goal of this thesis is to have usable suggestions on which garbage collectors CPAchecker should use in which situation. For this, it is crucial that the study is performed in a systematic and well-planned manner, such that the results are reliable and meaningful.
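
A rough sketch of the comparison loop such a study might start from; the GC flags are real HotSpot options, but the command line is a placeholder (in practice, JVM options would be passed via CPAchecker's startup mechanism, and measurements should be taken with BenchExec for reliability).

    # Sketch: run the same task with different JVM garbage collectors and
    # compare wall time. The java invocation is a placeholder; real
    # measurements should be done with BenchExec.
    import subprocess, time

    GC_FLAGS = {
        "Serial":   "-XX:+UseSerialGC",
        "Parallel": "-XX:+UseParallelGC",
        "G1":       "-XX:+UseG1GC",
        "ZGC":      "-XX:+UseZGC",
    }

    for name, flag in GC_FLAGS.items():
        start = time.monotonic()
        subprocess.run(["java", flag, "-jar", "cpachecker.jar", "task.yml"],
                       check=False)  # placeholder invocation
        print(f"{name}: {time.monotonic() - start:.1f} s wall time")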

Finished topics
Upgrade of AngularJS, Refactoring UI and automated testing for CPAchecker [1, 2]

As a result of its verification work, CPAchecker produces a report. An example can be seen here. This report is a web application based on AngularJS and D3. The goal of this project is to replace AngularJS with Angular and to write tests for the functionality of this report, using an appropriate framework for automated tests.

String analysis in CPAchecker [1, 2]
LTL software model checking
A JavaScript frontend for CPAchecker
Improve analysis of Java programs in CPAchecker
Support for exceptions in the Java analysis of CPAchecker
Towards understandability of verification result: visualization of graphs produced by verifiers
Predicate analysis based on splitting states for CPAchecker [1, 2]
Improve predicate analysis based on domain-type of variables in CPAchecker [1, 2]
New Approaches and Visualization for Verification Coverage [1]
Good, bad, and ugly interpolants: measure interpolants and build heuristics (topic might be related to refinement selection) [1]
Make use of the Intel Cache Allocation Technology [1, 2]

If several benchmarking runs are executed in parallel on the same system, the memory usage of one run can influence the performance of the others (because CPU caches and the memory bandwidth are shared), even if each run gets its own reserved share of the memory. Recent Intel CPUs have a cache-allocation technology, which allows controlling the usage of caches and memory bandwidth and thus reduces the influence that parallel benchmarking runs have on each other. The goal of this project is to add support for this technology to BenchExec.
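
On Linux, this technology is exposed through the resctrl filesystem; the following sketch shows the basic idea (example cache mask, placeholder group name and PID, requires root and a mounted resctrl filesystem).

    # Sketch: confine a benchmark run to part of the L3 cache via resctrl,
    # which exposes Intel's Cache Allocation Technology.
    # Requires: mount -t resctrl resctrl /sys/fs/resctrl (as root).
    from pathlib import Path

    group = Path("/sys/fs/resctrl/benchmark_run_1")   # placeholder group name
    group.mkdir(exist_ok=True)

    # Allow this group to use only some of the L3 cache ways on cache domain 0
    # (the mask format depends on the CPU; "f" means 4 ways here).
    (group / "schemata").write_text("L3:0=f\n")

    # Move a process into the group; its cache usage is then confined.
    (group / "tasks").write_text("12345")             # placeholder PID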

Extend allocation of CPU cores in BenchExec [1, 2]

BenchExec is a benchmarking framework and assigns a specific set of CPU cores to each benchmark run. The calculation of which cores to use for which run needs to consider low-level details of the hardware architecture (like hyper-threading, NUMA, etc.) in order to avoid interference between runs as far as possible. Our current implementation supports multi-CPU systems with NUMA and hyper-threading, but not some more recent features like hybrid-architecture CPUs or CPUs with split Level 3 caches. The goal of this thesis is to implement a more general core allocation (in Python) with support for all kinds of modern CPUs, additional features, and rigorous tests. There should also be an experimental evaluation of how different allocation strategies and such hardware details influence benchmarking results.
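
To illustrate the kind of information such an allocator works with, this sketch reads the CPU topology from Linux sysfs and picks one hyper-thread per physical core; a real implementation additionally has to handle NUMA nodes, hybrid cores, split L3 caches, and the assignment of cores to multiple parallel runs.

    # Sketch: read CPU topology from sysfs and use only one hyper-thread
    # per physical core, so that parallel runs do not share a core.
    from collections import defaultdict
    from pathlib import Path

    def read(path):
        return Path(path).read_text().strip()

    siblings = defaultdict(list)  # (package, core_id) -> logical CPUs
    for cpu_dir in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
        cpu = int(cpu_dir.name[3:])
        package = read(cpu_dir / "topology/physical_package_id")
        core = read(cpu_dir / "topology/core_id")
        siblings[(package, core)].append(cpu)

    usable = sorted(min(cpus) for cpus in siblings.values())
    print("one logical CPU per physical core:", usable)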

Use cgroups v2 for BenchExec to improve container isolation [1, 2]

Our benchmarking framework BenchExec uses cgroups in order to limit and measure time and memory usage during benchmarking. The Linux kernel is replacing the old cgroups API with a new one (v2), and we need to support this in BenchExec (#133). Once we have it, we can use cgroup namespaces to better isolate our containers and to provide full support for nested containers (#436).
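
For orientation, the cgroup v2 interface is a single ("unified") hierarchy in which limits and measurements are plain files; the following sketch shows the principle (placeholder cgroup name and PID, assumes a writable, delegated cgroup).

    # Sketch of the cgroup v2 interface: one directory per run, limits and
    # measurements as plain files (placeholder cgroup name and PID).
    from pathlib import Path

    cg = Path("/sys/fs/cgroup/benchexec_run_1")
    cg.mkdir(exist_ok=True)

    (cg / "memory.max").write_text(str(8 * 1024**3))  # limit memory to 8 GiB
    (cg / "cgroup.procs").write_text("12345")         # move a process into the cgroup

    # After the run, measurements can be read back, e.g. the CPU time:
    cpu_stat = dict(line.split() for line in (cg / "cpu.stat").read_text().splitlines())
    print("CPU time (microseconds):", cpu_stat["usage_usec"])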

Modern architecture and tests for HTML tables of BenchExec [1]
Improve usability of summary tab for HTML tables of BenchExec [1]

If you're a student interested in writing your thesis at our chair, you should also have a look at our full list of currently available theses.

Projects

Publications

(List at DBLP, my ORCID)

Other Responsibilities