Thomas Lemberger

Software and Computational Systems Lab
Department of Computer Science
Ludwig-Maximilians-Universität München (LMU Munich)
Oettingenstraße 67
80538 Munich
Germany
- Office: Room EU 108, Oettingenstr. 67
- E-mail: thomas.lemberger@sosy.ifi.lmu.de
- ORCID: 0000-0003-0291-815X
- GPG key: 0x033DE66F (please send me encrypted mails!)
  Fingerprint: BBC4 36E1 F2BA BA4E 8E81 872E 9787 7E1F 033D E66F
Thesis Mentoring
Currently assigned topics
An IDE Plugin for Software Verification
This topic is available as a BSc thesis or as an MSc research training course with 6/12 ECTS. Mentoring is available in English and German.
Goal: We integrate the program-analysis tool CPAchecker into IDEs like VSCode and IntelliJ by building a language server for it.
Background: CPAchecker is a powerful tool for program analysis and formal verification that automatically finds errors in code. Currently, however, CPAchecker only has a command-line interface. To make it easier to use CPAchecker during development, we want to integrate it into IDEs. Traditionally, a separate IDE plugin had to be written for each IDE we want to support, but the Language Server Protocol (LSP) allows us to integrate CPAchecker into a variety of IDEs by building a single language server.
Details: You build a language server for CPAchecker and the C programming language. The server receives requests from IDEs that contain program code. The server then runs CPAchecker on that code and responds to the IDE with a list of found issues. A basic web service for communicating with CPAchecker already exists and can be used in the backend. The goal is to be able to deploy the language server both locally and remotely (e.g., in the cloud).
Knowledge of software verification and an understanding of Java micro-services are helpful. The language server should be written in Java.
In the scope of this topic, you have to create a design of the language server, implement this design, and test the server.
Verification of Micro-Services Based on OpenAPI Specifications
This topic is available as a BSc/MSc thesis or as a research training course with 6/12 ECTS. Mentoring is available in English and German.
Goal: We build a tool that reads the API specification of a micro-service, extracts some program properties from it, and uses a Java verifier (formal verifier or fuzz tester) to check these properties.
Background: We want to verify Java micro-services for correct behavior. The interfaces of most Java micro-services are RESTful HTTP APIs. These are often defined using the OpenAPI specification language in JSON or YAML format. OpenAPI allows developers to define constraints on the allowed HTTP requests and responses, for example: 'age': { 'type': 'integer', 'format': 'int32', 'minimum': 0 }. These constraints can be used for finding bugs in micro-services and for verifying that they always work as expected. The available constraints are defined in the JSON Schema Validation specification.
Details: You build a tool that receives an OpenAPI specification and a Java micro-service. The tool reads the OpenAPI specification, extracts the constraints on response parameters (for example, age.minimum: 0), turns them into a specification that the Java verifier can understand (for example, response.age >= 0), and then runs the verifier to verify that the micro-service implementation always fulfills this specification (or to report test inputs that trigger a bug). The constraints on request parameters can be used to restrict the input values to the micro-service and make the verification more precise. Depending on the verifier that is used, the specification may be given to the verifier or directly encoded in the Java code (e.g., to be able to use a fuzzer for verification).
Knowledge of software verification is helpful. An understanding of Java micro-services is necessary. The command-line tool can be written in Java or Kotlin.
In the scope of this topic, the tool must be designed, implemented, and evaluated.
Adding Assertions for Undefined Behavior to Programs
This topic is available as a BSc/MSc thesis or as a research training course with 6/12 ECTS. Mentoring is available in English and German.
Goal: We want to build a command-line tool that inserts assertion statements into a program to check for undefined behavior before it happens.
Background: We want to use verification to find undefined behavior in C software. Examples of undefined behavior in C are signed integer overflows, bit shifts that shift too far, array accesses that are out of bounds, or the use of uninitialized variables. A software verifier is a tool that checks some program code against a verification property ("the program never triggers any assertion failure"). Only a few verifiers support the search for undefined behavior in C code, but many support the search for assertion failures.
Details: You build a command-line tool that receives a C program and inserts assertions that check for undefined behavior before it happens. This program can then be verified by an off-the-shelf verifier to check for undefined behavior. The focus is on assertions that can be added without an expensive program analysis. Ideally, it is possible to map the modified program back to the original program. Before implementing the tool, we have to select the kinds of undefined behavior we want to check for and design methods to recognize them.
We focus on C code. A large selection of verifiers can be used off-the-shelf for an evaluation of the tool, and example programs exist.
Knowledge of software verification and an understanding of C are necessary. The command-line tool can be written in one of the following languages: Python, Java, Kotlin, C++.
In the scope of this topic, the command-line tool must be designed, implemented, and evaluated.
A Library for Unit Verification
This topic is available as a BSc/MSc thesis or as a research training course with 6/12 ECTS. Mentoring is available in English and German.
Goal: We want to build a command-line tool that builds a verification harness. The verification harness allows software verification to be used like unit testing.
Background: Similar to unit testing, we want to use verification to find bugs in individual software methods (or to prove that there are no bugs). A software verifier is a tool that checks some program code against a verification property ("the program never triggers any assertion failure"). But there are two issues: (1) For unit verification, there is no library like JUnit that provides methods for managing tests. (2) Users are not used to software verification and do not know how to write a verification harness. The second issue is tackled by the topic "Verification-Harness Synthesis for Unit Verification"; the first issue is tackled by this topic.
Details: You build a C library that helps software developers write and execute verification tasks, similar to unit testing with JUnit. The library provides methods to define the range of input values (examples: int temperature = anyInt(); and restrict(temperature > -273);) and to assert postconditions (example: assert(output == 0)). Ideally, the library provides utility methods that make it easier to define conditions on complex data structures like structs and/or pointer constructs. The library may also come with a small tool that runs all defined verification methods with a verifier and reports the individual results (like a test runner).
We focus on C code. A large selection of verifiers can be used off-the-shelf.
Knowledge of software verification and a basic understanding of C are necessary. The library must be written in C.
In the scope of this topic, the library is designed, implemented, and evaluated.
Finished topics
Mutation-Based Automatic Program Repair in CPAchecker
Debugging software usually consists of three steps: 1. finding an error in the program, 2. finding the cause for that error (the fault), and 3. fixing that fault. By default, fixing the fault is done manually by a programmer, which often consumes vast amounts of resources and may introduce new bugs. To ease that task, this topic is concerned with automatic program repair. After finding a potential fault in a program, automatic program repair proposes fixes and often checks the validity of these fixes. The goal is to implement and evaluate a technique for automatic program repair in C programs in the state-of-the-art verification framework CPAchecker. The complexity of the technique and the scope of the topic are based on the thesis type (project, bachelor, or master). Programming is done in Java.
Fault Localization in Model Checking. Implementation and Evaluation of Fault-Localization Techniques with Distance Metrics
There are many different ways to find out whether a program is buggy, for example testing or formal verification. Once we know that a program is buggy, i.e., that it shows some faulty behavior, a developer has to manually debug the program to find the cause of the problem and fix it - a difficult and time-consuming process. The aim of this thesis is to help programmers with debugging through the use of distance metrics (cf. Explaining Abstract Counterexamples, 2004).
Fault Localization for Formal Verification. An Implementation and Evaluation of Algorithms based on Error Invariants and UNSAT-cores
There are many different ways to find out whether a program is buggy, for example testing or formal verification. Once we know that a program is buggy, i.e., that it shows some faulty behavior, a developer has to manually debug the program to find the cause of the problem and fix it - a difficult and time-consuming process. The aim of this thesis is to help programmers with debugging by implementing error invariants and bug-finding with UNSAT cores, two techniques based on Boolean representations of the faulty program, to (1) automatically find potential causes for faulty behavior and (2) present the found causes.
Test-based Fault Localization in the Context of Formal Verification: Implementation and Evaluation of the Tarantula Algorithm in CPAchecker
There are many different ways to find out whether a program is buggy, for example testing or formal verification. Once we know that a program is buggy, i.e., that it shows some faulty behavior, a developer has to manually debug the program to find the cause of the problem and fix it - a difficult and time-consuming process. The aim of this thesis is to help programmers with debugging by implementing Tarantula, a technique for bug-finding based on test coverage, to (1) automatically find potential causes for faulty behavior and (2) present the found causes.
Converting Test Goals to Condition Automata
Conditional Model Checking and Conditional Testing are two techniques for combining verification tools. Conditional Testing describes work done through a set of covered test goals, and Conditional Model Checking describes work done through condition automata. Because of this discrepancy, the two techniques cannot be combined directly. To bridge this gap, this thesis transforms a set of test goals into a condition automaton to allow easy cooperation between Conditional Testing and Conditional Model Checking.
A Language Server and IDE Plugin for CPAchecker
At the moment, there are two ways to use CPAchecker: locally through the command line, and in the cloud through a web interface. Both options require C developers to leave their IDE every time they want to run CPAchecker, which disrupts their workflow. The goal of this project is to build an IDE plugin that provides a seamless experience of using CPAchecker (e.g., for Eclipse CDT, Visual Studio Code, or mbeddr). We would like the plugin to focus on ease of use and immediate feedback that is easy to comprehend. The plugin should:
- allow developers to run CPAchecker on their current project,
- provide some options for customization (e.g., whether to run CPAchecker locally or in the cloud), and
- provide useful feedback (e.g., if a possibly failing assert statement was found by CPAchecker).
If you're a student interested in writing your thesis at our chair, you should also have a look at our full list of currently available theses.
Teaching
- SS'22: Seminar Machine Learning in Software Engineering
- SS'22: Science and Practice in Software Engineering
- SS'22: Testen (block course)
- WS'21: Semantics of Programming Languages (block course)
- WS'21: Training Course (Praktikum) "Planmäßige Entwicklung eines größeren Softwaresystems"
- SS'21: Training Course (Praktikum) "SEP: Java-Programmierung"
- SS'21: Testen (block course)
- WS'20: Semantics of Programming Languages (block course)
- WS'20: Software Verification
- WS'20: Seminar Software Quality Assurance
- WS'20: Testen (block course)
- SS'20: Theoretische Informatik für Medieninformatik
- WS'19/20: Training Course (Praktikum) "SEP: Java-Programmierung"
- SS'19: Testen (block course)
- SS'19: Training Course (Praktikum) on "Advanced Software Engineering"
- WS'18/19: Training Course (Praktikum) "SEP: Java-Programmierung"
- SS'18: Training Course (Praktikum) on "Advanced Software Engineering"
- WS'17/18: Formal Specification and Verification 2
- WS'17/18: Training Course (Praktikum) on "Formal Specification and Verification 2"
Projects
- TestCov: Robust Test Execution, Coverage Measurement, and Test-suite Reduction (Maintainer)
- CondTest: A Proof of Concept for Conditional Software Testing (Creator)
- LLVM-j: Java Bindings for LLVM Libraries (Maintainer)
- CPAchecker: The Configurable Software-Verification Platform (Contributor)
- BenchExec: A Framework for Reliable Benchmarking and Resource Measurement (Contributor)