Publications of year 2017

Articles in journal or book chapters

  1. Dirk Beyer, Sumit Gulwani, and David Schmidt. Combining Model Checking and Data-Flow Analysis. In E. M. Clarke, T. A. Henzinger, and H. Veith, editors, Handbook on Model Checking. Springer, 2017, to appear. [ PDF ]
    @InCollection{HBMC17,
    author = {Dirk Beyer and Sumit Gulwani and David Schmidt},
    title = {Combining Model Checking and Data-Flow Analysis},
    booktitle = {Handbook on Model Checking},
    editor = {E.~M.~Clarke and T.~A.~Henzinger and H.~Veith},
    publisher = {Springer},
year = {2017},
note = {To appear},
isbn = {978-3-319-10574-1},
pdf = {https://www.sosy-lab.org/research/pub/Combining_Model_Checking_and_Data-Flow_Analysis.pdf},
}
    

  2. Dirk Beyer, Rolf Hennicker, Martin Hofmann, Tobias Nipkow, and Martin Wirsing. Software-Verifikation. In A. Bode, M. Broy, H.-J. Bungartz, and F. Matthes, editors, 50 Jahre Universitäts-Informatik in München, pages 75-86. Springer, 2017.
    @InCollection{InfMUC17,
    author = {Dirk Beyer and Rolf Hennicker and Martin Hofmann and Tobias Nipkow and Martin Wirsing},
    title = {Software-Verifikation},
    booktitle = {50 Jahre Universit{\"a}ts-Informatik in M{\"u}nchen},
    editor = {A.~Bode and M.~Broy and H.-J.~Bungartz and F.~Matthes},
    publisher = {Springer},
pages = {75--86},
    year = {2017},
    isbn = {978-3-662-54711-3},
    doi = {10.1007/978-3-662-54712-0_5},
    
    }
    

  3. Dirk Beyer, Matthias Dangl, and Philipp Wendler. A Unifying View on SMT-Based Software Verification. Journal of Automated Reasoning, 2017. [ Info ] [ PDF ]
    Abstract:
    After many years of successful development of new approaches for software verification, there is a need to consolidate the knowledge about the different abstract domains and algorithms. The goal of this paper is to provide a compact and accessible presentation of four SMT-based verification approaches in order to study them in theory and in practice. We present and compare the following different ``schools of thought'' of software verification: bounded model checking, k-induction, predicate abstraction, and lazy abstraction with interpolants. Those approaches are well-known and successful in software verification and have in common that they are based on SMT solving as the back-end technology. We reformulate all four approaches in the unifying theoretical framework of configurable program analysis and implement them in the verification framework CPAchecker. Based on this, we can present an evaluation that thoroughly compares the different approaches, where the core differences are expressed in configuration parameters and all other variables are kept constant (such as parser front end, SMT solver, used theory in SMT formulas). We evaluate the effectiveness and the efficiency of the approaches on a large set of verification tasks and discuss the conclusions.

    @Article{AlgorithmComparison-JAR,
    author = {Dirk Beyer and Matthias Dangl and Philipp Wendler},
    title = {A Unifying View on {SMT}-Based Software Verification},
    journal = {Journal of Automated Reasoning},
    year = {2017},
    doi = {10.1007/s10817-017-9432-6},
    pdf = {https://www.sosy-lab.org/research/pub/2017-JAR.A_Unifying_View_on_SMT-Based_Software_Verification.pdf},
    url = {https://www.sosy-lab.org/research/k-ind-compare/},
    abstract = {After many years of successful development of new approaches for software verification, there is a need to consolidate the knowledge about the different abstract domains and algorithms. The goal of this paper is to provide a compact and accessible presentation of four SMT-based verification approaches in order to study them in theory and in practice. We present and compare the following different ``schools of thought'' of software verification: bounded model checking, k-induction, predicate abstraction, and lazy abstraction with interpolants. Those approaches are well-known and successful in software verification and have in common that they are based on SMT solving as the back-end technology. We reformulate all four approaches in the unifying theoretical framework of configurable program analysis and implement them in the verification framework CPAchecker. Based on this, we can present an evaluation that thoroughly compares the different approaches, where the core differences are expressed in configuration parameters and all other variables are kept constant (such as parser front end, SMT solver, used theory in SMT formulas). We evaluate the effectiveness and the efficiency of the approaches on a large set of verification tasks and discuss the conclusions. },
    
    }
    

  4. Dirk Beyer, Stefan Löwe, and Philipp Wendler. Reliable Benchmarking: Requirements and Solutions. International Journal on Software Tools for Technology Transfer (STTT), 2017. [ Info ] [ PDF ]
    Abstract:
    Benchmarking is a widely used method in experimental computer science, in particular, for the comparative evaluation of tools and algorithms. As a consequence, a number of questions need to be answered in order to ensure proper benchmarking, resource measurement, and presentation of results, all of which is essential for researchers, tool developers, and users, as well as for tool competitions. We identify a set of requirements that are indispensable for reliable benchmarking and resource measurement of time and memory usage of automatic solvers, verifiers, and similar tools, and discuss limitations of existing methods and benchmarking tools. Fulfilling these requirements in a benchmarking framework can (on Linux systems) currently only be done by using the cgroup and namespace features of the kernel. We developed BenchExec, a ready-to-use, tool-independent, and open-source implementation of a benchmarking framework that fulfills all presented requirements, making reliable benchmarking and resource measurement easy. Our framework is able to work with a wide range of different tools, has proven its reliability and usefulness in the International Competition on Software Verification, and is used by several research groups worldwide to ensure reliable benchmarking. Finally, we present guidelines on how to present measurement results in a scientifically valid and comprehensible way.

    @Article{Benchmarking-STTT,
    author = {Dirk Beyer and Stefan L{\"o}we and Philipp Wendler},
    title = {Reliable Benchmarking: {R}equirements and Solutions},
    journal = {International Journal on Software Tools for Technology Transfer (STTT)},
    year = {2017},
    doi = {10.1007/s10009-017-0469-y},
    pdf = {https://www.sosy-lab.org/research/pub/2017-STTT.Reliable_Benchmarking_Requirements_and_Solutions.pdf},
    url = {https://www.sosy-lab.org/research/benchmarking/},
    abstract = {Benchmarking is a widely used method in experimental computer science, in particular, for the comparative evaluation of tools and algorithms. As a consequence, a number of questions need to be answered in order to ensure proper benchmarking, resource measurement, and presentation of results, all of which is essential for researchers, tool developers, and users, as well as for tool competitions. We identify a set of requirements that are indispensable for reliable benchmarking and resource measurement of time and memory usage of automatic solvers, verifiers, and similar tools, and discuss limitations of existing methods and benchmarking tools. Fulfilling these requirements in a benchmarking framework can (on Linux systems) currently only be done by using the cgroup and namespace features of the kernel. We developed BenchExec, a ready-to-use, tool-independent, and open-source implementation of a benchmarking framework that fulfills all presented requirements, making reliable benchmarking and resource measurement easy. Our framework is able to work with a wide range of different tools, has proven its reliability and usefulness in the International Competition on Software Verification, and is used by several research groups worldwide to ensure reliable benchmarking. Finally, we present guidelines on how to present measurement results in a scientifically valid and comprehensible way. },
    
    }
    

Articles in conference or workshop proceedings

  1. Pavel Andrianov, Karlheinz Friedberger, Mikhail U. Mandrykin, Vadim S. Mutilin, and Anton Volkov. CPA-BAM-BnB: Block-Abstraction Memoization and Region-Based Memory Models for Predicate Abstractions (Competition Contribution). In Axel Legay and Tiziana Margaria, editors, Proceedings of the 23rd International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2017, Uppsala, Sweden, April 22-29), LNCS 10206, pages 355-359, 2017. Springer-Verlag, Heidelberg. [ Info ]
    @inproceedings{CPABAM-COMP17,
    author = {Pavel Andrianov and Karlheinz Friedberger and Mikhail U. Mandrykin and Vadim S. Mutilin and Anton Volkov},
    title = {{CPA-BAM-BnB}: {Block}-Abstraction Memoization and Region-Based Memory Models for Predicate Abstractions (Competition Contribution)},
booktitle = {Proceedings of the 23rd International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2017, Uppsala, Sweden, April 22-29)},
    editor = {Axel Legay and Tiziana Margaria},
    series = {LNCS~10206},
    pages = {355--359},
    year = {2017},
    publisher = {Springer-Verlag, Heidelberg},
    isbn = {978-3-662-54579-9},
doi = {10.1007/978-3-662-54580-5_22},
}
    

  2. Dirk Beyer. Software Verification with Validation of Results (Report on SV-COMP 2017). In Axel Legay and Tiziana Margaria, editors, Proceedings of the 23rd International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2017, Uppsala, Sweden, April 22-29), LNCS 10206, pages 331-349, 2017. Springer-Verlag, Heidelberg. [ Info ] [ PDF ]
    @inproceedings{TACAS17,
    author = {Dirk Beyer},
    title = {Software Verification with Validation of Results ({R}eport on {SV-COMP} 2017)},
booktitle = {Proceedings of the 23rd International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2017, Uppsala, Sweden, April 22-29)},
    editor = {Axel Legay and Tiziana Margaria},
    series = {LNCS~10206},
pages = {331--349},
    year = {2017},
    publisher = {Springer-Verlag, Heidelberg},
    isbn = {978-3-662-54579-9},
    doi = {10.1007/978-3-662-54580-5_20},
    pdf = {https://www.sosy-lab.org/research/pub/2017-TACAS.Software_Verification_with_Validation_of_Results.pdf},
    url = {http://sv-comp.sosy-lab.org/},
    
    }
    

  3. Dirk Beyer, Matthias Dangl, Daniel Dietsch, and Matthias Heizmann. Exchanging Verification Witnesses between Verifiers. In J. Jürjens and K. Schneider, editors, Tagungsband Software Engineering 2017, Fachtagung des GI-Fachbereichs Softwaretechnik (21.-24. Februar 2017, Hannover, Deutschland), LNI P-267, pages 93-94, 2017. GI.
    @inproceedings{SE17-Witnesses,
    author = {Dirk Beyer and Matthias Dangl and Daniel Dietsch and Matthias Heizmann},
    title = {Exchanging Verification Witnesses between Verifiers},
    booktitle = {Tagungsband Software Engineering 2017, Fachtagung des GI-Fachbereichs Softwaretechnik (21.-24. Februar 2017, Hannover, Deutschland)},
    publisher = {{GI}},
    editor = {J.~J{\"{u}}rjens and K.~Schneider},
    series = {{LNI}~P-267},
pages = {93--94},
    year = {2017},
}
    

  4. Dirk Beyer and Thomas Lemberger. Software Verification: Testing vs. Model Checking. In Ofer Strichman and Rachel Tzoref-Brill, editors, Proceedings of the 13th Haifa Verification Conference (HVC 2017, Haifa, Israel, November 13-15), volume 10629 of LNCS, pages 99-114, 2017. Springer-Verlag, Heidelberg. [ PDF ]
    Abstract:
    In practice, software testing has been the established method for finding bugs in programs for a long time. But in the last 15 years, software model checking has received a lot of attention, and many successful tools for software model checking exist today. We believe it is time for a careful comparative evaluation of automatic software testing against automatic software model checking. We chose six existing tools for automatic test-case generation, namely AFL-fuzz, CPATiger, Crest-ppc, FShell, Klee, and PRtest, and four tools for software model checking, namely CBMC, CPA-Seq, ESBMC-incr, and ESBMC-kInd, for the task of finding specification violations in a large benchmark suite consisting of 5693 C programs. In order to perform such an evaluation, we have implemented a framework for test-based falsification (TBF) that executes and validates test cases produced by test-case generation tools in order to find errors in programs. The conclusion of our experiments is that software model checkers can (i) find a substantially larger number of bugs (ii) in less time, and (iii) require less adjustment to the input programs.
    Annotation:
    Received the HVC 2017 Best Paper Award

    @inproceedings{HVC17,
    author = {Dirk Beyer and Thomas Lemberger},
    title = {Software Verification: Testing vs. Model Checking},
booktitle = {Proceedings of the 13th Haifa Verification Conference (HVC~2017, Haifa, Israel, November 13-15)},
    editor = {Ofer Strichman and Rachel Tzoref-Brill},
    series = {LNCS},
    volume = {10629},
pages = {99--114},
    year = {2017},
    publisher = {Springer-Verlag, Heidelberg},
    isbn = {978-3-319-70389-3},
doi = {10.1007/978-3-319-70389-3_7},
    pdf = {https://www.sosy-lab.org/research/pub/2017-HVC.Software_Verification_Testing_vs_Model_Checking.pdf},
    annote = {Received the HVC 2017 Best Paper Award},
    abstract = { In practice, software testing has been the established method for finding bugs in programs for a long time. But in the last 15 years, software model checking has received a lot of attention, and many successful tools for software model checking exist today. We believe it is time for a careful comparative evaluation of automatic software testing against automatic software model checking. We chose six existing tools for automatic test-case generation, namely AFL-fuzz, CPATiger, Crest-ppc, FShell, Klee, and PRtest, and four tools for software model checking, namely CBMC, CPA-Seq, ESBMC-incr, and ESBMC-kInd, for the task of finding specification violations in a large benchmark suite consisting of 5693 C programs. In order to perform such an evaluation, we have implemented a framework for test-based falsification (TBF) that executes and validates test cases produced by test-case generation tools in order to find errors in programs. The conclusion of our experiments is that software model checkers can (i) find a substantially larger number of bugs (ii) in less time, and (iii) require less adjustment to the input programs. } 
    }
    

Theses (PhD, MSc, BSc, Project)

  1. Deyan Ivanov. Interactive Visualization of Verification Results from CPAchecker with D3. Bachelor's Thesis, LMU Munich, Software Systems Lab, 2017. [ PDF ]
    @misc{IvanovVisualization,
    author = {Deyan Ivanov},
    title = {Interactive Visualization of Verification Results from {{\sc CPAchecker}} with D3},
    howpublished = {Bachelor's Thesis, LMU Munich, Software Systems Lab},
    year = {2017},
    pdf = {https://www.sosy-lab.org/research/bsc/2017.Ivanov.Interactive_Visualization_of_Verification_Results_from_CPAchecker_with_D3.pdf},
    field = {Computer Science},
    
    }
    

  2. Stefan Löwe. Effective Approaches to Abstraction Refinement for Automatic Software Verification. PhD Thesis, University of Passau, Software Systems Lab, 2017. [ Info ] [ PDF ]
    @misc{StefanValueDomain,
    author = {Stefan L{\"{o}}we},
    title = {Effective Approaches to Abstraction Refinement for Automatic Software Verification},
    howpublished = {PhD Thesis, University of Passau, Software Systems Lab},
    year = {2017},
    pdf = {https://www.sosy-lab.org/research/phd/2017.Loewe.Effective_Approaches_to_Abstraction_Refinement_for_Automatic_Software_Verification.pdf},
    url = {https://www.sosy-lab.org/research/phd/loewe/},
    
    }
    

  3. Nils Steinger. Measuring, Visualizing, and Optimizing the Energy Consumption of Computer Clusters. Bachelor's Thesis, University of Passau, Software Systems Lab, 2017. [ Info ] [ PDF ]
    @misc{SteingerMeasuring,
    author = {Nils Steinger},
    title = {Measuring, Visualizing, and Optimizing the Energy Consumption of Computer Clusters},
    howpublished = {Bachelor's Thesis, University of Passau, Software Systems Lab},
    year = {2017},
    pdf = {https://www.sosy-lab.org/research/bsc/2017.Steinger.Measuring,_Visualizing,_and_Optimizing_the_Energy_Consumption_of_Computer_Clusters.pdf},
    url = {https://www.sosy-lab.org/research/bsc/steinger},
    field = {Computer Science},
    
    }
    

  4. Philipp Wendler. Towards Practical Predicate Analysis. PhD Thesis, University of Passau, Software Systems Lab, 2017. [ Info ] [ PDF ]
    @misc{PhilippPredicateAnalysis,
    author = {Philipp Wendler},
    title = {Towards Practical Predicate Analysis},
    howpublished = {PhD Thesis, University of Passau, Software Systems Lab},
    year = {2017},
    pdf = {https://www.sosy-lab.org/research/phd/2017.Wendler.Towards_Practical_Predicate_Analysis.pdf},
    url = {https://www.sosy-lab.org/research/phd/wendler/},
    
    }
    

  5. Gernot Zoerneck. Implementing PDR in CPAchecker. Bachelor's Thesis, University of Passau, Software Systems Lab, 2017.
    @misc{ZoerneckPDR,
    author = {Gernot Zoerneck},
    title = {Implementing PDR in {{\sc CPAchecker}}},
    howpublished = {Bachelor's Thesis, University of Passau, Software Systems Lab},
    year = {2017},
    field = {Computer Science},
    
    }
    




Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.




Last modified: Mon Dec 18 13:27:46 2017


This document was translated from BibTeX by bibtex2html