Publications of year 2017


Articles in journal or book chapters

  1. Dirk Beyer, Rolf Hennicker, Martin Hofmann, Tobias Nipkow, and Martin Wirsing. Software-Verifikation. In A. Bode, M. Broy, H.-J. Bungartz, and F. Matthes, editors, 50 Jahre Universitäts-Informatik in München, pages 75-86. Springer, 2017. Keyword(s): Software Model Checking.
    @InCollection{InfMUC17,
    author = {Dirk Beyer and Rolf Hennicker and Martin Hofmann and Tobias Nipkow and Martin Wirsing},
    title = {Software-Verifikation},
    booktitle = {50 Jahre Universit{\"a}ts-Informatik in M{\"u}nchen},
    editor = {A.~Bode and M.~Broy and H.-J.~Bungartz and F.~Matthes},
    publisher = {Springer},
    pages = {75-86},
    year = {2017},
    isbn = {978-3-662-54711-3},
    doi = {10.1007/978-3-662-54712-0_5},
    keyword = {Software Model Checking},
    
    }
    

  2. Dirk Beyer, Stefan Löwe, and Philipp Wendler. Reliable Benchmarking: Requirements and Solutions. International Journal on Software Tools for Technology Transfer (STTT), 2017. [ Info ] [ PDF ] Keyword(s): Benchmarking.
    Abstract:
    Benchmarking is a widely used method in experimental computer science, in particular, for the comparative evaluation of tools and algorithms. As a consequence, a number of questions need to be answered in order to ensure proper benchmarking, resource measurement, and presentation of results, all of which is essential for researchers, tool developers, and users, as well as for tool competitions. We identify a set of requirements that are indispensable for reliable benchmarking and resource measurement of time and memory usage of automatic solvers, verifiers, and similar tools, and discuss limitations of existing methods and benchmarking tools. Fulfilling these requirements in a benchmarking framework can (on Linux systems) currently only be done by using the cgroup and namespace features of the kernel. We developed BenchExec, a ready-to-use, tool-independent, and open-source implementation of a benchmarking framework that fulfills all presented requirements, making reliable benchmarking and resource measurement easy. Our framework is able to work with a wide range of different tools, has proven its reliability and usefulness in the International Competition on Software Verification, and is used by several research groups worldwide to ensure reliable benchmarking. Finally, we present guidelines on how to present measurement results in a scientifically valid and comprehensible way.

    @Article{Benchmarking-STTT,
    author = {Dirk Beyer and Stefan L{\"o}we and Philipp Wendler},
    title = {Reliable Benchmarking: {R}equirements and Solutions},
    journal = {International Journal on Software Tools for Technology Transfer (STTT)},
    year = {2017},
    doi = {10.1007/s10009-017-0469-y},
    pdf = {https://www.sosy-lab.org/research/pub/2017-STTT.Reliable_Benchmarking_Requirements_and_Solutions.pdf},
    url = {https://www.sosy-lab.org/research/benchmarking/},
    abstract = {Benchmarking is a widely used method in experimental computer science, in particular, for the comparative evaluation of tools and algorithms. As a consequence, a number of questions need to be answered in order to ensure proper benchmarking, resource measurement, and presentation of results, all of which is essential for researchers, tool developers, and users, as well as for tool competitions. We identify a set of requirements that are indispensable for reliable benchmarking and resource measurement of time and memory usage of automatic solvers, verifiers, and similar tools, and discuss limitations of existing methods and benchmarking tools. Fulfilling these requirements in a benchmarking framework can (on Linux systems) currently only be done by using the cgroup and namespace features of the kernel. We developed BenchExec, a ready-to-use, tool-independent, and open-source implementation of a benchmarking framework that fulfills all presented requirements, making reliable benchmarking and resource measurement easy. Our framework is able to work with a wide range of different tools, has proven its reliability and usefulness in the International Competition on Software Verification, and is used by several research groups worldwide to ensure reliable benchmarking. Finally, we present guidelines on how to present measurement results in a scientifically valid and comprehensible way. },
    keyword = {Benchmarking},
    
    }
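
    The abstract above notes that, on Linux, reliable resource limits and measurements currently require the kernel's cgroup (and namespace) features. The following minimal Python sketch is not taken from BenchExec; it merely illustrates the cgroup-v1 memory controller in that spirit. The mount point, group name, file names, and limits are illustrative assumptions, and a real framework handles many more details (swap accounting, child processes, cleanup, namespaces).

    # Minimal sketch of limiting and measuring memory with the cgroup-v1
    # memory controller: create a cgroup, set a limit, start the benchmarked
    # tool inside it, and read its peak memory usage afterwards.
    # NOT code from BenchExec; paths and limits are assumptions.
    import os
    import subprocess
    import uuid

    def run_with_memory_limit(command, memlimit_bytes):
        cgroup = "/sys/fs/cgroup/memory/bench-" + uuid.uuid4().hex[:8]
        os.mkdir(cgroup)  # requires a mounted cgroup-v1 hierarchy and privileges
        try:
            with open(os.path.join(cgroup, "memory.limit_in_bytes"), "w") as f:
                f.write(str(memlimit_bytes))

            def enter_cgroup():
                # Runs in the child between fork and exec, so the benchmarked
                # tool starts its life already inside the cgroup.
                with open(os.path.join(cgroup, "cgroup.procs"), "w") as f:
                    f.write(str(os.getpid()))

            proc = subprocess.Popen(command, preexec_fn=enter_cgroup)
            proc.wait()
            with open(os.path.join(cgroup, "memory.max_usage_in_bytes")) as f:
                peak_memory = int(f.read())
            return proc.returncode, peak_memory
        finally:
            os.rmdir(cgroup)  # only succeeds once the cgroup is empty

    # Hypothetical usage: limit a verifier run to 4 GiB and report peak memory.
    # returncode, peak = run_with_memory_limit(["./verifier", "program.c"], 4 * 1024**3)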
    

Articles in conference or workshop proceedings

  1. Pavel Andrianov, Karlheinz Friedberger, Mikhail U. Mandrykin, Vadim S. Mutilin, and Anton Volkov. CPA-BAM-BnB: Block-Abstraction Memoization and Region-Based Memory Models for Predicate Abstractions (Competition Contribution). In Axel Legay and Tiziana Margaria, editors, Proceedings of the 23rd International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2017, Uppsala, Sweden, April 22-29), LNCS 10206, pages 355-359, 2017. Springer-Verlag, Heidelberg. [ Info ] Keyword(s): CPAchecker, SV-COMP, Software Model Checking.
    @inproceedings{CPABAM-COMP17,
    author = {Pavel Andrianov and Karlheinz Friedberger and Mikhail U. Mandrykin and Vadim S. Mutilin and Anton Volkov},
    title = {{CPA-BAM-BnB}: {Block}-Abstraction Memoization and Region-Based Memory Models for Predicate Abstractions (Competition Contribution)},
    booktitle = {Proceedings of the 23rd International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2017, Uppsala, Sweden, April 22-29)},
    editor = {Axel Legay and Tiziana Margaria},
    series = {LNCS~10206},
    pages = {355--359},
    year = {2017},
    publisher = {Springer-Verlag, Heidelberg},
    isbn = {978-3-662-54579-9},
    url = {http://dx.doi.org/10.1007/978-3-662-54580-5_22},
    doi = {10.1007/978-3-662-54580-5_22},
    keyword = {CPAchecker,SV-COMP,Software Model Checking},
    
    }
    

  2. Dirk Beyer. Software Verification with Validation of Results (Report on SV-COMP 2017). In A. Legay and T. Margaria, editors, Proceedings of the 23rd International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2017, Uppsala, Sweden, April 22-29), LNCS 10206, pages 331-349, 2017. Springer-Verlag, Heidelberg. [ Info ] [ PDF ] Keyword(s): SV-COMP, Software Model Checking.
    @inproceedings{TACAS17,
    author = {Dirk Beyer},
    title = {Software Verification with Validation of Results ({R}eport on {SV-COMP} 2017)},
    booktitle = {Proceedings of the 23rd International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2017, Uppsala, Sweden, April 22-29)},
    editor = {A.~Legay and T.~Margaria},
    series = {LNCS~10206},
    pages = {331-349},
    year = {2017},
    publisher = {Springer-Verlag, Heidelberg},
    isbn = {978-3-662-54579-9},
    doi = {10.1007/978-3-662-54580-5_20},
    pdf = {https://www.sosy-lab.org/research/pub/2017-TACAS.Software_Verification_with_Validation_of_Results.pdf},
    url = {http://sv-comp.sosy-lab.org/},
    keyword = {SV-COMP,Software Model Checking},
    
    }
    

  3. Dirk Beyer, Matthias Dangl, Daniel Dietsch, and Matthias Heizmann. Exchanging Verification Witnesses between Verifiers. In J. Jürjens and K. Schneider, editors, Tagungsband Software Engineering 2017, Fachtagung des GI-Fachbereichs Softwaretechnik (21.-24. Februar 2017, Hannover, Deutschland), LNI P-267, pages 93-94, 2017. GI. Keyword(s): CPAchecker, Software Model Checking.
    @inproceedings{SE17-Witnesses,
    author = {Dirk Beyer and Matthias Dangl and Daniel Dietsch and Matthias Heizmann},
    title = {Exchanging Verification Witnesses between Verifiers},
    booktitle = {Tagungsband Software Engineering 2017, Fachtagung des GI-Fachbereichs Softwaretechnik (21.-24. Februar 2017, Hannover, Deutschland)},
    publisher = {{GI}},
    editor = {J.~J{\"{u}}rjens and K.~Schneider},
    series = {{LNI}~P-267},
    pages = {93-94},
    year = {2017},
    keyword = {CPAchecker,Software Model Checking},
    
    }
    

  4. Dirk Beyer and Thomas Lemberger. Software Verification: Testing vs. Model Checking. In O. Strichman and R. Tzoref-Brill, editors, Proceedings of the 13th Haifa Verification Conference (HVC 2017, Haifa, Israel, November 13-15), LNCS 10629, pages 99-114, 2017. Springer-Verlag, Heidelberg. [ PDF ] Keyword(s): CPAchecker, Software Model Checking.
    Abstract:
    In practice, software testing has been the established method for finding bugs in programs for a long time. But in the last 15 years, software model checking has received a lot of attention, and many successful tools for software model checking exist today. We believe it is time for a careful comparative evaluation of automatic software testing against automatic software model checking. We chose six existing tools for automatic test-case generation, namely AFL-fuzz, CPATiger, Crest-ppc, FShell, Klee, and PRtest, and four tools for software model checking, namely CBMC, CPA-Seq, ESBMC-incr, and ESBMC-kInd, for the task of finding specification violations in a large benchmark suite consisting of 5693 C programs. In order to perform such an evaluation, we have implemented a framework for test-based falsification (TBF) that executes and validates test cases produced by test-case generation tools in order to find errors in programs. The conclusion of our experiments is that software model checkers can (i) find a substantially larger number of bugs (ii) in less time, and (iii) require less adjustment to the input programs.
    Annotation:
    Won the HVC 2017 Best Paper Award

    @inproceedings{HVC17,
    author = {Dirk Beyer and Thomas Lemberger},
    title = {Software Verification: Testing vs. Model Checking},
    booktitle = {Proceedings of the 13th Haifa Verification Conference (HVC~2017, Haifa, Israel, November 13-15)},
    editor = {O.~Strichman and R.~Tzoref-Brill},
    series = {LNCS~10629},
    pages = {99-114},
    year = {2017},
    publisher = {Springer-Verlag, Heidelberg},
    isbn = {978-3-319-70389-3},
    doi = {10.1007/978-3-319-70389-3_7},
    pdf = {https://www.sosy-lab.org/research/pub/2017-HVC.Software_Verification_Testing_vs_Model_Checking.pdf},
    annote = {Won the HVC 2017 Best Paper Award},
    abstract = { In practice, software testing has been the established method for finding bugs in programs for a long time. But in the last 15 years, software model checking has received a lot of attention, and many successful tools for software model checking exist today. We believe it is time for a careful comparative evaluation of automatic software testing against automatic software model checking. We chose six existing tools for automatic test-case generation, namely AFL-fuzz, CPATiger, Crest-ppc, FShell, Klee, and PRtest, and four tools for software model checking, namely CBMC, CPA-Seq, ESBMC-incr, and ESBMC-kInd, for the task of finding specification violations in a large benchmark suite consisting of 5693 C programs. In order to perform such an evaluation, we have implemented a framework for test-based falsification (TBF) that executes and validates test cases produced by test-case generation tools in order to find errors in programs. The conclusion of our experiments is that software model checkers can (i) find a substantially larger number of bugs (ii) in less time, and (iii) require less adjustment to the input programs. },
    keyword = {CPAchecker,Software Model Checking},
    
    }
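
    The abstract describes a framework (TBF) that executes and validates test cases produced by test-case generation tools in order to find errors in programs. The hypothetical Python sketch below only illustrates that execute-and-validate step; the test-case layout (one input file per test, fed via stdin) and the violation criterion (the run aborts, e.g. on a failed assertion) are assumptions and do not reflect the actual TBF implementation.

    # Hypothetical sketch of the execute-and-validate idea behind test-based
    # falsification: replay each generated test input against the compiled
    # program under test and report the first input whose run aborts.
    # Input format and violation criterion are assumptions, not the TBF tooling.
    import subprocess
    from pathlib import Path

    def falsify(program_binary, testcase_dir, timeout_s=10):
        for testcase in sorted(Path(testcase_dir).glob("*.input")):
            with open(testcase, "rb") as test_input:
                try:
                    result = subprocess.run(
                        [program_binary],
                        stdin=test_input,
                        capture_output=True,
                        timeout=timeout_s,
                    )
                except subprocess.TimeoutExpired:
                    continue  # a hanging run yields no verdict for this test
            if result.returncode < 0:
                # Negative return codes signal termination by a signal,
                # e.g. SIGABRT from a failed assertion.
                return testcase
        return None

    # Hypothetical usage:
    # violating_input = falsify("./program-under-test", "generated-tests")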
    

Theses (PhD, MSc, BSc, Project)

  1. Deyan Ivanov. Interactive Visualization of Verification Results from CPAchecker with D3. Bachelor's Thesis, LMU Munich, Software Systems Lab, 2017. [ PDF ] Keyword(s): CPAchecker, Software Model Checking.
    @misc{IvanovVisualization,
    author = {Deyan Ivanov},
    title = {Interactive Visualization of Verification Results from {{\sc CPAchecker}} with D3},
    howpublished = {Bachelor's Thesis, LMU Munich, Software Systems Lab},
    year = {2017},
    pdf = {https://www.sosy-lab.org/research/bsc/2017.Ivanov.Interactive_Visualization_of_Verification_Results_from_CPAchecker_with_D3.pdf},
    field = {Computer Science},
    keyword = {CPAchecker,Software Model Checking},
    
    }
    

  2. Stefan Löwe. Effective Approaches to Abstraction Refinement for Automatic Software Verification. PhD Thesis, University of Passau, Software Systems Lab, 2017. [ Info ] [ PDF ] Keyword(s): CPAchecker, Software Model Checking.
    @misc{StefanValueDomain,
    author = {Stefan L{\"{o}}we},
    title = {Effective Approaches to Abstraction Refinement for Automatic Software Verification},
    howpublished = {PhD Thesis, University of Passau, Software Systems Lab},
    year = {2017},
    pdf = {https://www.sosy-lab.org/research/phd/2017.Loewe.Effective_Approaches_to_Abstraction_Refinement_for_Automatic_Software_Verification.pdf},
    url = {https://www.sosy-lab.org/research/phd/loewe/},
    keyword = {CPAchecker,Software Model Checking},
    
    }
    

  3. Nils Steinger. Measuring, Visualizing, and Optimizing the Energy Consumption of Computer Clusters. Bachelor's Thesis, University of Passau, Software Systems Lab, 2017. [ Info ] [ PDF ]
    @misc{SteingerMeasuring,
    author = {Nils Steinger},
    title = {Measuring, Visualizing, and Optimizing the Energy Consumption of Computer Clusters},
    howpublished = {Bachelor's Thesis, University of Passau, Software Systems Lab},
    year = {2017},
    pdf = {https://www.sosy-lab.org/research/bsc/2017.Steinger.Measuring,_Visualizing,_and_Optimizing_the_Energy_Consumption_of_Computer_Clusters.pdf},
    url = {https://www.sosy-lab.org/research/bsc/steinger},
    field = {Computer Science},
    
    }
    

  4. Philipp Wendler. Towards Practical Predicate Analysis. PhD Thesis, University of Passau, Software Systems Lab, 2017. [ Info ] [ PDF ] Keyword(s): CPAchecker, Software Model Checking.
    @misc{PhilippPredicateAnalysis,
    author = {Philipp Wendler},
    title = {Towards Practical Predicate Analysis},
    howpublished = {PhD Thesis, University of Passau, Software Systems Lab},
    year = {2017},
    pdf = {https://www.sosy-lab.org/research/phd/2017.Wendler.Towards_Practical_Predicate_Analysis.pdf},
    url = {https://www.sosy-lab.org/research/phd/wendler/},
    keyword = {CPAchecker,Software Model Checking},
    
    }
    

  5. Gernot Zoerneck. Implementing PDR in CPAchecker. Bachelor's Thesis, University of Passau, Software Systems Lab, 2017. Keyword(s): CPAchecker, Software Model Checking.
    @misc{ZoerneckPDR,
    author = {Gernot Zoerneck},
    title = {Implementing PDR in {{\sc CPAchecker}}},
    howpublished = {Bachelor's Thesis, University of Passau, Software Systems Lab},
    year = {2017},
    field = {Computer Science},
    keyword = {CPAchecker,Software Model Checking},
    
    }
    




Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.




Last modified: Mon Jul 16 20:31:36 2018


This document was translated from BibTeX by bibtex2html