Publications by year
2025
-
Algorithm Selection for Word-Level Hardware Model Checking (Student Abstract).
In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI),
2025.
Keyword(s):
Btor2
Funding:
DFG-BRIDGE
PDF
Abstract
We build the first machine-learning-based algorithm selection tool for hardware verification described in the Btor2 format. In addition to hardware verifiers, our tool also selects from a set of software verifiers to solve a given Btor2 instance, enabled by a Btor2-to-C translator. We propose two embeddings for a Btor2 instance, Bag of Keywords and Bit-Width Aggregation. Pairwise classifiers are applied for algorithm selection. Upon evaluation, our tool Btor2-Select solves 30.0% more instances and reduces PAR-2 by 50.2%, compared to the PDR implementation in the HWMCC'20 winner model checker AVR. Measured by the Shapley values, the software verifiers collectively contributed 27.2% to Btor2-Select's performance.
BibTeX Entry
@inproceedings{AAAI25, author = {Zhengyang Lu and Po-Chun Chien and Nian-Ze Lee and Vijay Ganesh}, title = {Algorithm Selection for Word-Level Hardware Model Checking (Student Abstract)}, booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence~(AAAI)}, pages = {}, year = {2025}, pdf = {https://www.sosy-lab.org/research/pub/2025-AAAI.Algorithm_Selection_for_Word-Level_Hardware_Model_Checking_Student_Abstract.pdf}, abstract = {We build the first machine-learning-based algorithm selection tool for hardware verification described in the Btor2 format. In addition to hardware verifiers, our tool also selects from a set of software verifiers to solve a given Btor2 instance, enabled by a Btor2-to-C translator. We propose two embeddings for a Btor2 instance, Bag of Keywords and Bit-Width Aggregation. Pairwise classifiers are applied for algorithm selection. Upon evaluation, our tool Btor2-Select solves 30.0% more instances and reduces PAR-2 by 50.2%, compared to the PDR implementation in the HWMCC'20 winner model checker AVR. Measured by the Shapley values, the software verifiers collectively contributed 27.2% to Btor2-Select's performance.}, keyword = {Btor2}, doinone = {Unpublished: Last checked: 2024-11-18}, funding = {DFG-BRIDGE}, }
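For orientation: PAR-2 in the abstract above presumably refers to the standard penalized-average-runtime score with penalty factor 2; a sketch of that reading (the paper's exact scoring setup may differ):

$$ \text{PAR-2} \;=\; \frac{1}{N}\Bigl(\sum_{i \in \text{solved}} t_i \;+\; \sum_{i \in \text{unsolved}} 2\,T_{\text{limit}}\Bigr) $$

where t_i is the runtime on task i, T_limit is the per-task time limit, and N is the total number of tasks.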
2024
-
MoXIchecker: An Extensible Model Checker for MoXI.
In Proc. VSTTE,
2024.
Springer.
Keyword(s):
Btor2
Funding:
DFG-CONVEY,
DFG-BRIDGE
PDF
Supplement
Artifact(s)
Abstract
MoXI is a new intermediate verification language introduced in 2024 to promote the standardization and open-source implementations for symbolic model checking by extending the SMT-LIB 2 language with constructs to define state-transition systems. The tool suite of MoXI provides a translator from MoXI to Btor2, which is a lower-level intermediate language for hardware verification, and a translation-based model checker, which invokes mature hardware model checkers for Btor2 to analyze the translated verification tasks. The extensibility of such a translation-based model checker is restricted because more complex theories, such as integer or real arithmetics, cannot be precisely expressed with bit-vectors of fixed lengths in Btor2. We present MoXIchecker, the first model checker that solves MoXI verification tasks directly. Instead of translating MoXI to lower-level languages, MoXIchecker uses the solver-agnostic library PySMT for SMT solvers as backend for its verification algorithms. MoXIchecker is extensible because it accommodates verification tasks involving more complex theories, not limited by lower-level languages, facilitates the implementation of new algorithms, and is solver-agnostic by using the API of PySMT. In our evaluation, MoXIchecker uniquely solved tasks that use integer or real arithmetics, and achieved a comparable performance against the translation-based model checker from the MoXI tool suite.
BibTeX Entry
@inproceedings{VSTTE24, author = {Salih Ates and Dirk Beyer and Po-Chun Chien and Nian-Ze Lee}, title = {{MoXIchecker}: {An} Extensible Model Checker for {MoXI}}, booktitle = {Proc.\ VSTTE}, pages = {}, year = {2024}, series = {}, publisher = {Springer}, doi = {}, url = {https://www.sosy-lab.org/research/moxichecker/}, pdf = {https://www.sosy-lab.org/research/pub/2024-VSTTE.MoXIchecker_An_Extensible_Model_Checker_for_MoXI.pdf}, presentation = {}, abstract = {MoXI is a new intermediate verification language introduced in 2024 to promote the standardization and open-source implementations for symbolic model checking by extending the SMT-LIB 2 language with constructs to define state-transition systems. The tool suite of MoXI provides a translator from MoXI to Btor2, which is a lower-level intermediate language for hardware verification, and a translation-based model checker, which invokes mature hardware model checkers for Btor2 to analyze the translated verification tasks. The extensibility of such a translation-based model checker is restricted because more complex theories, such as integer or real arithmetics, cannot be precisely expressed with bit-vectors of fixed lengths in Btor2. We present MoXIchecker, the first model checker that solves MoXI verification tasks directly. Instead of translating MoXI to lower-level languages, MoXIchecker uses the solver-agnostic library PySMT for SMT solvers as backend for its verification algorithms. MoXIchecker is extensible because it accommodates verification tasks involving more complex theories, not limited by lower-level languages, facilitates the implementation of new algorithms, and is solver-agnostic by using the API of PySMT. In our evaluation, MoXIchecker uniquely solved tasks that use integer or real arithmetics, and achieved a comparable performance against the translation-based model checker from the MoXI tool suite.}, keyword = {Btor2}, artifact = {10.5281/zenodo.13895872}, funding = {DFG-CONVEY, DFG-BRIDGE}, } -
BenchCloud: A Platform for Scalable Performance Benchmarking.
In Proc. ASE,
pages 2386-2389,
2024.
ACM.
doi:10.1145/3691620.3695358
Keyword(s):
Benchmarking,
Competition on Software Verification (SV-COMP),
Competition on Software Testing (Test-Comp)
Funding:
DFG-CONVEY,
DFG-COOP
Publisher's Version
PDF
Presentation
Video
Supplement
Artifact(s)
Abstract
Performance evaluation is a crucial method for assessing automated-reasoning tools. Evaluating automated tools requires rigorous benchmarking to accurately measure resource consumption, including time and memory, which are essential for understanding the tools' capabilities. BenchExec, a widely used benchmarking framework, reliably measures resource usage for tools executed locally on a single node. This paper describes BenchCloud, a solution for elastic and scalable job distribution across hundreds of nodes, enabling large-scale experiments on distributed and heterogeneous computing environments. BenchCloud seamlessly integrates with BenchExec, allowing BenchExec to delegate the actual execution to BenchCloud. The system has been employed in several prominent international competitions in automated reasoning, including SMT-COMP, SV-COMP, and Test-Comp, underscoring its importance in rigorous tool evaluation across various research domains. It helps to ensure both internal and external validity of the experimental results. This paper presents an overview of BenchCloud's architecture and highlights its primary use cases in facilitating scalable benchmarking.
Demonstration video: https://youtu.be/aBfQytqPm0U
Running system: https://benchcloud.sosy-lab.org/
BibTeX Entry
@inproceedings{ASE24a, author = {Dirk Beyer and Po-Chun Chien and Marek Jankola}, title = {{BenchCloud}: {A} Platform for Scalable Performance Benchmarking}, booktitle = {Proc.\ ASE}, pages = {2386-2389}, year = {2024}, series = {}, publisher = {ACM}, doi = {10.1145/3691620.3695358}, url = {https://benchcloud.sosy-lab.org/}, pdf = {https://www.sosy-lab.org/research/pub/2024-ASE24.BenchCloud_A_Platform_for_Scalable_Performance_Benchmarking.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2024-10-30_ASE_BenchCloud_A_Platform_for_Scalable_Performance_Benchmarking_Po-Chun.pdf}, abstract = {Performance evaluation is a crucial method for assessing automated-reasoning tools. Evaluating automated tools requires rigorous benchmarking to accurately measure resource consumption, including time and memory, which are essential for understanding the tools' capabilities. BenchExec, a widely used benchmarking framework, reliably measures resource usage for tools executed locally on a single node. This paper describes BenchCloud, a solution for elastic and scalable job distribution across hundreds of nodes, enabling large-scale experiments on distributed and heterogeneous computing environments. BenchCloud seamlessly integrates with BenchExec, allowing BenchExec to delegate the actual execution to BenchCloud. The system has been employed in several prominent international competitions in automated reasoning, including SMT-COMP, SV-COMP, and Test-Comp, underscoring its importance in rigorous tool evaluation across various research domains. It helps to ensure both internal and external validity of the experimental results. This paper presents an overview of BenchCloud's architecture and high- lights its primary use cases in facilitating scalable benchmarking. <br> Demonstration video: <a href="https://youtu.be/aBfQytqPm0U">https://youtu.be/aBfQytqPm0U</a> <br> Running system: <a href="https://benchcloud.sosy-lab.org/">https://benchcloud.sosy-lab.org/</a>}, keyword = {Benchmarking, Competition on Software Verification (SV-COMP), Competition on Software Testing (Test-Comp)}, artifact = {10.5281/zenodo.13742756}, funding = {DFG-CONVEY, DFG-COOP}, video = {https://youtu.be/aBfQytqPm0U}, } -
CPA-Daemon: Mitigating Tool Restarts for Java-Based Verifiers.
In Proceedings of the 22nd International Symposium on Automated Technology for Verification and Analysis (ATVA 2024, Kyoto, Japan, October 21-24),
LNCS,
2024.
Springer.
Keyword(s):
CPAchecker,
Software Model Checking,
Cooperative Verification
Funding:
DFG-CONVEY
PDF
Artifact(s)
BibTeX Entry
@inproceedings{ATVA24, author = {Dirk Beyer and Thomas Lemberger and Henrik Wachowitz}, title = {{CPA-Daemon}: Mitigating Tool Restarts for Java-Based Verifiers}, booktitle = {Proceedings of the 22nd International Symposium on Automated Technology for Verification and Analysis (ATVA~2024, Kyoto, Japan, October 21-24)}, pages = {}, year = {2024}, series = {LNCS~}, publisher = {Springer}, pdf = {https://www.sosy-lab.org/research/pub/2024-ATVA.CPA-Daemon_Mitigating_Tool_Restart_for_Java-Based_Verifiers.pdf}, abstract = {}, keyword = {CPAchecker, Software Model Checking, Cooperative Verification}, artifact = {https://doi.org/10.5281/zenodo.11147333}, doinone = {Unpublished: Last checked: 2024-10-01}, funding = {DFG-CONVEY}, } -
Software Verification with CPAchecker 3.0: Tutorial and User Guide.
In Proceedings of the 26th International Symposium on Formal Methods (FM 2024, Milan, Italy, September 9-13),
LNCS 14934,
pages 543-570,
2024.
Springer.
doi:10.1007/978-3-031-71177-0_30
Keyword(s):
CPAchecker,
Software Model Checking,
Software Testing
Funding:
DFG-COOP,
DFG-CONVEY,
DFG-IDEFIX
Publisher's Version
PDF
Presentation
Supplement
Artifact(s)
Abstract
This tutorial provides an introduction to CPAchecker for users. CPAchecker is a flexible and configurable framework for software verification and testing. The framework provides many abstract domains, such as BDDs, explicit values, intervals, memory graphs, and predicates, and many program-analysis and model-checking algorithms, such as abstract interpretation, bounded model checking, Impact, interpolation-based model checking, k-induction, PDR, predicate abstraction, and symbolic execution. This tutorial presents basic use cases for CPAchecker in formal software verification, focusing on its main verification techniques with their strengths and weaknesses. An extended version also shows further use cases of CPAchecker for test-case generation and witness-based result validation. The envisioned readers are assumed to possess a background in automatic formal verification and program analysis, but prior knowledge of CPAchecker is not required. This tutorial and user guide is based on CPAchecker in version 3.0. This user guide's latest version and other documentation are available at https://cpachecker.sosy-lab.org/doc.php.
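To make the setting concrete, the use cases described in the tutorial operate on verification tasks of roughly the following shape. This is an illustrative, hand-written example in SV-COMP style, not taken from the tutorial itself; reach_error and __VERIFIER_nondet_int are the usual community conventions.

```c
// Minimal reachability-safety task: the call to reach_error() must be
// proven unreachable. The task is safe, because y is a copy of x.
extern void reach_error(void);
extern int __VERIFIER_nondet_int(void);

int main(void) {
  int x = __VERIFIER_nondet_int();  // arbitrary input value
  int y = x;
  if (x != y) {
    reach_error();                  // unreachable
  }
  return 0;
}
```
BibTeX Entry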
@inproceedings{FM24a, author = {Daniel Baier and Dirk Beyer and Po-Chun Chien and Marie-Christine Jakobs and Marek Jankola and Matthias Kettl and Nian-Ze Lee and Thomas Lemberger and Marian Lingsch-Rosenfeld and Henrik Wachowitz and Philipp Wendler}, title = {Software Verification with {CPAchecker} 3.0: {Tutorial} and User Guide}, booktitle = {Proceedings of the 26th International Symposium on Formal Methods (FM~2024, Milan, Italy, September 9-13)}, pages = {543-570}, year = {2024}, series = {LNCS~14934}, publisher = {Springer}, doi = {10.1007/978-3-031-71177-0_30}, url = {https://cpachecker.sosy-lab.org}, pdf = {https://www.sosy-lab.org/research/pub/2024-FM.Software_Verification_with_CPAchecker_3.0_Tutorial_and_User_Guide.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2024-09-10_FM24_CPAchecker_Tutorial.pdf}, abstract = {This tutorial provides an introduction to CPAchecker for users. CPAchecker is a flexible and configurable framework for software verification and testing. The framework provides many abstract domains, such as BDDs, explicit values, intervals, memory graphs, and predicates, and many program-analysis and model-checking algorithms, such as abstract interpretation, bounded model checking, Impact, interpolation-based model checking, <i>k</i>-induction, PDR, predicate abstraction, and symbolic execution. This tutorial presents basic use cases for CPAchecker in formal software verification, focusing on its main verification techniques with their strengths and weaknesses. An extended version also shows further use cases of CPAchecker for test-case generation and witness-based result validation. The envisioned readers are assumed to possess a background in automatic formal verification and program analysis, but prior knowledge of CPAchecker is not required. This tutorial and user guide is based on CPAchecker in version 3.0. This user guide's latest version and other documentation are available at <a href="https://cpachecker.sosy-lab.org/doc.php">https://cpachecker.sosy-lab.org/doc.php</a>.}, keyword = {CPAchecker, Software Model Checking, Software Testing}, annote = {An <a href="https://www.sosy-lab.org/research/bib/All/index.html#TechReport24c">extended version</a> of this article is available on <a href="https://doi.org/10.48550/arXiv.2409.02094">arXiv</a>.}, artifact = {10.5281/zenodo.13612338}, funding = {DFG-COOP, DFG-CONVEY, DFG-IDEFIX}, }Additional Infos
An extended version of this article is available on arXiv.
-
P3: A Dataset of Partial Program Patches.
In Proc. MSR,
2024.
ACM.
doi:10.1145/3643991.3644889
Keyword(s):
Partial Fix,
Dataset,
Mining
Funding:
DFG-IDEFIX
Publisher's Version
PDF
Supplement
Artifact(s)
Abstract
Identifying and fixing bugs in programs remains a challenge and is one of the most time-consuming tasks in software development. But even after a bug is identified, and a fix has been proposed by a developer or tool, it is not uncommon that the fix is incomplete and does not cover all possible inputs that trigger the bug. This can happen quite often and leads to re-opened issues and inefficiencies. In this paper, we introduce P3, a curated dataset composed of incomplete fixes. Each entry in the set contains a series of commits fixing the same underlying issue, where multiple of the intermediate commits are incomplete fixes. These are sourced from real-world open-source C projects. The selection process involves both automated and manual stages. Initially, we employ heuristics to identify potential partial fixes from repositories, subsequently we validate them through meticulous manual inspection. This process ensures the accuracy and reliability of our curated dataset. We envision that the dataset will support researchers while investigating partial fixes in more detail, allowing them to develop new techniques to detect and fix them.
BibTeX Entry
@inproceedings{MSR24, author = {Dirk Beyer and Lars Grunske and Matthias Kettl and Marian Lingsch-Rosenfeld and Moeketsi Raselimo}, title = {P3: A Dataset of Partial Program Patches}, booktitle = {Proc.\ MSR}, pages = {}, year = {2024}, publisher = {ACM}, doi = {10.1145/3643991.3644889}, url = {https://gitlab.com/sosy-lab/research/data/partial-fix-dataset}, pdf = {}, abstract = {Identifying and fixing bugs in programs remains a challenge and is one of the most time-consuming tasks in software development. But even after a bug is identified, and a fix has been proposed by a developer or tool, it is not uncommon that the fix is incomplete and does not cover all possible inputs that trigger the bug. This can happen quite often and leads to re-opened issues and inefficiencies. In this paper, we introduce P3, a curated dataset composed of in- complete fixes. Each entry in the set contains a series of commits fixing the same underlying issue, where multiple of the intermediate commits are incomplete fixes. These are sourced from real-world open-source C projects. The selection process involves both auto- mated and manual stages. Initially, we employ heuristics to identify potential partial fixes from repositories, subsequently we validate them through meticulous manual inspection. This process ensures the accuracy and reliability of our curated dataset. We envision that the dataset will support researchers while investigating par- tial fixes in more detail, allowing them to develop new techniques to detect and fix them.}, keyword = {Partial Fix, Dataset, Mining}, annote = {}, artifact = {10.5281/zenodo.10319627}, funding = {DFG-IDEFIX}, } -
Augmenting Interpolation-Based Model Checking with Auxiliary Invariants.
In Proceedings of the 30th International Symposium on Model Checking Software (SPIN 2024, Luxembourg City, Luxembourg, April 10-11),
LNCS 14624,
pages 227-247,
2024.
Springer.
doi:10.1007/978-3-031-66149-5_13
Keyword(s):
Software Model Checking,
Cooperative Verification,
CPAchecker
Funding:
DFG-CONVEY
Publisher's Version
PDF
Presentation
Supplement
Artifact(s)
Abstract
Software model checking is a challenging problem, and generating relevant invariants is a key factor in proving the safety properties of a program. Program invariants can be obtained by various approaches, including lightweight procedures based on data-flow analysis and intensive techniques using Craig interpolation. Although data-flow analysis runs efficiently, it often produces invariants that are too weak to prove the properties. By contrast, interpolation-based approaches build strong invariants from interpolants, but they might not scale well due to expensive interpolation procedures. Invariants can also be injected into model-checking algorithms to assist the analysis. Invariant injection has been studied for many well-known approaches, including k-induction, predicate abstraction, and symbolic execution. We propose an augmented interpolation-based verification algorithm that injects external invariants into interpolation-based model checking (McMillan, 2003), a hardware model-checking algorithm recently adopted for software verification. The auxiliary invariants help prune unreachable states in Craig interpolants and confine the analysis to the reachable parts of a program. We implemented the proposed technique in the verification framework CPAchecker and evaluated it against mature SMT-based methods in CPAchecker as well as other state-of-the-art software verifiers. We found that injecting invariants reduces the number of interpolation queries needed to prove safety properties and improves the run-time efficiency. Consequently, the proposed invariant-injection approach verified difficult tasks that none of its plain version (i.e., without invariants), the invariant generator, or any compared tools could solve.
BibTeX Entry
@inproceedings{SPIN24c, author = {Dirk Beyer and Po-Chun Chien and Nian-Ze Lee}, title = {Augmenting Interpolation-Based Model Checking with Auxiliary Invariants}, booktitle = {Proceedings of the 30th International Symposium on Model Checking Software (SPIN~2024, Luxembourg City, Luxembourg, April 10-11)}, pages = {227-247}, year = {2024}, series = {LNCS~14624}, publisher = {Springer}, doi = {10.1007/978-3-031-66149-5_13}, url = {https://www.sosy-lab.org/research/imc-df/}, pdf = {https://www.sosy-lab.org/research/pub/2024-SPIN.Augmenting_Interpolation-Based_Model_Checking_with_Auxiliary_Invariants.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2024-04-10_SPIN_Augmenting_IMC_with_Auxiliary_Invariants_Po-Chun.pdf}, abstract = {Software model checking is a challenging problem, and generating relevant invariants is a key factor in proving the safety properties of a program. Program invariants can be obtained by various approaches, including lightweight procedures based on data-flow analysis and intensive techniques using Craig interpolation. Although data-flow analysis runs efficiently, it often produces invariants that are too weak to prove the properties. By contrast, interpolation-based approaches build strong invariants from interpolants, but they might not scale well due to expensive interpolation procedures. Invariants can also be injected into model-checking algorithms to assist the analysis. Invariant injection has been studied for many well-known approaches, including <i>k</i>-induction, predicate abstraction, and symbolic execution. We propose an augmented interpolation-based verification algorithm that injects external invariants into interpolation-based model checking (McMillan, 2003), a hardware model-checking algorithm recently adopted for software verification. The auxiliary invariants help prune unreachable states in Craig interpolants and confine the analysis to the reachable parts of a program. We implemented the proposed technique in the verification framework CPAchecker and evaluated it against mature SMT-based methods in CPAchecker as well as other state-of-the-art software verifiers. We found that injecting invariants reduces the number of interpolation queries needed to prove safety properties and improves the run-time efficiency. Consequently, the proposed invariant-injection approach verified difficult tasks that none of its plain version (i.e., without invariants), the invariant generator, or any compared tools could solve.}, keyword = {Software Model Checking, Cooperative Verification, CPAchecker}, annote = {This article received the "Best Paper Award" at SPIN 2024! An <a href="https://www.sosy-lab.org/research/bib/All/index.html#TechReport24a">extended version</a> of this article is available on <a href="https://doi.org/10.48550/arXiv.2403.07821">arXiv</a>.}, artifact = {10.5281/zenodo.10548594}, funding = {DFG-CONVEY}, }Additional Infos
This article received the "Best Paper Award" at SPIN 2024! An extended version of this article is available on arXiv.
-
Fault Localization on Verification Witnesses.
In Proceedings of the 30th International Symposium on Model Checking Software (SPIN 2024, Luxembourg City, Luxembourg, April 10-11),
LNCS 14624,
pages 205-224,
2024.
Springer.
doi:10.1007/978-3-031-66149-5_12
Keyword(s):
Software Model Checking,
Witness-Based Validation,
CPAchecker
Funding:
DFG-CONVEY,
DFG-IDEFIX,
DFG-COOP
Publisher's Version
PDF
Artifact(s)
Abstract
When verifiers report an alarm, they export a violation witness (exchangeable counterexample) that helps validate the reachability of that alarm. Conventional wisdom says that this violation witness should be very precise: the ideal witness describes a single error path for the validator to check. But we claim that verifiers overshoot and produce large witnesses with information that makes validation unnecessarily difficult. To check our hypothesis, we reduce violation witnesses to that information that automated fault-localization approaches deem relevant for triggering the reported alarm in the program. We perform a large experimental evaluation on the witnesses produced in the International Competition on Software Verification (SV-COMP 2023). It shows that our reduction shrinks the witnesses considerably and enables the confirmation of verification results that were not confirmable before.
BibTeX Entry
@inproceedings{SPIN24b, author = {Dirk Beyer and Matthias Kettl and Thomas Lemberger}, title = {Fault Localization on Verification Witnesses}, booktitle = {Proceedings of the 30th International Symposium on Model Checking Software (SPIN~2024, Luxembourg City, Luxembourg, April 10-11)}, pages = {205-224}, year = {2024}, series = {LNCS~14624}, publisher = {Springer}, doi = {10.1007/978-3-031-66149-5_12}, pdf = {https://sosy-lab.org/research/pub/2024-SPIN.Fault_Localization_on_Verification_Witnesses.pdf}, abstract = {When verifiers report an alarm, they export a violation witness (exchangeable counterexample) that helps validate the reachability of that alarm. Conventional wisdom says that this violation witness should be very precise: the ideal witness describes a single error path for the validator to check. But we claim that verifiers overshoot and produce large witnesses with information that makes validation unnecessarily difficult. To check our hypothesis, we reduce violation witnesses to that information that automated fault-localization approaches deem relevant for triggering the reported alarm in the program. We perform a large experimental evaluation on the witnesses produced in the International Competition on Software Verification (SV-COMP 2023). It shows that our reduction shrinks the witnesses considerably and enables the confirmation of verification results that were not confirmable before.}, keyword = {Software Model Checking, Witness-Based Validation, CPAchecker}, annote = {Nominated for best paper.<br> This work was also presented with a poster at the 46th International Conference on Software Engineering (ICSE 2024, Lisbon, Portugal, April 14-20): <a href="https://sosy-lab.org/research/pst/2024-03-05_ICSE24_Fault_Localization_on_Verification_Witnesses_Poster.pdf">Extended Abstract</a>.}, artifact = {10.5281/zenodo.10794627}, funding = {DFG-CONVEY,DFG-IDEFIX,DFG-COOP}, }Additional Infos
Nominated for best paper.
This work was also presented with a poster at the 46th International Conference on Software Engineering (ICSE 2024, Lisbon, Portugal, April 14-20): Extended Abstract.
-
Software Verification Witnesses 2.0.
In Proceedings of the 30th International Symposium on Model Checking Software (SPIN 2024, Luxembourg City, Luxembourg, April 10-11),
LNCS 14624,
pages 184-203,
2024.
Springer.
doi:10.1007/978-3-031-66149-5_11
Keyword(s):
Software Model Checking,
Cooperative Verification,
Witness-Based Validation,
Witness-Based Validation (main),
CPAchecker
Funding:
DFG-CONVEY,
DFG-IDEFIX
Publisher's Version
PDF
Presentation
Supplement
Artifact(s)
BibTeX Entry
@inproceedings{SPIN24a, author = {Paulína Ayaziová and Dirk Beyer and Marian Lingsch-Rosenfeld and Martin Spiessl and Jan Strejček}, title = {Software Verification Witnesses 2.0}, booktitle = {Proceedings of the 30th International Symposium on Model Checking Software (SPIN~2024, Luxembourg City, Luxembourg, April 10-11)}, pages = {184-203}, year = {2024}, series = {LNCS~14624}, publisher = {Springer}, doi = {10.1007/978-3-031-66149-5_11}, url = {https://gitlab.com/sosy-lab/benchmarking/sv-witnesses/}, pdf = {https://www.sosy-lab.org/research/pub/2024-SPIN.Software_Verification_Witnesses_2.0.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2024-04-11_SPIN24_Software-Verification-Witnesses-2.0.pdf}, abstract = {}, keyword = {Software Model Checking, Cooperative Verification, Witness-Based Validation, Witness-Based Validation (main), CPAchecker}, annote = {}, artifact = {10.5281/zenodo.10826204}, funding = {DFG-CONVEY,DFG-IDEFIX}, } -
CPAchecker 2.3 with Strategy Selection (Competition Contribution).
In Proceedings of the 30th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems
(TACAS 2024, Luxembourg, Luxembourg, April 6-11), part 3,
LNCS 14572,
pages 359-364,
2024.
Springer.
doi:10.1007/978-3-031-57256-2_21
Keyword(s):
Software Model Checking,
Witness-Based Validation,
CPAchecker
Funding:
DFG-CONVEY,
DFG-IDEFIX
Publisher's Version
PDF
Supplement
Artifact(s)
Abstract
CPAchecker is a versatile framework for software verification, rooted in the established concept of configurable program analysis. Compared to the last published system description at SV-COMP 2015, the CPAchecker submission to SV-COMP 2024 incorporates new analyses for reachability safety, memory safety, termination, overflows, and data races. To combine forces of the available analyses in CPAchecker and cover the full spectrum of the diverse program characteristics and specifications in the competition, we use strategy selection to predict a sequential portfolio of analyses that is suitable for a given verification task. The prediction is guided by a set of carefully picked program features. The sequential portfolios are composed based on expert knowledge and consist of bit-precise analyses using k-induction, data-flow analysis, SMT solving, Craig interpolation, lazy abstraction, and block-abstraction memoization. The synergy of various algorithms in CPAchecker enables support for all properties and categories of C programs in SV-COMP 2024 and contributes to its success in many categories. CPAchecker also generates verification witnesses in the new YAML format.
BibTeX Entry
@inproceedings{TACAS24c, author = {Daniel Baier and Dirk Beyer and Po-Chun Chien and Marek Jankola and Matthias Kettl and Nian-Ze Lee and Thomas Lemberger and Marian Lingsch-Rosenfeld and Martin Spiessl and Henrik Wachowitz and Philipp Wendler}, title = {{CPAchecker} 2.3 with Strategy Selection (Competition Contribution)}, booktitle = {Proceedings of the 30th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2024, Luxembourg, Luxembourg, April 6-11), part~3}, pages = {359-364}, year = {2024}, series = {LNCS~14572}, publisher = {Springer}, doi = {10.1007/978-3-031-57256-2_21}, url = {https://cpachecker.sosy-lab.org/}, abstract = {CPAchecker is a versatile framework for software verification, rooted in the established concept of configurable program analysis. Compared to the last published system description at SV-COMP 2015, the CPAchecker submission to SV-COMP 2024 incorporates new analyses for reachability safety, memory safety, termination, overflows, and data races. To combine forces of the available analyses in CPAchecker and cover the full spectrum of the diverse program characteristics and specifications in the competition, we use strategy selection to predict a sequential portfolio of analyses that is suitable for a given verification task. The prediction is guided by a set of carefully picked program features. The sequential portfolios are composed based on expert knowledge and consist of bit-precise analyses using <i>k</i>-induction, data-flow analysis, SMT solving, Craig interpolation, lazy abstraction, and block-abstraction memoization. The synergy of various algorithms in CPAchecker enables support for all properties and categories of C programs in SV-COMP 2024 and contributes to its success in many categories. CPAchecker also generates verification witnesses in the new YAML format.}, keyword = {Software Model Checking, Witness-Based Validation, CPAchecker}, _pdf = {https://www.sosy-lab.org/research/pub/2024-TACAS.CPAchecker_2.3_with_Strategy_Selection_Competition_Contribution.pdf}, artifact = {10.5281/zenodo.10203297}, funding = {DFG-CONVEY, DFG-IDEFIX}, } -
State of the Art in Software Verification and Witness Validation: SV-COMP 2024.
In B. Finkbeiner and
L. Kovács, editors,
Proceedings of the 30th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems
(TACAS 2024, Luxembourg, Luxembourg, April 6-11), part 3,
LNCS 14572,
pages 299-329,
2024.
Springer.
doi:10.1007/978-3-031-57256-2_15
Keyword(s):
Competition on Software Verification (SV-COMP),
Competition on Software Verification (SV-COMP Report),
Software Model Checking
Funding:
DFG-CONVEY
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{TACAS24b, author = {Dirk Beyer}, title = {State of the Art in Software Verification and Witness Validation: {SV-COMP 2024}}, booktitle = {Proceedings of the 30th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2024, Luxembourg, Luxembourg, April 6-11), part~3}, editor = {B.~Finkbeiner and L.~Kovács}, pages = {299-329}, year = {2024}, series = {LNCS~14572}, publisher = {Springer}, doi = {10.1007/978-3-031-57256-2_15}, sha256 = {}, url = {https://sv-comp.sosy-lab.org/2024/}, keyword = {Competition on Software Verification (SV-COMP),Competition on Software Verification (SV-COMP Report),Software Model Checking}, _pdf = {https://www.sosy-lab.org/research/pub/2024-TACAS.State_of_the_Art_in_Software_Verification_and_Witness_Validation_SV-COMP_2024.pdf}, funding = {DFG-CONVEY}, } -
Btor2-Cert: A Certifying Hardware-Verification Framework Using Software Analyzers.
In Proc. TACAS (3),
LNCS 14572,
pages 129-149,
2024.
Springer.
doi:10.1007/978-3-031-57256-2_7
Keyword(s):
Software Model Checking,
Witness-Based Validation,
Cooperative Verification,
Btor2
Funding:
DFG-CONVEY
Publisher's Version
PDF
Supplement
Artifact(s)
Abstract
Formal verification is essential but challenging: Even the best verifiers may produce wrong verification verdicts. Certifying verifiers enhance the confidence in verification results by generating a witness for other tools to validate the verdict independently. Recently, translating the hardware-modeling language Btor2 to software, such as the programming language C or LLVM intermediate representation, has been actively studied and facilitated verifying hardware designs by software analyzers. However, it remained unknown whether witnesses produced by software verifiers contain helpful information about the original circuits and how such information can aid hardware analysis. We propose a certifying and validating framework Btor2-Cert to verify safety properties of Btor2 circuits, combining Btor2-to-C translation, software verifiers, and a new witness validator Btor2-Val, to answer the above open questions. Btor2-Cert translates a software violation witness to a Btor2 violation witness; As the Btor2 language lacks a format for correctness witnesses, we encode invariants in software correctness witnesses as Btor2 circuits. The validator Btor2-Val checks violation witnesses by circuit simulation and correctness witnesses by validation via verification. In our evaluation, Btor2-Cert successfully utilized software witnesses to improve quality assurance of hardware. By invoking the software verifier CBMC on translated programs, it uniquely solved, with confirmed witnesses, 8% of the unsafe tasks for which the hardware verifier ABC failed to detect bugs.
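The Btor2-to-C direction mentioned in the abstract can be pictured roughly as follows. This is a hand-written sketch of the idea, not the output of Btor2-Cert or any particular translator; reach_error and __VERIFIER_nondet_uchar are the usual SV-COMP conventions.

```c
// An 8-bit register with init value 0, a guarded next-state function,
// and the bad-state property "counter == 200", rendered as a C
// reachability task. The task is safe: counter never exceeds 100.
extern void reach_error(void);
extern unsigned char __VERIFIER_nondet_uchar(void);

int main(void) {
  unsigned char counter = 0;                       // init
  while (1) {
    unsigned char in = __VERIFIER_nondet_uchar();  // primary input
    if (counter == 200) {                          // bad-state property
      reach_error();
    }
    if (counter < 100) {                           // next-state function
      counter = (unsigned char) (counter + (in & 1));
    }
  }
  return 0;
}
```

A verdict for such a program can then be mapped back to the original circuit, which is the witness-translation step that Btor2-Cert adds on top of the translation.
BibTeX Entry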
@inproceedings{TACAS24a, author = {Zsófia Ádám and Dirk Beyer and Po-Chun Chien and Nian-Ze Lee and Nils Sirrenberg}, title = {{Btor2-Cert}: {A} Certifying Hardware-Verification Framework Using Software Analyzers}, booktitle = {Proc.\ TACAS~(3)}, pages = {129-149}, year = {2024}, series = {LNCS~14572}, publisher = {Springer}, doi = {10.1007/978-3-031-57256-2_7}, url = {https://www.sosy-lab.org/research/btor2-cert/}, abstract = {Formal verification is essential but challenging: Even the best verifiers may produce wrong verification verdicts. Certifying verifiers enhance the confidence in verification results by generating a witness for other tools to validate the verdict independently. Recently, translating the hardware-modeling language Btor2 to software, such as the programming language C or LLVM intermediate representation, has been actively studied and facilitated verifying hardware designs by software analyzers. However, it remained unknown whether witnesses produced by software verifiers contain helpful information about the original circuits and how such information can aid hardware analysis. We propose a certifying and validating framework Btor2-Cert to verify safety properties of Btor2 circuits, combining Btor2-to-C translation, software verifiers, and a new witness validator Btor2-Val, to answer the above open questions. Btor2-Cert translates a software violation witness to a Btor2 violation witness; As the Btor2 language lacks a format for correctness witnesses, we encode invariants in software correctness witnesses as Btor2 circuits. The validator Btor2-Val checks violation witnesses by circuit simulation and correctness witnesses by validation via verification. In our evaluation, Btor2-Cert successfully utilized software witnesses to improve quality assurance of hardware. By invoking the software verifier CBMC on translated programs, it uniquely solved, with confirmed witnesses, 8% of the unsafe tasks for which the hardware verifier ABC failed to detect bugs.}, keyword = {Software Model Checking, Witness-Based Validation, Cooperative Verification, Btor2}, _pdf = {https://www.sosy-lab.org/research/pub/2024-TACAS.Btor2-Cert_A_Certifying_Hardware-Verification_Framework_Using_Software_Analyzers.pdf}, annote = {The reproduction package of this article received the "Distinguished Artifact Award" at TACAS 2024!}, artifact = {10.5281/zenodo.10548597}, funding = {DFG-CONVEY}, }Additional Infos
The reproduction package of this article received the "Distinguished Artifact Award" at TACAS 2024!
-
Refining CEGAR-based Test-Case Generation with Feasibility Annotations.
In Proc. TAP,
LNCS,
2024.
Springer.
Keyword(s):
Ultimate Automizer,
Software Testing
Funding:
DFG-ReVeriX
Artifact(s)
BibTeX Entry
@inproceedings{UTestGen-Reuse, author = {Max Barth and Marie{-}Christine Jakobs}, title = {Refining CEGAR-based Test-Case Generation with Feasibility Annotations}, booktitle = {Proc.\ {TAP}}, pages = {}, year = {2024}, series = {LNCS~}, publisher = {Springer}, url = {}, pdf = {}, keyword = {Ultimate Automizer, Software Testing}, annote = {}, artifact = {10.5281/zenodo.11641893}, doinone = {Unpublished: Last checked: 2024-07-08}, funding = {DFG-ReVeriX}, } -
Test-Case Generation with Automata-based Software Model Checking.
In Proc. SPIN,
LNCS,
2024.
Springer.
Keyword(s):
Ultimate Automizer,
Software Testing
Artifact(s)
BibTeX Entry
@inproceedings{UTestGen, author = {Max Barth and Marie{-}Christine Jakobs}, title = {Test-Case Generation with Automata-based Software Model Checking}, booktitle = {Proc.\ {SPIN}}, pages = {}, year = {2024}, series = {LNCS~}, publisher = {Springer}, url = {}, pdf = {}, keyword = {Ultimate Automizer, Software Testing}, annote = {}, artifact = {10.5281/zenodo.10574234}, doinone = {Unpublished: Last checked: 2024-07-08}, funding = {}, } -
Ultimate TestGen: Test-Case Generation with Automata-based Software Model Checking (Competition Contribution).
In Proc. FASE ,
LNCS 14573,
pages 326-330,
2024.
Springer.
doi:10.1007/978-3-031-57259-3_20
Keyword(s):
Ultimate Automizer,
Software Testing
Publisher's Version
PDF
Artifact(s)
BibTeX Entry
@inproceedings{UTestGen-Competition, author = {Max Barth and Daniel Dietsch and Matthias Heizmann and Marie{-}Christine Jakobs}, title = {Ultimate TestGen: Test-Case Generation with Automata-based Software Model Checking (Competition Contribution)}, booktitle = {Proc.\ {FASE}}, pages = {326-330}, year = {2024}, series = {LNCS~14573}, publisher = {Springer}, doi = {10.1007/978-3-031-57259-3_20}, url = {}, pdf = {}, keyword = {Ultimate Automizer, Software Testing}, annote = {}, artifact = {10.5281/zenodo.10071568}, funding = {}, } -
Ranged Program Analysis: A Parallel Divide-and-Conquer Approach for Software Verification.
In Software Engineering 2024, Fachtagung des GI-Fachbereichs Softwaretechnik,
Linz, Austria, February 26 - March 1, 2024,
LNI P-343,
pages 157-158,
2024.
GI.
doi:10.18420/SW2024_52
Keyword(s):
Ranged Program Analysis,
Cooperative Verification,
Software Model Checking,
CPAchecker
Publisher's Version
BibTeX Entry
@inproceedings{JakobsSE2024, author = {Jan Haltermann and Marie{-}Christine Jakobs and Cedric Richter and Heike Wehrheim}, title = {Ranged Program Analysis: A Parallel Divide-and-Conquer Approach for Software Verification}, booktitle = {Software Engineering 2024, Fachtagung des GI-Fachbereichs Softwaretechnik, Linz, Austria, February 26 - March 1, 2024}, pages = {157-158}, year = {2024}, series = {{LNI}~{P-343}}, publisher = {GI}, doi = {10.18420/SW2024_52}, pdf = {}, keyword = {Ranged Program Analysis, Cooperative Verification, Software Model Checking, CPAchecker}, annote = {}, artifact = {}, funding = {}, } -
Regression-Test History Data for Flaky Test Research.
In Proc. 1st International Workshop on Flaky Tests,
pages 3–4,
2024.
ACM.
doi:10.1145/3643656.3643901
Keyword(s):
Software Testing,
Flaky Tests
Publisher's Version
PDF
Presentation
Artifact(s)
Abstract
Due to their random nature, flaky test failures are difficult to study. Without having observed a test to both pass and fail under the same setup, it is unknown whether a test is flaky and what its failure rate is. Thus, flaky-test research has greatly benefited from data records of previous studies, which provide evidence for flaky test failures and give a rough indication of the failure rates to expect. For assessing the impact of the studied flaky tests on developers' work, it is important to also know how flaky test failures manifest over a regression test history, i.e., under continuous changes to test code or code under test. While existing datasets on flaky tests are mostly based on re-runs on an invariant code base, the actual effects of flaky tests on development can only be assessed across the commits in an evolving commit history, against which (potentially flaky) regression tests are executed. In our presentation, we outline approaches to bridge this gap and report on our experiences following one of them. As a result of this work, we contribute a dataset of flaky test failures across a simulated regression test history.
BibTeX Entry
@inproceedings{RegressionTestData-FTW24, author = {Philipp Wendler and Stefan Winter}, title = {Regression-Test History Data for Flaky Test Research}, booktitle = {Proc.\ 1st International Workshop on Flaky Tests}, pages = {3–4}, year = {2024}, publisher = {ACM}, doi = {10.1145/3643656.3643901}, pdf = {https://www.sosy-lab.org/research/pub/2024-FTW24.Regression-Test_History_Data_for_Flaky_Test_Research.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2024-04-14_FTW24_Regression-Test_History_Data_for_Flaky_Test_Research_Stefan.html}, abstract = {Due to their random nature, flaky test failures are difficult to study. Without having observed a test to both pass and fail under the same setup, it is unknown whether a test is flaky and what its failure rate is. Thus, flaky-test research has greatly benefited from data records of previous studies, which provide evidence for flaky test failures and give a rough indication of the failure rates to expect. For assessing the impact of the studied flaky tests on developers' work, it is important to also know how flaky test failures manifest over a regression test history, i.e., under continuous changes to test code or code under test. While existing datasets on flaky tests are mostly based on re-runs on an invariant code base, the actual effects of flaky tests on development can only be assessed across the commits in an evolving commit history, against which (potentially flaky) regression tests are executed. In our presentation, we outline approaches to bridge this gap and report on our experiences following one of them. As a result of this work, we contribute a dataset of flaky test failures across a simulated regression test history.}, keyword = {Software Testing, Flaky Tests}, artifact = {10.5281/zenodo.10639030}, keywords = {Software Testing, Dataset}, } -
CoVeriTeam GUI: A No-Code Approach to Cooperative Software Verification.
In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering (ASE 2024, Sacramento, CA, USA, October 27-November 1),
pages 2419-2422,
2024.
doi:10.1145/3691620.3695366
Keyword(s):
Software Model Checking,
Cooperative Verification
Funding:
DFG-CONVEY
Publisher's Version
PDF
Artifact(s)
Abstract
We present CoVeriTeam GUI, a No-Code web frontend to compose new software-verification workflows from existing analysis techniques. Verification approaches stopped relying on single techniques years ago, and instead combine selections that complement each other well. So far, such combinations were-under high implementation and maintenance cost-glued together with proprietary code. Now, CoVeriTeam GUI enables users to build new verification workflows without programming. Verification techniques can be combined through various composition operators in a drag-and-drop fashion directly in the browser, and an integration with a remote service allows to execute the built workflows with the click of a button. CoVeriTeam GUI is available open source under Apache 2.0: https://gitlab.com/sosy-lab/software/coveriteam-gui
Demonstration video: https://youtu.be/oZoOARuIOuA
BibTeX Entry
@inproceedings{CoVeriTeamGUI-ASE24, author = {Thomas Lemberger and Henrik Wachowitz}, title = {CoVeriTeam GUI: A No-Code Approach to Cooperative Software Verification}, booktitle = {Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering (ASE 2024, Sacramento, CA, USA, October 27-November 1)}, pages = {2419-2422}, year = {2024}, doi = {10.1145/3691620.3695366}, pdf = {https://www.sosy-lab.org/research/pub/2024-ASE24.CoVeriTeam_GUI_A_No-Code_Approach_to_Cooperative_Software_Verification.pdf}, presentation = {}, abstract = {We present CoVeriTeam GUI, a No-Code web frontend to compose new software-verification workflows from existing analysis techniques. Verification approaches stopped relying on single techniques years ago, and instead combine selections that complement each other well. So far, such combinations were---under high implementation and maintenance cost---glued together with proprietary code. Now, CoVeriTeam GUI enables users to build new verification workflows without programming. Verification techniques can be combined through various composition operators in a drag-and-drop fashion directly in the browser, and an integration with a remote service allows to execute the built workflows with the click of a button. CoVeriTeam GUI is available open source under Apache 2.0: <a href="https://gitlab.com/sosy-lab/software/coveriteam-gui">https://gitlab.com/sosy-lab/software/coveriteam-gui</a><br> Demonstration video: <a href="https://youtu.be/oZoOARuIOuA">https://youtu.be/oZoOARuIOuA</a>}, keyword = {Software Model Checking, Cooperative Verification}, artifact = {10.5281/zenodo.13757771}, funding = {DFG-CONVEY}, } -
CPV: A Circuit-Based Program Verifier (Competition Contribution).
In Proc. TACAS,
LNCS 14572,
pages 365-370,
2024.
Springer.
doi:10.1007/978-3-031-57256-2_22
Keyword(s):
Software Model Checking,
Cooperative Verification,
Btor2
Funding:
DFG-CONVEY
Publisher's Version
PDF
Presentation
Supplement
Artifact(s)
Abstract
We submit to SV-COMP 2024 CPV, a circuit-based software verifier for C programs. CPV utilizes sequential circuits as its intermediate representation and invokes hardware model checkers to analyze the reachability safety of C programs. As the frontend, it uses Kratos2, a recently proposed verification tool, to translate a C program to a sequential circuit. As the backend, state-of-the-art hardware model checkers ABC and AVR are employed to verify the translated circuits. We configure the hardware model checkers to run various analyses, including IC3/PDR, interpolation-based model checking, and k-induction. Information discovered by hardware model checkers is represented as verification witnesses. In the competition, CPV achieved comparable performance against participants whose intermediate representations are based on control-flow graphs. In the category ReachSafety, it outperformed several mature software verifiers as a first-year participant. CPV manifests the feasibility of sequential circuits as an alternative intermediate representation for program analysis and enables head-to-head algorithmic comparison between hardware and software verification.
BibTeX Entry
@inproceedings{CPV-TACAS24, author = {Po-Chun Chien and Nian-Ze Lee}, title = {CPV: A Circuit-Based Program Verifier (Competition Contribution)}, booktitle = {Proc.\ TACAS}, pages = {365-370}, year = {2024}, series = {LNCS~14572}, publisher = {Springer}, doi = {10.1007/978-3-031-57256-2_22}, url = {https://gitlab.com/sosy-lab/software/cpv}, pdf = {https://www.sosy-lab.org/research/pub/2024-TACAS.CPV_A_Circuit-Based_Program_Verifier_Competition_Contribution.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2024-04-08_SVCOMP_CPV_A_Circuit-Based_Program_Verifier_Po-Chun.pdf}, abstract = {We submit to SV-COMP 2024 CPV, a circuit-based software verifier for C programs. CPV utilizes sequential circuits as its intermediate representation and invokes hardware model checkers to analyze the reachability safety of C programs. As the frontend, it uses Kratos2, a recently proposed verification tool, to translate a C program to a sequential circuit. As the backend, state-of-the-art hardware model checkers ABC and AVR are employed to verify the translated circuits. We configure the hardware model checkers to run various analyses, including IC3/PDR, interpolation-based model checking, and <i>k</i>-induction. Information discovered by hardware model checkers is represented as verification witnesses. In the competition, CPV achieved comparable performance against participants whose intermediate representations are based on control-flow graphs. In the category <i>ReachSafety</i>, it outperformed several mature software verifiers as a first-year participant. CPV manifests the feasibility of sequential circuits as an alternative intermediate representation for program analysis and enables head-to-head algorithmic comparison between hardware and software verification.}, keyword = {Software Model Checking, Cooperative Verification, Btor2}, artifact = {10.5281/zenodo.10203472}, funding = {DFG-CONVEY}, } -
Tighter Construction of Tight Büchi Automata.
In Proc. FoSSaCS,
2024.
Springer.
Keyword(s):
Büchi Automata
PDF
Abstract
Tight automata are useful in providing the shortest counterexample in LTL model checking and also in constructing a maximally satisfying strategy in LTL strategy synthesis. There exists a translation of LTL formulas to tight Büchi automata and several translations of Büchi automata to equivalent tight Büchi automata. This paper presents another translation of Büchi automata to equivalent tight Büchi automata. The translation is designed to produce smaller tight automata and it asymptotically improves the best-known upper bound on the size of a tight Büchi automaton equivalent to a given Büchi automaton. We also provide a lower bound, which is more precise than the previously known one. Further, we show that automata reduction methods based on quotienting preserve tightness. Our translation was implemented in a tool called Tightener. Experimental evaluation shows that Tightener usually produces smaller tight automata than the translation from LTL to tight automata known as CGH.
BibTeX Entry
@inproceedings{JankolaFOSSACS2024, author = {Marek Jankola and Jan Strejček}, title = {Tighter Construction of Tight Büchi Automata}, year = {2024}, publisher = {Springer}, pdf = {https://www.sosy-lab.org/research/pub/2024-FOSSACS.Tighter_Construction_of_Tight_Buchi_Automata.pdf}, abstract = {Tight automata are useful in providing the shortest coun- terexample in LTL model checking and also in constructing a maximally satisfying strategy in LTL strategy synthesis. There exists a translation of LTL formulas to tight Büchi automata and several translations of Büchi automata to equivalent tight Büchi automata. This paper presents an- other translation of Büchi automata to equivalent tight Büchi automata. The translation is designed to produce smaller tight automata and it asymptotically improves the best-known upper bound on the size of a tight Büchi automaton equivalent to a given Büchi automaton. We also provide a lower bound, which is more precise than the previously known one. Further, we show that automata reduction methods based on quo- tienting preserve tightness. Our translation was implemented in a tool called Tightener. Experimental evaluation shows that Tightener usually produces smaller tight automata than the translation from LTL to tight automata known as CGH.}, keyword = {Büchi Automata}, }
2023
-
CPA-DF: A Tool for Configurable Interval Analysis to Boost Program Verification.
In Proc. ASE,
pages 2050-2053,
2023.
IEEE.
doi:10.1109/ASE56229.2023.00213
Keyword(s):
Software Model Checking,
Cooperative Verification,
CPAchecker
Funding:
DFG-CONVEY
Publisher's Version
PDF
Presentation
Video
Supplement
Artifact(s)
Abstract
Software verification is challenging, and auxiliary program invariants are used to improve the effectiveness of verification approaches. For instance, the k-induction implementation in CPAchecker, an award-winning framework for program analysis, uses invariants produced by a configurable data-flow analysis to strengthen induction hypotheses. This invariant generator, CPA-DF, uses arithmetic expressions over intervals as its abstract domain and is able to prove some safe verification tasks alone. After extensively evaluating CPA-DF on SV-Benchmarks, the largest publicly available suite of C safety-verification tasks, we discover that its potential as a stand-alone analysis or a sub-analysis in a parallel portfolio for combined verification approaches has been significantly underestimated: (1) As a stand-alone analysis, CPA-DF finds almost as many proofs as the plain k-induction implementation without auxiliary invariants. (2) As a sub-analysis running in parallel to the plain k-induction implementation, CPA-DF boosts the portfolio verifier to solve a comparable amount of tasks as the heavily-optimized k-induction implementation with invariant injection. Our detailed analysis reveals that dynamic precision adjustment is crucial to the efficiency and effectiveness of CPA-DF. To generalize our results beyond CPAchecker, we use CoVeriTeam, a platform for cooperative verification, to compose three portfolio verifiers that execute CPA-DF and three other software verifiers in parallel, respectively. Surprisingly, running CPA-DF merely in parallel to these state-of-the-art tools further boosts the number of correct results up to more than 20%.
Demonstration video: https://youtu.be/l7UG-vhTL_4
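As a toy illustration of why such interval facts help k-induction (a hand-written example; the program, the variable names, and the invariant are ours, not from the paper):

```c
// The assertion is not 1-inductive on its own: the induction step may
// start from an unreachable state such as x = 150, y = 0. Conjoining an
// interval fact like x <= 100, which an interval domain can express, to
// the induction hypothesis rules such states out and lets the step succeed.
#include <assert.h>

extern int __VERIFIER_nondet_int(void);

int main(void) {
  unsigned int x = 0, y = 0;
  while (__VERIFIER_nondet_int()) {
    if (x < 100) {
      x = x + 1;
    }
    y = x;
  }
  assert(y <= 100);  // holds: x, and hence y, stays within [0, 100]
  return 0;
}
```
BibTeX Entry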
@inproceedings{ASE23a, author = {Dirk Beyer and Po-Chun Chien and Nian-Ze Lee}, title = {{CPA-DF}: {A} Tool for Configurable Interval Analysis to Boost Program Verification}, booktitle = {Proc.\ ASE}, pages = {2050-2053}, year = {2023}, series = {}, publisher = {IEEE}, doi = {10.1109/ASE56229.2023.00213}, url = {https://www.sosy-lab.org/research/cpa-df/}, pdf = {https://www.sosy-lab.org/research/pub/2023-ASE.CPA-DF_A_Tool_for_Configurable_Interval_Analysis_to_Boost_Program_Verification.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2023-09-13_ASE_CPA-DF_Po-Chun.pdf}, abstract = {Software verification is challenging, and auxiliary program invariants are used to improve the effectiveness of verification approaches. For instance, the <i>k</i>-induction implementation in <a href="https://cpachecker.sosy-lab.org/">CPAchecker</a>, an award-winning framework for program analysis, uses invariants produced by a configurable data-flow analysis to strengthen induction hypotheses. This invariant generator, CPA-DF, uses arithmetic expressions over intervals as its abstract domain and is able to prove some safe verification tasks alone. After extensively evaluating CPA-DF on <a href="https://gitlab.com/sosy-lab/benchmarking/sv-benchmarks">SV-Benchmarks</a>, the largest publicly available suite of C safety-verification tasks, we discover that its potential as a stand-alone analysis or a sub-analysis in a parallel portfolio for combined verification approaches has been significantly underestimated: (1) As a stand-alone analysis, CPA-DF finds almost as many proofs as the plain <i>k</i>-induction implementation without auxiliary invariants. (2) As a sub-analysis running in parallel to the plain <i>k</i>-induction implementation, CPA-DF boosts the portfolio verifier to solve a comparable amount of tasks as the heavily-optimized <i>k</i>-induction implementation with invariant injection. Our detailed analysis reveals that dynamic precision adjustment is crucial to the efficiency and effectiveness of CPA-DF. To generalize our results beyond CPAchecker, we use <a href="https://gitlab.com/sosy-lab/software/coveriteam">CoVeriTeam</a>, a platform for cooperative verification, to compose three portfolio verifiers that execute CPA-DF and three other software verifiers in parallel, respectively. Surprisingly, running CPA-DF merely in parallel to these state-of-the-art tools further boosts the number of correct results up to more than 20%. <br> Demonstration video: <a href="https://youtu.be/l7UG-vhTL_4">https://youtu.be/l7UG-vhTL_4</a>}, keyword = {Software Model Checking, Cooperative Verification, CPAchecker}, artifact = {10.5281/zenodo.8245821}, funding = {DFG-CONVEY}, video = {https://youtu.be/l7UG-vhTL_4}, } -
LIV: Invariant Validation using Straight-Line Programs.
In Proc. ASE,
pages 2074-2077,
2023.
IEEE.
doi:10.1109/ASE56229.2023.00214
Keyword(s):
Software Model Checking,
Witness-Based Validation
Funding:
DFG-CONVEY
Publisher's Version
PDF
Video
Supplement
Artifact(s)
Abstract
Validation of correctness proofs is an established procedure in software verification. While there are steady advances when it comes to verification of more and more complex software systems, it becomes increasingly hard to determine which information is actually useful for validation of the correctness proof. Usually, the central piece that verifiers struggle to come up with are good loop invariants. While a proof using inductive invariants is easy to validate, not all invariants used by verifiers necessarily are inductive. In order to alleviate this problem, we propose LIV, an approach that makes it easy to check if the invariant information provided by the verifier is sufficient to establish an inductive proof. This is done by emulating a Hoare-style proof, splitting the program into Hoare triples and converting these into verification tasks that can themselves be efficiently verified by an off-the-shelf verifier. In case the validation fails, useful information about the failure reason can be extracted from the overview of which triples could be established and which were refuted. We show that our approach works by evaluating it on a state-of-the-art benchmark set.BibTeX Entry
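To illustrate the splitting idea, the following editor's sketch (the encoding and names are hypothetical, not LIV's actual output) reduces a loop with a candidate invariant to three straight-line C tasks; each task is loop-free and can be checked by an off-the-shelf verifier. __VERIFIER_nondet_uint follows the SV-COMP convention for nondeterministic inputs.

#include <assert.h>
extern unsigned int __VERIFIER_nondet_uint(void);  /* SV-COMP-style nondeterministic input */

/* Annotated original program:
     unsigned int i = 0, s = 0;
     // candidate invariant I: s == 2*i && i <= n
     while (i < n) { s += 2; i++; }
     assert(s == 2*n);                              */

void task_init(void) {       /* (1) the initialization establishes I */
  unsigned int n = __VERIFIER_nondet_uint();
  unsigned int i = 0, s = 0;
  assert(s == 2*i && i <= n);
}

void task_preserve(void) {   /* (2) an arbitrary iteration preserves I */
  unsigned int n = __VERIFIER_nondet_uint();
  unsigned int i = __VERIFIER_nondet_uint();
  unsigned int s = __VERIFIER_nondet_uint();
  if (!(s == 2*i && i <= n)) return;  /* assume I */
  if (!(i < n)) return;               /* assume the loop guard */
  s += 2; i++;                        /* loop body */
  assert(s == 2*i && i <= n);
}

void task_exit(void) {       /* (3) I and the negated guard imply the postcondition */
  unsigned int n = __VERIFIER_nondet_uint();
  unsigned int i = __VERIFIER_nondet_uint();
  unsigned int s = __VERIFIER_nondet_uint();
  if (!(s == 2*i && i <= n)) return;  /* assume I */
  if (i < n) return;                  /* the loop has terminated */
  assert(s == 2*n);
}

If one of the three tasks is refuted, the failing triple indicates whether the invariant does not hold on entry, is not preserved by the body, or is too weak for the postcondition.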
@inproceedings{ASE23b, author = {Dirk Beyer and Martin Spiessl}, title = {{LIV}: {Invariant} Validation using Straight-Line Programs}, booktitle = {Proc.\ ASE}, pages = {2074-2077}, year = {2023}, series = {}, publisher = {IEEE}, doi = {10.1109/ASE56229.2023.00214}, url = {https://www.sosy-lab.org/research/liv}, pdf = {https://www.sosy-lab.org/research/pub/2023-ASE.LIV_Loop-Invariant_Validation_using_Straight-Line_Programs.pdf}, abstract = {Validation of correctness proofs is an established procedure in software verification. While there are steady advances when it comes to verification of more and more complex software systems, it becomes increasingly hard to determine which information is actually useful for validation of the correctness proof. Usually, the central piece that verifiers struggle to come up with are good loop invariants. While a proof using inductive invariants is easy to validate, not all invariants used by verifiers necessarily are inductive. In order to alleviate this problem, we propose LIV, an approach that makes it easy to check if the invariant information provided by the verifier is sufficient to establish an inductive proof. This is done by emulating a Hoare-style proof, splitting the program into Hoare triples and converting these into verification tasks that can themselves be efficiently verified by an off-the-shelf verifier. In case the validation fails, useful information about the failure reason can be extracted from the overview of which triples could be established and which were refuted. We show that our approach works by evaluating it on a state-of-the-art benchmark set.}, keyword = {Software Model Checking, Witness-Based Validation}, artifact = {10.5281/zenodo.8289101}, funding = {DFG-CONVEY}, video = {https://youtu.be/mZhoGAa08Rk}, } -
CEGAR-PT: A Tool for Abstraction by Program Transformation.
In Proc. ASE,
pages 2078-2081,
2023.
IEEE.
doi:10.1109/ASE56229.2023.00215
Keyword(s):
Software Model Checking
Funding:
DFG-CONVEY
Publisher's Version
PDF
Video
Supplement
Artifact(s)
Abstract
Abstraction is a key technology for proving the correctness of computer programs. There are many approaches available, but unfortunately, the various techniques are difficult to combine and the successful techniques have to be re-implemented again and again.
We address this problem by using the tool CEGAR-PT, which views abstraction as program transformation and integrates different verification components off-the-shelf. The idea is to use existing components without having to change their implementation, while still adjusting the precision of the abstraction using the successful CEGAR approach. The approach is largely general: it only restricts the abstraction to transform, given a precision that defines the level of abstraction, one program into another program. The abstraction by program transformation can over-approximate the data flow (e.g., havoc some variables, use more abstract types) or the control flow (e.g., loop abstraction, slicing).BibTeX Entry
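As a small editor-made sketch of abstraction by program transformation (not CEGAR-PT's actual output or precision format), a loop can be over-approximated at the source level by havocking the variables it modifies and keeping only the facts that the current precision tracks; __VERIFIER_nondet_uint follows the SV-COMP convention for nondeterministic inputs.

#include <assert.h>
extern unsigned int __VERIFIER_nondet_uint(void);  /* SV-COMP-style nondeterministic input */

void original(void) {
  unsigned int x = 0;
  for (unsigned int i = 0; i < 1000000; i++) {
    x += 2;                       /* expensive to unroll or to analyze precisely */
  }
  assert(x % 2 == 0);
}

void abstracted(void) {
  unsigned int x = 0;
  x = __VERIFIER_nondet_uint();   /* havoc: the loop's exact effect on x is discarded */
  if (x % 2 != 0) return;         /* fact kept by the current precision: x stays even */
  assert(x % 2 == 0);             /* every original behavior is still covered,
                                     so a proof on this program carries over */
}

If the retained facts are too coarse to prove the property, the spurious counterexample drives a refinement of the precision and the program is transformed again, following the usual CEGAR loop.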
@inproceedings{ASE23c, author = {Dirk Beyer and Marian Lingsch-Rosenfeld and Martin Spiessl}, title = {{CEGAR-PT}: {A} Tool for Abstraction by Program Transformation}, booktitle = {Proc.\ ASE}, pages = {2078-2081}, year = {2023}, series = {}, publisher = {IEEE}, doi = {10.1109/ASE56229.2023.00215}, url = {https://www.sosy-lab.org/research/cegar-pt}, pdf = {https://www.sosy-lab.org/research/pub/2023-ASE.CEGAR-PT_A_Tool_for_Abstraction_by_Program_Transformation.pdf}, abstract = {Abstraction is a key technology for proving the correctness of computer programs. There are many approaches available, but unfortunately, the various techniques are difficult to combine and the successful techniques have to be re-implemented again and again. <br> We address this problem by using the tool CEGAR-PT, which views abstraction as program transformation and integrates different verification components off-the-shelf. The idea is to use existing components without having to change their implementation, while still adjusting the precision of the abstraction using the successful CEGAR approach. The approach is largely general: it only restricts the abstraction to transform, given a precision that defines the level of abstraction, one program into another program. The abstraction by program transformation can over-approximate the data flow (e.g., havoc some variables, use more abstract types) or the control flow (e.g., loop abstraction, slicing).}, keyword = {Software Model Checking}, artifact = {10.5281/zenodo.8287183}, funding = {DFG-CONVEY}, video = {https://youtu.be/ASZ6hoq8asE}, } -
CoVeriTeam Service: Verification as a Service.
In Proc. ICSE,
pages 21-25,
2023.
IEEE.
doi:10.1109/ICSE-Companion58688.2023.00017
Keyword(s):
Software Model Checking,
Incremental Verification,
Cooperative Verification
Funding:
DFG-CONVEY,
DFG-COOP
Publisher's Version
PDF
Supplement
Artifact(s)
Abstract
The research community has developed numerous tools for solving verification problems, but we are missing a common interface for executing them. This means users have to spend considerable effort on the installation and parameter setup, for each new tool (version) they want to execute. The situation could make a verification researcher wanting to experiment with a new verification tool turn away from it. We aim to make it easier for users to execute verification tools, as well as provide a mechanism for tool developers to make their tools easily accessible. Our solution combines a web service and a common interface for verification tools. The presented service has been used during the 2023 competitions on software verification and testing, for integration testing. As another use case, we developed a service for incremental verification on top of the CoVeriTeam Service and demonstrate its use in a continuous-integration process.BibTeX Entry
@inproceedings{ICSE23, author = {Dirk Beyer and Sudeep Kanav and Henrik Wachowitz}, title = {{CoVeriTeam Service}: {Verification} as a Service}, booktitle = {Proc.\ ICSE}, pages = {21-25}, year = {2023}, publisher = {IEEE}, doi = {10.1109/ICSE-Companion58688.2023.00017}, url = {https://coveriteam-service.sosy-lab.org/static/index.html}, pdf = {https://www.sosy-lab.org/research/pub/2023-ICSE.CoVeriTeam_Service_Verification_as_a_Service.pdf}, abstract = {The research community has developed numerous tools for solving verification problems, but we are missing a common interface for executing them. This means users have to spend considerable effort on the installation and parameter setup, for each new tool (version) they want to execute. The situation could make a verification researcher wanting to experiment with a new verification tool turn away from it. We aim to make it easier for users to execute verification tools, as well as provide mechanism for tool developers to make their tools easily accessible. Our solution combines a web service and a common interface for verification tools. The presented service has been used during the 2023 competitions on software verification and testing, for integration testing. As another use- case, we developed a service for incremental verification on top of the {CoVeriTeam} Service and demonstrate its use in a continuous-integration process.}, keyword = {Software Model Checking,Incremental Verification,Cooperative Verification}, _sha256 = {604dd391b6a49e46e97b6faafbb3cc331ccf5c04e3d364cf1e76a2c99c1c267f}, artifact = {10.5281/zenodo.7276532}, funding = {DFG-CONVEY,DFG-COOP}, } -
Software Testing: 5th Comparative Evaluation: Test-Comp 2023.
In L. Lambers and
S. Uchitel, editors,
Proceedings of the 26th International Conference on
Fundamental Approaches to Software Engineering
(FASE 2023, Paris, France, April 22-27),
LNCS 13991,
pages 309-323,
2023.
Springer.
doi:10.1007/978-3-031-30826-0_17
Keyword(s):
Competition on Software Testing (Test-Comp),
Competition on Software Testing (Test-Comp Report),
Software Testing
Funding:
DFG-COOP
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{FASE23, author = {Dirk Beyer}, title = {Software Testing: 5th Comparative Evaluation: {Test-Comp 2023}}, booktitle = {Proceedings of the 26th International Conference on Fundamental Approaches to Software Engineering (FASE~2023, Paris, France, April 22-27)}, editor = {L. Lambers and S. Uchitel}, pages = {309--323}, year = {2023}, series = {LNCS~13991}, publisher = {Springer}, isbn = {}, doi = {10.1007/978-3-031-30826-0_17}, sha256 = {7110c26bf3c9311f84346a108a59318687bdadde4879f83d047f1a0fc546b630}, url = {https://test-comp.sosy-lab.org/2023/}, keyword = {Competition on Software Testing (Test-Comp),Competition on Software Testing (Test-Comp Report),Software Testing}, _pdf = {https://www.sosy-lab.org/research/pub/2023-FASE.Software_Testing_5th_Comparative_Evaluation_Test-Comp_2023.pdf}, funding = {DFG-COOP}, } -
Competition on Software Verification and Witness Validation: SV-COMP 2023.
In S. Sankaranarayanan and
N. Sharygina, editors,
Proceedings of the 29th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems
(TACAS 2023, Paris, France, April 22-27),
LNCS 13994,
pages 495-522,
2023.
Springer.
doi:10.1007/978-3-031-30820-8_29
Keyword(s):
Competition on Software Verification (SV-COMP),
Competition on Software Verification (SV-COMP Report),
Software Model Checking
Funding:
DFG-COOP
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{TACAS23b, author = {Dirk Beyer}, title = {Competition on Software Verification and Witness Validation: {SV-COMP 2023}}, booktitle = {Proceedings of the 29th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2023, Paris, France, April 22-27)}, editor = {S. Sankaranarayanan and N. Sharygina}, pages = {495--522}, year = {2023}, series = {LNCS~13994}, publisher = {Springer}, doi = {10.1007/978-3-031-30820-8_29}, sha256 = {1d35ae38d4e87c267ccc34cba880994b6f6a7927491ec13ba3cc548a29e81e5c}, url = {https://sv-comp.sosy-lab.org/2023/}, keyword = {Competition on Software Verification (SV-COMP),Competition on Software Verification (SV-COMP Report),Software Model Checking}, _pdf = {https://www.sosy-lab.org/research/pub/2023-TACAS.Competition_on_Software_Verification_and_Witness_Validation_SV-COMP_2023.pdf}, funding = {DFG-COOP}, } -
Bridging Hardware and Software Analysis with Btor2C: A Word-Level-Circuit-to-C Translator.
In Proc. TACAS,
LNCS 13994,
pages 152-172,
2023.
Springer.
doi:10.1007/978-3-031-30820-8_12
Keyword(s):
Software Model Checking,
Cooperative Verification,
Btor2
Funding:
DFG-CONVEY
Publisher's Version
PDF
Presentation
Supplement
Artifact(s)
Abstract
Across the broad field for the analysis of computational systems, research endeavors are often categorized by the respective models under investigation. Algorithms and tools are usually developed for a specific model, hindering their applications to similar problems originating from other computational systems. A prominent example of such situation is the studies on formal verification and testing for hardware and software systems. The two research communities share common theoretical foundations and solving methods, including satisfiability, interpolation, and abstraction refinement. Nevertheless, it is often demanding for one community to benefit from the advancements of the other, as analyzers typically assume a particular input format. To bridge the gap between the hardware and software analysis, we propose Btor2C, a converter from word-level sequential circuits to C programs. We choose the Btor2 language as the input format for its simplicity and bit-precise semantics. It can be deemed as an intermediate representation tailored for analysis. Given a Btor2 circuit, Btor2C generates a behaviorally equivalent program in the C language, supported by most static program analyzers. We demonstrate the use cases of Btor2C by translating the benchmark set from the Hardware Model Checking Competitions into C programs and analyze them by tools from the Competitions on Software Verification and Testing. Our results show that software analyzers can complement hardware verifiers for enhanced quality assurance.BibTeX Entry
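The following schematic (an editor's sketch of the general shape such a translation can take, not Btor2C's actual output) shows how a word-level sequential circuit maps to C: registers become variables initialized to their reset values, each iteration of an unbounded loop models one clock cycle with fresh nondeterministic inputs, and the Btor2 bad-state property becomes an assertion. __VERIFIER_nondet_uchar follows the SV-COMP convention for nondeterministic inputs.

#include <assert.h>
extern unsigned char __VERIFIER_nondet_uchar(void);  /* SV-COMP-style nondeterministic input */

int main(void) {
  unsigned char count = 0;                            /* 4-bit register, reset value 0 */
  while (1) {                                         /* one iteration per clock cycle */
    unsigned char en = __VERIFIER_nondet_uchar() & 1; /* 1-bit primary input */
    assert(count <= 0x0F);                            /* negation of the 'bad' state */
    count = (unsigned char) ((count + en) & 0x0F);    /* next-state function */
  }
}

A bit-precise software analyzer can then check the reachability of the bad state directly on such a program.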
@inproceedings{TACAS23a, author = {Dirk Beyer and Po-Chun Chien and Nian-Ze Lee}, title = {Bridging Hardware and Software Analysis with {Btor2C}: {A} Word-Level-Circuit-to-{C} Translator}, booktitle = {Proc.\ TACAS}, pages = {152-172}, year = {2023}, series = {LNCS~13994}, publisher = {Springer}, doi = {10.1007/978-3-031-30820-8_12}, url = {https://www.sosy-lab.org/research/btor2c/}, presentation = {https://www.sosy-lab.org/research/prs/2023-04-26_TACAS23_Bridging_Hardware_and_Software_Analysis_with_Btor2C_Po-Chun.pdf}, abstract = {Across the broad field for the analysis of computational systems, research endeavors are often categorized by the respective models under investigation. Algorithms and tools are usually developed for a specific model, hindering their applications to similar problems originating from other computational systems. A prominent example of such situation is the studies on formal verification and testing for hardware and software systems. The two research communities share common theoretical foundations and solving methods, including satisfiability, interpolation, and abstraction refinement. Nevertheless, it is often demanding for one community to benefit from the advancements of the other, as analyzers typically assume a particular input format. To bridge the gap between the hardware and software analysis, we propose Btor2C, a converter from word-level sequential circuits to C programs. We choose <a href="https://doi.org/10.1007/978-3-319-96145-3_32">the Btor2 language</a> as the input format for its simplicity and bit-precise semantics. It can be deemed as an intermediate representation tailored for analysis. Given a Btor2 circuit, Btor2C generates a behaviorally equivalent program in the C language, supported by most static program analyzers. We demonstrate the use cases of Btor2C by translating the benchmark set from the Hardware Model Checking Competitions into C programs and analyze them by tools from the Competitions on Software Verification and Testing. Our results show that software analyzers can complement hardware verifiers for enhanced quality assurance.}, keyword = {Software Model Checking, Cooperative Verification, Btor2}, _pdf = {https://www.sosy-lab.org/research/pub/2023-TACAS.Bridging_Hardware_and_Software_Analysis_with_Btor2C_A_Word-Level-Circuit-to-C_Translator.pdf}, artifact = {10.5281/zenodo.7551707}, funding = {DFG-CONVEY}, } -
Component-based CEGAR - Building Software Verifiers from Off-the-Shelf Components.
In G. Engels,
R. Hebig, and
M. Tichy, editors,
Software Engineering 2023, Fachtagung des GI-Fachbereichs Softwaretechnik, 20.-24. Februar 2023, Paderborn,
LNI P-332,
pages 37-38,
2023.
GI.
Keyword(s):
CPAchecker,
Software Model Checking,
Cooperative Verification
Publisher's Version
PDF
BibTeX Entry
@inproceedings{SE23, author = {Dirk Beyer and Jan Haltermann and Thomas Lemberger and Heike Wehrheim}, title = {Component-based {CEGAR} - Building Software Verifiers from Off-the-Shelf Components}, booktitle = {Software Engineering 2023, Fachtagung des GI-Fachbereichs Softwaretechnik, 20.-24. Februar 2023, Paderborn}, editor = {G.~Engels and R.~Hebig and M.~Tichy}, pages = {37--38}, year = {2023}, series = {{LNI}~P-332}, publisher = {{GI}}, sha256 = {}, pdf = {https://sosy-lab.org/research/pub/2023-SE.Component-based_CEGAR_Building_Software_Verifiers_from_Off-the-Shelf_Components.pdf}, abstract = {}, keyword = {CPAchecker,Software Model Checking,Cooperative Verification}, annote = {This is a summary of a <a href="https://www.sosy-lab.org/research/bib/Year/2022.html#ICSE22">full article on this topic</a> that appeared in Proc. ICSE 2022.}, doinone = {DOI not available}, isbnnote = {978-3-88579-726-5}, urlpub = {https://dspace.gi.de/handle/20.500.12116/40128}, }Additional Infos
This is a summary of a full article on this topic that appeared in Proc. ICSE 2022. -
diffDP: Using Data Dependencies and Properties in Difference Verification
with Conditions.
In Proc. iFM,
LNCS 14300,
pages 40-61,
2023.
Springer.
doi:10.1007/978-3-031-47705-8_3
Keyword(s):
Incremental Verification,
Regression Verification,
Software Model Checking,
CPAchecker
Funding:
Software-Factory 4.0,
DFG-ReVeriX
Publisher's Version
PDF
Artifact(s)
BibTeX Entry
@inproceedings{diffDP, author = {Marie{-}Christine Jakobs and Tim Pollandt}, title = {diffDP: Using Data Dependencies and Properties in Difference Verification with Conditions}, booktitle = {Proc.\ iFM}, pages = {40-61}, year = {2023}, series = {LNCS~14300}, publisher = {Springer}, doi = {10.1007/978-3-031-47705-8_3}, url = {}, pdf = {}, keyword = {Incremental Verification, Regression Verification, Software Model Checking, CPAchecker}, annote = {}, artifact = {10.5281/zenodo.8272913}, funding = {Software-Factory 4.0, DFG-ReVeriX}, } -
Ranged Program Analysis via Instrumentation.
In Proc. SEFM,
LNCS 14323,
pages 145-164,
2023.
Springer.
doi:10.1007/978-3-031-47115-5_9
Keyword(s):
Software Model Checking,
CPAchecker,
Ranged Program Analysis,
Program Instrumentation
Publisher's Version
PDF
Artifact(s)
BibTeX Entry
@inproceedings{RangedPAwithInstrumentation, author = {Jan Haltermann and Marie{-}Christine Jakobs and Cedric Richter and Heike Wehrheim}, title = {Ranged Program Analysis via Instrumentation}, booktitle = {Proc.\ SEFM}, pages = {145-164}, year = {2023}, series = {LNCS~14323}, publisher = {Springer}, doi = {10.1007/978-3-031-47115-5_9}, url = {}, pdf = {}, keyword = {Software Model Checking, CPAchecker, Ranged Program Analysis, Program Instrumentation}, annote = {}, artifact = {10.5281/zenodo.8065229}, funding = {}, } -
Parallel Program Analysis via Range Splitting.
In Proc. FASE,
LNCS 13991,
pages 195-219,
2023.
Springer.
doi:10.1007/978-3-031-30826-0_11
Keyword(s):
Ranged Program Analysis,
Cooperative Verification,
Software Model Checking,
CPAchecker
Funding:
DFG-COOP
Publisher's Version
PDF
BibTeX Entry
@inproceedings{RangedPA-CPA, author = {Jan Haltermann and Marie{-}Christine Jakobs and Cedric Richter and Heike Wehrheim}, title = {Parallel Program Analysis via Range Splitting}, booktitle = {Proc.\ {FASE}}, pages = {195--219}, year = {2023}, series = {LNCS~13991}, publisher = {Springer}, doi = {10.1007/978-3-031-30826-0_11}, url = {}, pdf = {}, keyword = {Ranged Program Analysis, Cooperative Verification, Software Model Checking, CPAchecker}, annote = {}, artifact = {}, funding = {DFG-COOP}, } -
Variable Misuse Detection: Software Developers versus Neural Bug Detectors.
In Software Engineering 2023, Fachtagung des GI-Fachbereichs Softwaretechnik,
20.-24. Februar 2023, Paderborn,
LNI P-332,
pages 103-104,
2023.
GI.
Keyword(s):
Bug Detection,
Empirical Study
Publisher's Version
Artifact(s)
BibTeX Entry
@inproceedings{JakobsSE2023, author = {Cedric Richter and Jan Haltermann and Marie{-}Christine Jakobs and Felix Pauck and Stefan Schott and Heike Wehrheim}, title = {Variable Misuse Detection: Software Developers versus Neural Bug Detectors}, booktitle = {Software Engineering 2023, Fachtagung des GI-Fachbereichs Softwaretechnik, 20.-24. Februar 2023, Paderborn}, pages = {103-104}, year = {2023}, series = {{LNI}~{P-332}}, publisher = {GI}, pdf = {}, keyword = {Bug Detection, Empirical Study}, artifact = {10.5281/zenodo.6958242}, doinone = {DOI not available}, urlpub = {https://dl.gi.de/handle/20.500.12116/40105}, } -
Realisability of Global Models of Interaction.
In Proceedings of the International Colloquium on Theoretical Aspects of Computing (ICTAC) 2023,
2023.
Abstract
We consider global models of communicating agents specified as transition systems labelled by interactions in which multiple senders and receivers can participate. A realisation of such a model is a set of local transition systems—one for each agent—which are executed concurrently using synchronous communication. Our core challenge is how to check whether a global model is realisable and, if it is, how to synthesise a realisation. We identify and compare two variants to realise global interaction models, both relying on bisimulation equivalence. Then we investigate, for both variants, realisability conditions to be checked on global models. We propose a synthesis method for the construction of realisations by grouping locally indistinguishable states. The paper is accompanied by a tool that implements realisability checks and synthesises realisations.BibTeX Entry
@inproceedings{HennickerICTAC23, author = {Maurice ter Beek and Rolf Hennicker and José Proença}, title = {Realisability of Global Models of Interaction}, booktitle = {Proceedings of the International Colloquium on Theoretical Aspects of Computing (ICTAC) 2023}, year = {2023}, abstract = {We consider global models of communicating agents specified as transition systems labelled by interactions in which multiple senders and receivers can participate. A realisation of such a model is a set of local transition systems—one for each agent—which are executed concurrently using synchronous communication. Our core challenge is how to check whether a global model is realisable and, if it is, how to synthesise a realisation. We identify and compare two variants to realise global interaction models, both relying on bisimulation equivalence. Then we investigate, for both variants, realisability conditions to be checked on global models. We propose a synthesis method for the construction of realisations by grouping locally indistinguishable states. The paper is accompanied by a tool that implements realisability checks and synthesises realisations.}, } -
Large Language Model Assisted Software Engineering: Prospects, Challenges, and a Case Study.
In Bernhard Steffen, editor,
Proc. AISoLA,
LNCS 14380,
2023.
Springer.
To appear.
PDF
BibTeX Entry
@inproceedings{WirsingAISoLA23, author = {Lenz Belzner and Thomas Gabor and Martin Wirsing}, title = {Large Language Model Assisted Software Engineering: Prospects, Challenges, and a Case Study}, booktitle = {Proc. AISoLA}, editor = {Bernhard Steffen}, year = {2023}, series = {LNCS~14380}, publisher = {Springer}, pdf = {https://sosy-lab.org/research/pub/2023-AISoLA.Large_Language_Model_Assisted_Software_Engineering.pdf}, note = {To appear.}, } -
Towards Systematically Engineering Autonomous Systems using Reinforcement Learning and Planning.
In Pedro López-García,
John P. Gallagher, and
Roberto Giacobazzi, editors,
Analysis, Verification and Transformation for Declarative Programming and Intelligent Systems - Essays Dedicated to Manuel Hermenegildo on the Occasion of His 60th Birthday,
LNCS 13160,
pages 281-306,
2023.
Springer.
doi:10.1007/978-3-031-31476-6_16
Publisher's Version
PDF
BibTeX Entry
@inproceedings{WirsingAVERTIS23, author = {Martin Wirsing and Lenz Belzner}, title = {Towards Systematically Engineering Autonomous Systems using Reinforcement Learning and Planning}, booktitle = {Analysis, Verification and Transformation for Declarative Programming and Intelligent Systems - Essays Dedicated to Manuel Hermenegildo on the Occasion of His 60th Birthday}, editor = {Pedro López-García and John P. Gallagher and Roberto Giacobazzi}, pages = {281--306}, year = {2023}, series = {LNCS~13160}, publisher = {Springer}, doi = {10.1007/978-3-031-31476-6_16}, pdf = {https://sosy-lab.org/research/pub/2023-AVERTIS.Towards_Systematically_Engineering_Autonomous_Systems_using_Reinforcement_Learning_and_Planning.pdf}, }
2022
-
Software Model Checking: 20 Years and Beyond.
In Principles of Systems Design,
LNCS 13660,
pages 554-582,
2022.
Springer.
doi:10.1007/978-3-031-22337-2_27
Keyword(s):
Software Model Checking
Publisher's Version
PDF
BibTeX Entry
@inproceedings{Henzinger22, author = {Dirk Beyer and Andreas Podelski}, title = {Software Model Checking: 20 Years and Beyond}, booktitle = {Principles of Systems Design}, pages = {554-582}, year = {2022}, series = {LNCS~13660}, publisher = {Springer}, doi = {10.1007/978-3-031-22337-2_27}, sha256 = {87a441617d1194266dff5fd5bd143370e9b318e72848b2d6e3c49f152a136799}, url = {}, abstract = {}, keyword = {Software Model Checking}, _pdf = {}, editors = {J-F.~Raskin and K.~Chatterjee and L.~Doyen and R.~Majumdar}, funding = {}, } -
Case Study on Verification-Witness Validators: Where We Are and Where We Go.
In Proceedings of the 29th International Symposium on
Static Analysis
(SAS 2022, Auckland, New Zealand, December 5-7, 2022),
LNCS 13790,
pages 160-174,
2022.
Springer.
doi:10.1007/978-3-031-22308-2_8
Keyword(s):
Software Model Checking
Publisher's Version
PDF
BibTeX Entry
@inproceedings{SAS22, author = {Dirk Beyer and Jan Strejček}, title = {Case Study on Verification-Witness Validators: Where We Are and Where We Go}, booktitle = {Proceedings of the 29th International Symposium on Static Analysis, (SAS~2022, Auckland, New Zealand, December 5-7, 2022)}, pages = {160-174}, year = {2022}, series = {LNCS~13790}, publisher = {Springer}, doi = {10.1007/978-3-031-22308-2_8}, sha256 = {8003de86c73be27da528c44f440a49cd03a877649c9cb61a328a37507bc963da}, url = {}, abstract = {}, keyword = {Software Model Checking}, _pdf = {}, editors = {Gagandeep Singh and Caterina Urban}, funding = {}, } -
A Retrospective Study of One Decade of Artifact Evaluations.
In Abhik Roychoudhury,
Cristian Cadar, and
Miryung Kim, editors,
Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the
Foundations of Software Engineering,
ESEC/FSE 2022, Singapore, Singapore, November 14-18,
pages 145-156,
2022.
ACM.
doi:10.1145/3540250.3549172
Keyword(s):
Software Model Checking
Publisher's Version
PDF
BibTeX Entry
@inproceedings{FSE22, author = {Stefan Winter and Christopher Steven Timperley and Ben Hermann and Jürgen Cito and Jonathan Bell and Michael Hilton and Dirk Beyer}, title = {A Retrospective Study of One Decade of Artifact Evaluations}, booktitle = {Proceedings of the 30th {ACM} Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE 2022, Singapore, Singapore, November 14-18}, editor = {Abhik Roychoudhury and Cristian Cadar and Miryung Kim}, pages = {145-156}, year = {2022}, publisher = {ACM}, doi = {10.1145/3540250.3549172}, keyword = {Software Model Checking}, _sha256 = {5ad5c04b173c8c68f651b955545d44a7d74c0cf497b2c7ec988768d7459e26b4}, funding = {}, } -
Cooperation between Automatic and Interactive Software Verifiers.
In Bernd-Holger Schlingloff and
Ming Chai, editors,
Proceedings of the 20th International Conference on
Software Engineering and Formal Methods
(SEFM 2022, Berlin, Germany, September 26-30),
LNCS 13550,
pages 111-128,
2022.
Springer.
doi:10.1007/978-3-031-17108-6_7
Keyword(s):
Software Model Checking,
CPAchecker
Funding:
DFG-CONVEY
Publisher's Version
PDF
BibTeX Entry
@inproceedings{SEFM22b, author = {Dirk Beyer and Martin Spiessl and Sven Umbricht}, title = {Cooperation between Automatic and Interactive Software Verifiers}, booktitle = {Proceedings of the 20th International Conference on Software Engineering and Formal Methods, (SEFM~2022, Berlin, Germany, September 26-30}, editor = {Bernd-Holger Schlingloff and Ming Chai}, pages = {111–128}, year = {2022}, series = {LNCS~13550}, publisher = {Springer}, doi = {10.1007/978-3-031-17108-6_7}, sha256 = {a310ff0ac97f37ee817c6f05a4cc9a635cbacd09ad301b483095f133040e8e48}, url = {}, abstract = {}, keyword = {Software Model Checking, CPAchecker}, _pdf = {https://www.sosy-lab.org/research/pub/2022-SEFM.Cooperation_between_Automatic_and_Interactive_Software_Verifiers.pdf}, funding = {DFG-CONVEY}, } -
A Unifying Approach for Control-Flow-Based Loop Abstraction.
In Bernd-Holger Schlingloff and
Ming Chai, editors,
Proceedings of the 20th International Conference on
Software Engineering and Formal Methods
(SEFM 2022, Berlin, Germany, September 26-30),
LNCS 13550,
pages 3-19,
2022.
Springer.
doi:10.1007/978-3-031-17108-6_1
Keyword(s):
Software Model Checking,
CPAchecker
Funding:
DFG-CONVEY
Publisher's Version
PDF
BibTeX Entry
@inproceedings{SEFM22a, author = {Dirk Beyer and Marian Lingsch Rosenfeld and Martin Spiessl}, title = {A Unifying Approach for Control-Flow-Based Loop Abstraction}, booktitle = {Proceedings of the 20th International Conference on Software Engineering and Formal Methods, (SEFM~2022, Berlin, Germany, September 26-30}, editor = {Bernd-Holger Schlingloff and Ming Chai}, pages = {3-19}, year = {2022}, series = {LNCS~13550}, publisher = {Springer}, doi = {10.1007/978-3-031-17108-6_1}, sha256 = {047a8a9062e143741623320cf80ec963ce5f7200a5a75d263fa6615c12f2199e}, url = {}, abstract = {}, keyword = {Software Model Checking, CPAchecker}, _pdf = {https://www.sosy-lab.org/research/pub/2022-SEFM.A_Unifying_Approach_for_Control-Flow-Based_Loop_Abstraction.pdf}, funding = {DFG-CONVEY}, } -
Decomposing Software Verification into Off-the-Shelf Components: An Application to CEGAR.
In Proceedings of the 44th International Conference on
Software Engineering (ICSE 2022, Pittsburgh, PA, USA, May 8-20 (Virtual), May 22-27 (In-Person)),
pages 536-548,
2022.
ACM.
doi:10.1145/3510003.3510064
Keyword(s):
CPAchecker,
Software Model Checking,
Interfaces for Component-Based Design
Funding:
DFG-COOP
Publisher's Version
PDF
Supplement
Artifact(s)
Abstract
Techniques for software verification are typically realized as cohesive units of software with tightly coupled components. This makes it difficult to re-use components, and the potential for workload distribution is limited. Innovations in software verification might find their way into practice faster if provided in smaller, more specialized components. In this paper, we propose to strictly decompose software verification: the verification task is split into independent subtasks, implemented by only loosely coupled components communicating via clearly defined interfaces. We apply this decomposition concept to one of the most frequently employed techniques in software verification: counterexample-guided abstraction refinement (CEGAR). CEGAR is a technique to iteratively compute an abstract model of the system. We develop a decomposition of CEGAR into independent components with clearly defined interfaces that are based on existing, standardized exchange formats. Its realization component-based CEGAR (C-CEGAR) concerns the three core tasks of CEGAR: abstract-model exploration, feasibility check, and precision refinement. We experimentally show that - despite the necessity of exchanging complex data via interfaces - the efficiency thereby only reduces by a small constant factor while the precision in solving verification tasks even increases. We furthermore illustrate the advantages of C-CEGAR by experimenting with different implementations of components, thereby further increasing the overall effectiveness and testing that substitution of components works well.BibTeX Entry
@inproceedings{ICSE22, author = {Dirk Beyer and Jan Haltermann and Thomas Lemberger and Heike Wehrheim}, title = {Decomposing Software Verification into Off-the-Shelf Components: An Application to {CEGAR}}, booktitle = {Proceedings of the 44th International Conference on Software Engineering (ICSE~2022, Pittsburgh, PA, USA, May 8-20 (Virtual), May 22-27 (In-Person))}, pages = {536-548}, year = {2022}, publisher = {ACM}, doi = {10.1145/3510003.3510064}, url = {https://www.sosy-lab.org/research/component-based-cegar/}, abstract = {Techniques for software verification are typically realized as cohesive units of software with tightly coupled components. This makes it difficult to re-use components, and the potential for workload distribution is limited. Innovations in software verification might find their way into practice faster if provided in smaller, more specialized components. In this paper, we propose to strictly decompose software verification: the verification task is split into independent subtasks, implemented by only loosely coupled components communicating via clearly defined interfaces. We apply this decomposition concept to one of the most frequently employed techniques in software verification: counterexample-guided abstraction refinement (CEGAR). CEGAR is a technique to iteratively compute an abstract model of the system. We develop a decomposition of CEGAR into independent components with clearly defined interfaces that are based on existing, standardized exchange formats. Its realization component-based CEGAR (C-CEGAR) concerns the three core tasks of CEGAR: abstract-model exploration, feasibility check, and precision refinement. We experimentally show that --- despite the necessity of exchanging complex data via interfaces --- the efficiency thereby only reduces by a small constant factor while the precision in solving verification tasks even increases. We furthermore illustrate the advantages of C-CEGAR by experimenting with different implementations of components, thereby further increasing the overall effectiveness and testing that substitution of components works well.}, keyword = {CPAchecker,Software Model Checking,Interfaces for Component-Based Design}, _pdf = {https://www.sosy-lab.org/research/pub/2022-ICSE.Decomposing_Software_Verification_into_Off-the-Shelf-Components.pdf}, _sha256 = {be1c5d744475af00f5a0cddd51d92353296d1d8e5ba60f5439ba5b98217e0e03}, artifact = {10.5281/zenodo.5301636}, funding = {DFG-COOP}, } -
The Static Analyzer Frama-C in SV-COMP (Competition Contribution).
In Dana Fisman and
Grigore Rosu, editors,
Proceedings of the 28th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems
(TACAS 2022, Munich, Germany, April 2-7),
LNCS 13244,
pages 429-434,
2022.
Springer.
doi:10.1007/978-3-030-99527-0_26
Keyword(s):
Competition on Software Verification (SV-COMP),
Software Model Checking
Funding:
DFG-CONVEY
Publisher's Version
PDF
BibTeX Entry
@inproceedings{TACAS22c, author = {Dirk Beyer and Martin Spiessl}, title = {The Static Analyzer {Frama-C} in {SV-COMP} (Competition Contribution)}, booktitle = {Proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2022, Munich, Germany, April 2-7}, editor = {Dana Fisman and Grigore Rosu}, pages = {429--434}, year = {2022}, series = {LNCS~13244}, publisher = {Springer}, doi = {10.1007/978-3-030-99527-0_26}, sha256 = {77ed425c2b30a4f9424ed46c9cb5a846f5c21677ececdbf098e30f37aca67a3d}, url = {}, abstract = {}, keyword = {Competition on Software Verification (SV-COMP),Software Model Checking}, _pdf = {https://www.sosy-lab.org/research/pub/2022-TACAS.The_Static_Analyzer_Frama-C_in_SV-COMP_Competition_Contribution.pdf}, funding = {DFG-CONVEY}, } -
Progress on Software Verification: SV-COMP 2022.
In D. Fisman and
G. Rosu, editors,
Proceedings of the 28th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems
(TACAS 2022, Munich, Germany, April 2-7),
LNCS 13244,
pages 375-402,
2022.
Springer.
doi:10.1007/978-3-030-99527-0_20
Keyword(s):
Competition on Software Verification (SV-COMP),
Competition on Software Verification (SV-COMP Report),
Software Model Checking
Funding:
DFG-COOP
Publisher's Version
PDF
BibTeX Entry
@inproceedings{TACAS22b, author = {Dirk Beyer}, title = {Progress on Software Verification: {SV-COMP 2022}}, booktitle = {Proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2022, Munich, Germany, April 2-7}, editor = {D.~Fisman and G.~Rosu}, pages = {375-402}, year = {2022}, series = {LNCS~13244}, publisher = {Springer}, doi = {10.1007/978-3-030-99527-0_20}, sha256 = {88d2b7552d79ad77c4e000f83a18f9d71038f7ddfca6c0f0700644405a115943}, url = {}, abstract = {}, keyword = {Competition on Software Verification (SV-COMP),Competition on Software Verification (SV-COMP Report),Software Model Checking}, _pdf = {https://www.sosy-lab.org/research/pub/2022-TACAS.Progress_on_Software_Verification_SV-COMP_2022.pdf}, funding = {DFG-COOP}, } -
CoVeriTeam: On-Demand Composition of Cooperative Verification Systems.
In D. Fisman and
G. Rosu, editors,
Proceedings of the 28th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems
(TACAS 2022, Munich, Germany, April 2-7),
LNCS 13243,
pages 561-579,
2022.
Springer.
doi:10.1007/978-3-030-99524-9_31
Keyword(s):
Software Model Checking,
Cooperative Verification
Funding:
DFG-COOP
Publisher's Version
PDF
Presentation
Supplement
BibTeX Entry
@inproceedings{TACAS22a, author = {Dirk Beyer and Sudeep Kanav}, title = {{CoVeriTeam}: {O}n-Demand Composition of Cooperative Verification Systems}, booktitle = {Proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2022, Munich, Germany, April 2-7}, editor = {D.~Fisman and G.~Rosu}, pages = {561-579}, year = {2022}, series = {LNCS~13243}, publisher = {Springer}, doi = {10.1007/978-3-030-99524-9_31}, sha256 = {e38311ae071351301b08d16849ee309a86efdc07fc45e18e466b4735ef21f241}, url = {https://www.sosy-lab.org/research/coveriteam/}, presentation = {https://www.sosy-lab.org/research/prs/2022-04-06_TACAS22_CoVeriTeam_Sudeep.pdf}, abstract = {}, keyword = {Software Model Checking,Cooperative Verification}, funding = {DFG-COOP}, } -
Advances in Automatic Software Testing: Test-Comp 2022.
In E. B. Johnsen and
M. Wimmer, editors,
Proceedings of the 25th International Conference on
Fundamental Approaches to Software Engineering
(FASE 2022, Munich, Germany, April 2-7),
LNCS 13241,
pages 321-335,
2022.
Springer.
doi:10.1007/978-3-030-99429-7_18
Keyword(s):
Competition on Software Testing (Test-Comp),
Competition on Software Testing (Test-Comp Report),
Software Testing
Funding:
DFG-COOP
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{FASE22b, author = {Dirk Beyer}, title = {Advances in Automatic Software Testing: {Test-Comp 2022}}, booktitle = {Proceedings of the 25th International Conference on Fundamental Approaches to Software Engineering (FASE~2022, Munich, Germany, April 2-7)}, editor = {E.~B.~Johnsen and M.~Wimmer}, pages = {321-335}, year = {2022}, series = {LNCS~13241}, publisher = {Springer}, isbn = {}, doi = {10.1007/978-3-030-99429-7_18}, sha256 = {3f921c8f232a5c970f678889de8c402313049522a5dfa69ca68cd01d9dd9fce3}, url = {https://test-comp.sosy-lab.org/2022/}, abstract = {}, keyword = {Competition on Software Testing (Test-Comp),Competition on Software Testing (Test-Comp Report),Software Testing}, _pdf = {https://www.sosy-lab.org/research/pub/2022-FASE.Advances_in_Automatic_Software_Testing_Test-Comp_2022.pdf}, funding = {DFG-COOP}, } -
Construction of Verifier Combinations Based on Off-the-Shelf Verifiers.
In E. B. Johnsen and
M. Wimmer, editors,
Proceedings of the 25th International Conference on
Fundamental Approaches to Software Engineering
(FASE 2022, Munich, Germany, April 2-7),
LNCS 13241,
pages 49-70,
2022.
Springer.
doi:10.1007/978-3-030-99429-7_3
Keyword(s):
Software Model Checking
Funding:
DFG-COOP
Publisher's Version
PDF
Presentation
Supplement
BibTeX Entry
@inproceedings{FASE22a, author = {Dirk Beyer and Sudeep Kanav and Cedric Richter}, title = {Construction of Verifier Combinations Based on Off-the-Shelf Verifiers}, booktitle = {Proceedings of the 25th International Conference on Fundamental Approaches to Software Engineering (FASE~2022, Munich, Germany, April 2-7)}, editor = {E.~B.~Johnsen and M.~Wimmer}, pages = {49-70}, year = {2022}, series = {LNCS~13241}, publisher = {Springer}, isbn = {}, doi = {10.1007/978-3-030-99429-7_3}, sha256 = {fa50620b5b60e7c8761ea251b3ab30ef1e18320d49d76f417eac6dcd5b4a0bbc}, url = {https://www.sosy-lab.org/research/coveriteam-combinations/}, presentation = {https://www.sosy-lab.org/research/prs/2022-04-04_FASE22-CoVeriTeam-Combinations_Cedric.pdf}, abstract = {}, keyword = {Software Model Checking}, funding = {DFG-COOP}, } -
State selection algorithms and their impact on the performance of stateful network protocol fuzzing.
In Proc. of Software Analysis, Evolution and Reengineering (SANER),
2022.
IEEE.
To appear.
BibTeX Entry
@inproceedings{ernst:saner2022, author = {Dongge Liu and Van-Thuan Pham and Gidon Ernst and Toby Murray and Benjamin Rubinstein}, title = {State selection algorithms and their impact on the performance of stateful network protocol fuzzing}, booktitle = {Proc. of Software Analysis, Evolution and Reengineering (SANER)}, year = {2022}, publisher = {IEEE}, note = {To appear.}, } -
Loop Verification with Invariants and Summaries.
In Proc. of Verification, Model-Checking, and Abstract Interpretation (VMCAI),
LNCS,
2022.
Springer.
BibTeX Entry
@inproceedings{ernst:vmcai2022, author = {Gidon Ernst}, title = {Loop Verification with Invariants and Summaries}, booktitle = {Proc. of Verification, Model-Checking, and Abstract Interpretation (VMCAI)}, volume = {13182}, year = {2022}, series = {LNCS}, publisher = {Springer}, } -
The Static Analyzer Infer in SV-COMP (Competition Contribution).
In Dana Fisman and
Grigore Rosu, editors,
Proceedings of the 28th International Conference
on Tools and Algorithms for the Construction and Analysis of Systems
(TACAS 2022, Munich, Germany, April 2-7), Part 2,
LNCS 13244,
pages 451-456,
2022.
Springer.
doi:10.1007/978-3-030-99527-0_30
Keyword(s):
Competition on Software Verification (SV-COMP)
Publisher's Version
PDF
Presentation
Abstract
We present Infer-SV, a wrapper that adapts Infer for SV-COMP. Infer is a static-analysis tool for C and other languages, developed by Facebook and used by multiple large companies. It is strongly aimed at industry and the internal use at Facebook. Despite its popularity, there are no reported numbers on its precision and efficiency. With Infer-SV, we take a first step towards an objective comparison of Infer with other SV-COMP participants from academia and industry.BibTeX Entry
@inproceedings{INFER-SVCOMP22, author = {Matthias Kettl and Thomas Lemberger}, title = {The Static Analyzer Infer in {SV-COMP} (Competition Contribution)}, booktitle = {Proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2022, Munich, Germany, April 2-7), Part 2}, editor = {Dana Fisman and Grigore Rosu}, pages = {451--456}, year = {2022}, series = {LNCS~13244}, publisher = {Springer}, doi = {10.1007/978-3-030-99527-0_30}, pdf = {https://www.sosy-lab.org/research/pub/2022-SVCOMP.The_Static_Analyzer_Infer_in_SV-COMP.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2022-04-07_TACAS_Infer.pdf}, abstract = {We present Infer-SV, a wrapper that adapts Infer for SV-COMP. Infer is a static-analysis tool for C and other languages, developed by Facebook and used by multiple large companies. It is strongly aimed at industry and the internal use at Facebook. Despite its popularity, there are no reported numbers on its precision and efficiency. With Infer-SV, we take a first step towards an objective comparison of Infer with other SV-COMP participants from academia and industry.}, keyword = {Competition on Software Verification (SV-COMP)}, } -
Rigorous Engineering of Collective Adaptive Systems: Introduction to the 4th Track Edition.
In Tiziana Margaria and
Bernhard Steffen, editors,
Leveraging Applications of Formal Methods, Verification and Validation. Adaptation and Learning - 11th International Symposium, ISoLA 2022, Rhodes, Greece, October 22-30, 2022, Proceedings, Part III,
LNCS 13703,
pages 3-12,
2022.
Springer.
doi:10.1007/978-3-031-19759-8_1
Publisher's Version
PDF
BibTeX Entry
@inproceedings{DBLP:conf/isola/WirsingNJ22, author = {Martin Wirsing and Rocco De Nicola and Stefan J{\"{a}}hnichen}, title = {Rigorous Engineering of Collective Adaptive Systems Introduction to the 4th Track Edition}, booktitle = {Leveraging Applications of Formal Methods, Verification and Validation. Adaptation and Learning - 11th International Symposium, ISoLA 2022, Rhodes, Greece, October 22-30, 2022, Proceedings, Part {III}}, editor = {Tiziana Margaria and Bernhard Steffen}, pages = {3--12}, year = {2022}, series = {LNCS~13703}, publisher = {Springer}, doi = {10.1007/978-3-031-19759-8_1}, pdf = {https://sosy-lab.org/research/pub/2022-ISOLA.Rigorous_Engineering_of_Collective_Adaptive_Systems.pdf}, } -
Epistemic Ensembles.
In Tiziana Margaria and
Bernhard Steffen, editors,
Leveraging Applications of Formal Methods, Verification and Validation. Adaptation and Learning - 11th International Symposium, ISoLA 2022, Rhodes, Greece, October 22-30, 2022, Proceedings, Part III,
LNCS 13703,
pages 110-126,
2022.
Springer.
doi:10.1007/978-3-031-19759-8_8
Publisher's Version
PDF
BibTeX Entry
@inproceedings{DBLP:conf/isola/HennickerKW22, author = {Rolf Hennicker and Alexander Knapp and Martin Wirsing}, title = {Epistemic Ensembles}, booktitle = {Leveraging Applications of Formal Methods, Verification and Validation. Adaptation and Learning - 11th International Symposium, ISoLA 2022, Rhodes, Greece, October 22-30, 2022, Proceedings, Part {III}}, editor = {Tiziana Margaria and Bernhard Steffen}, pages = {110--126}, year = {2022}, series = {LNCS~13703}, publisher = {Springer}, doi = {10.1007/978-3-031-19759-8_8}, pdf = {https://sosy-lab.org/research/pub/2022-ISOLA.Epistemic_Ensembles.pdf}, } -
On Learning Stable Cooperation in the Iterated Prisoner's Dilemma with Paid Incentives.
In 42nd IEEE International Conference on Distributed Computing Systems, ICDCS Workshops, Bologna, Italy, July 10, 2022,
pages 113-118,
2022.
IEEE.
doi:10.1109/ICDCSW56584.2022.00031
Publisher's Version
PDF
BibTeX Entry
@inproceedings{DBLP:conf/icdcs/SunPSWB22, author = {Xiyue Sun and Fabian Pieroth and Kyrill Schmid and Martin Wirsing and Lenz Belzner}, title = {On Learning Stable Cooperation in the Iterated Prisoner's Dilemma with Paid Incentives}, booktitle = {42nd {IEEE} International Conference on Distributed Computing Systems, {ICDCS} Workshops, Bologna, Italy, July 10, 2022}, pages = {113--118}, year = {2022}, publisher = {{IEEE}}, doi = {10.1109/ICDCSW56584.2022.00031}, pdf = {https://sosy-lab.org/research/pub/2022-ICDCSW.On_Learning_Stable_Cooperation_in_the_Iterated_Prisoners_Dilemma_with_Paid_Incentives.pdf}, }
2021
-
PJBDD: A BDD Library for Java and Multi-Threading.
In Proceedings of the 19th International Symposium on
Automated Technology for Verification and Analysis
(ATVA 2021, Gold Coast (Online), Australia, October 18-22),
2021.
Springer.
doi:10.1007/978-3-030-88885-5_10
Keyword(s):
PJBDD,
BDD
Funding:
DFG-CONVEY
Publisher's Version
PDF
Artifact(s)
Abstract
PJBDD is a flexible and modular Java library for binary decision diagrams (BDD), which are a well-known data structure for performing efficient operations on compressed sets and relations. BDDs have practical applications in composing and analyzing boolean functions, e.g., for computer-aided verification. Despite its importance, there are only a few BDD libraries available. PJBDD is based on a slim object-oriented design, supports multi-threaded execution of the BDD operations (internal) as well as thread-safe access to the operations from applications (external). It provides automatic reference counting and garbage collection. The modular design of the library allows us to provide a uniform API for binary decision diagrams, zero-suppressed decision diagrams, and also chained decision diagrams. This paper includes a compact evaluation of PJBDD, to demonstrate that concurrent operations on large BDDs scale well and parallelize nicely on multi-core CPUs.BibTeX Entry
@inproceedings{ATVA21, author = {Dirk Beyer and Karlheinz Friedberger and Stephan Holzner}, title = {{PJBDD}: {A} {BDD} Library for {Java} and Multi-Threading}, booktitle = {Proceedings of the 19th International Symposium on Automated Technology for Verification and Analysis (ATVA21~2021, Gold Coast (Online), Australia, October 18-22)}, year = {2021}, publisher = {Springer}, doi = {10.1007/978-3-030-88885-5_10}, pdf = {https://www.sosy-lab.org/research/pub/2021-ATVA.PJBDD_A_BDD_Library_for_Java_and_Multi_Threading.pdf}, abstract = {PJBDD is a flexible and modular Java library for binary decision diagrams (BDD), which are a well-known data structure for performing efficient operations on compressed sets and relations. BDDs have practical applications in composing and analyzing boolean functions, e.g., for computer-aided verification. Despite its importance, there are only a few BDD libraries available. PJBDD is based on a slim object-oriented design, supports multi-threaded execution of the BDD operations (internal) as well as thread-safe access to the operations from applications (external). It provides automatic reference counting and garbage collection. The modular design of the library allows us to provide a uniform API for binary decision diagrams, zero-suppressed decision diagrams, and also chained decision diagrams. This paper includes a compact evaluation of PJBDD, to demonstrate that concurrent operations on large BDDs scale well and parallelize nicely on multi-core CPUs.}, keyword = {PJBDD,BDD}, artifact = {10.5281/zenodo.5070156}, funding = {DFG-CONVEY}, } -
JavaSMT 3: Interacting with SMT Solvers in Java.
In A. Silva and
K. R. M. Leino, editors,
Proceedings of the 33rd International Conference on
Computer-Aided Verification
(CAV 2021, Los Angeles, California, USA, July 18-24),
LNCS 12760,
pages 1-13,
2021.
Springer.
doi:10.1007/978-3-030-81688-9_9
Keyword(s):
JavaSMT
Funding:
DFG-CONVEY
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{CAV21, author = {Daniel Baier and Dirk Beyer and Karlheinz Friedberger}, title = {JavaSMT 3: Interacting with SMT Solvers in Java}, booktitle = {Proceedings of the 33rd International Conference on Computer-Aided Verification (CAV~2021, Los Angeles, California, USA, July 18-24)}, editor = {A.~Silva and K.~R.~M.~Leino}, pages = {1-13}, year = {2021}, series = {LNCS~12760}, publisher = {Springer}, doi = {10.1007/978-3-030-81688-9_9}, sha256 = {6c0ff13c5dd8596e19be4176eefaafe5853d60a082b78ebd3f5e64381fdcb100}, url = {https://github.com/sosy-lab/java-smt}, abstract = {}, keyword = {JavaSMT}, _pdf = {https://www.sosy-lab.org/research/pub/2021-CAV.JavaSMT_3_Interacting_with_SMT_Solvers_in_Java.pdf}, funding = {DFG-CONVEY}, } -
Software Verification: 10th Comparative Evaluation (SV-COMP 2021).
In J. F. Groote and
K. G. Larsen, editors,
Proceedings of the 27th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems
(TACAS 2021, Luxembourg, Luxembourg, March 27 - April 1), part 2,
LNCS 12652,
pages 401-422,
2021.
Springer.
doi:10.1007/978-3-030-72013-1_24
Keyword(s):
Competition on Software Verification (SV-COMP),
Competition on Software Verification (SV-COMP Report),
Software Model Checking
Funding:
DFG-CONVEY
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{TACAS21, author = {Dirk Beyer}, title = {Software Verification: 10th Comparative Evaluation ({SV-COMP 2021})}, booktitle = {Proceedings of the 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2021, Luxembourg, Luxembourg, March 27 - April 1), part 2}, editor = {J.~F.~Groote and K.~G.~Larsen}, pages = {401-422}, year = {2021}, series = {LNCS~12652}, publisher = {Springer}, doi = {10.1007/978-3-030-72013-1_24}, sha256 = {d78bb586715b0650702665510258d8e53a7bd16ae2a3cc4568b5986527b29051}, url = {https://sv-comp.sosy-lab.org/2021/}, abstract = {}, keyword = {Competition on Software Verification (SV-COMP),Competition on Software Verification (SV-COMP Report),Software Model Checking}, funding = {DFG-CONVEY}, } -
Status Report on Software Testing: Test-Comp 2021.
In E. Guerra and
M. Stoelinga, editors,
Proceedings of the 24th International Conference on
Fundamental Approaches to Software Engineering
(FASE 2021, Luxembourg, Luxembourg, March 27 - April 1),
LNCS 12649,
pages 341-357,
2021.
Springer.
doi:10.1007/978-3-030-71500-7_17
Keyword(s):
Competition on Software Testing (Test-Comp),
Competition on Software Testing (Test-Comp Report),
Software Testing
Funding:
DFG-COOP
Publisher's Version
PDF
Supplement
Abstract
This report describes Test-Comp 2021, the 3rd edition of the Competition on Software Testing. The competition is a series of annual comparative evaluations of fully automatic software test generators for C programs. The competition has a strong focus on reproducibility of its results and its main goal is to provide an overview of the current state of the art in the area of automatic test-generation. The competition was based on 3 173 test-generation tasks for C programs. Each test-generation task consisted of a program and a test specification (error coverage, branch coverage). Test-Comp 2021 had 11 participating test generators from 6 countries.BibTeX Entry
@inproceedings{FASE21, author = {Dirk Beyer}, title = {Status Report on Software Testing: {Test-Comp 2021}}, booktitle = {Proceedings of the 24th International Conference on Fundamental Approaches to Software Engineering (FASE~2021, Luxembourg, Luxembourg, March 27 - April 1)}, editor = {E.~Guerra and M.~Stoelinga}, pages = {341-357}, year = {2021}, series = {LNCS~12649}, publisher = {Springer}, isbn = {978-3-030-71500-7}, doi = {10.1007/978-3-030-71500-7_17}, sha256 = {113b44c5be9f6d773ebd1a5cad91e8dc66f06d7af0b8c648c9dcea8d6bbc7e3d}, url = {https://test-comp.sosy-lab.org/2021/}, abstract = {This report describes Test-Comp 2021, the 3rd edition of the Competition on Software Testing. The competition is a series of annual comparative evaluations of fully automatic software test generators for C programs. The competition has a strong focus on reproducibility of its results and its main goal is to provide an overview of the current state of the art in the area of automatic test-generation. The competition was based on 3 173 test-generation tasks for C programs. Each test-generation task consisted of a program and a test specification (error coverage, branch coverage). Test-Comp 2021 had 11 participating test generators from 6 countries.}, keyword = {Competition on Software Testing (Test-Comp),Competition on Software Testing (Test-Comp Report),Software Testing}, funding = {DFG-COOP}, } -
Deductive Verification via the Debug Adapter Protocol.
In Proc. of Formal Integrated Development Environment (F-IDE),
2021.
BibTeX Entry
@inproceedings{ernst:fide2021, author = {Gidon Ernst and Johannes Blau and Toby Murray}, title = {Deductive Verification via the Debug Adapter Protocol}, booktitle = {Proc. of Formal Integrated Development Environment (F-IDE)}, year = {2021}, } -
Bridging Arrays and ADTs in Recursive Proofs.
In Proc. of Tools and Algorithms for the Construction and Analysis of Systems (TACAS),
LNCS,
pages 24-42,
2021.
Springer.
BibTeX Entry
@inproceedings{ernst:tacas2021, author = {Grigory Fedyukovich and Gidon Ernst}, title = {Bridging Arrays and {ADTs} in Recursive Proofs}, booktitle = {Proc. of Tools and Algorithms for the Construction and Analysis of Systems (TACAS)}, volume = {12652}, pages = {24--42}, year = {2021}, series = {LNCS}, publisher = {Springer}, } -
ARCH-COMP 2021 category report: Falsification with Validation of Results.
In Proc. of Applied Verification of Continuous and Hybrid Systems (ARCH),
EPiC,
pages 133-152,
2021.
EasyChair.
BibTeX Entry
@inproceedings{ernst:arch2021, author = {Gidon Ernst and others}, title = {{ARCH-COMP} 2021 category report: Falsification with Validation of Results}, booktitle = {Proc. of Applied Verification of Continuous and Hybrid Systems (ARCH)}, volume = {80}, pages = {133--152}, year = {2021}, series = {EPiC}, publisher = {EasyChair}, }
2020
-
Domain-Independent Interprocedural Program Analysis using Block-Abstraction Memoization.
In P. Devanbu,
M. Cohen, and
T. Zimmermann, editors,
Proceedings of the 28th ACM Joint European Software Engineering Conference and
Symposium on the Foundations of Software Engineering (ESEC/FSE 2020, Virtual Event, USA, November 8-13),
pages 50-62,
2020.
ACM.
doi:10.1145/3368089.3409718
Keyword(s):
CPAchecker,
Software Model Checking
Funding:
DFG-CONVEY
Publisher's Version
PDF
Supplement
Artifact(s)
BibTeX Entry
@inproceedings{FSE20, author = {Dirk Beyer and Karlheinz Friedberger}, title = {Domain-Independent Interprocedural Program Analysis using Block-Abstraction Memoization}, booktitle = {Proceedings of the 28th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE~2020, Virtual Event, USA, November 8-13)}, editor = {P.~Devanbu and M.~Cohen and T.~Zimmermann}, pages = {50-62}, year = {2020}, publisher = {ACM}, doi = {10.1145/3368089.3409718}, url = {https://cpachecker.sosy-lab.org}, keyword = {CPAchecker,Software Model Checking}, _sha256 = {36dc2a423425ee8bec03f0f4073e04f9121d299cc475e27190828e8276e00cb8}, artifact = {10.5281/zenodo.4024268}, funding = {DFG-CONVEY}, fundingid = {378803395}, } -
Violation Witnesses and Result Validation for Multi-Threaded Programs.
In T. Margaria and
B. Steffen, editors,
Proceedings of the 9th International Symposium on
Leveraging Applications of Formal Methods, Verification, and Validation
(ISoLA 2020, Rhodos, Greece, October 26-30), part 1,
LNCS 12476,
pages 449-470,
2020.
Springer.
doi:10.1007/978-3-030-61362-4_26
Keyword(s):
CPAchecker,
Software Model Checking,
Witness-Based Validation,
Witness-Based Validation (main)
Funding:
DFG-CONVEY
Publisher's Version
PDF
Presentation
Supplement
BibTeX Entry
@inproceedings{ISoLA20c, author = {Dirk Beyer and Karlheinz Friedberger}, title = {Violation Witnesses and Result Validation for Multi-Threaded Programs}, booktitle = {Proceedings of the 9th International Symposium on Leveraging Applications of Formal Methods, Verification, and Validation (ISoLA~2020, Rhodos, Greece, October 26-30), part~1}, editor = {T.~Margaria and B.~Steffen}, pages = {449-470}, year = {2020}, series = {LNCS~12476}, publisher = {Springer}, doi = {10.1007/978-3-030-61362-4_26}, sha256 = {65fc5325c4e77a80d8e47f9c0e7f0ac02379bfa15dcd9fb54d6587185b8efd77}, url = {https://www.sosy-lab.org/research/witnesses-concurrency/}, presentation = {https://www.sosy-lab.org/research/prs/2021-10-25_ISOLA21_ValidationMultiThreaded_Dirk.pdf}, abstract = {}, keyword = {CPAchecker,Software Model Checking,Witness-Based Validation,Witness-Based Validation (main)}, funding = {DFG-CONVEY}, } -
An Interface Theory for Program Verification.
In T. Margaria and
B. Steffen, editors,
Proceedings of the 9th International Symposium on
Leveraging Applications of Formal Methods, Verification, and Validation
(ISoLA 2020, Rhodos, Greece, October 26-30), part 1,
LNCS 12476,
pages 168-186,
2020.
Springer.
doi:10.1007/978-3-030-61362-4_9
Keyword(s):
CPAchecker,
Software Model Checking,
Interfaces for Component-Based Design
Funding:
DFG-CONVEY
Publisher's Version
PDF
Presentation
BibTeX Entry
@inproceedings{ISoLA20b, author = {Dirk Beyer and Sudeep Kanav}, title = {An Interface Theory for Program Verification}, booktitle = {Proceedings of the 9th International Symposium on Leveraging Applications of Formal Methods, Verification, and Validation (ISoLA~2020, Rhodos, Greece, October 26-30), part~1}, editor = {T.~Margaria and B.~Steffen}, pages = {168-186}, year = {2020}, series = {LNCS~12476}, publisher = {Springer}, doi = {10.1007/978-3-030-61362-4_9}, sha256 = {f15159da0e648a25e57c769639c989e68cd3407bfad10db5ee1dc25e1d2fd672}, url = {}, presentation = {https://www.sosy-lab.org/research/prs/2021-10-29_ISOLA21_VerificationInterfaces_Dirk.pdf}, abstract = {}, keyword = {CPAchecker,Software Model Checking,Interfaces for Component-Based Design}, funding = {DFG-CONVEY}, } -
Verification Artifacts in Cooperative Verification: Survey and Unifying Component Framework.
In T. Margaria and
B. Steffen, editors,
Proceedings of the 9th International Symposium on
Leveraging Applications of Formal Methods, Verification, and Validation
(ISoLA 2020, Rhodos, Greece, October 26-30), part 1,
LNCS 12476,
pages 143-167,
2020.
Springer.
doi:10.1007/978-3-030-61362-4_8
Keyword(s):
Software Model Checking,
Cooperative Verification
Funding:
DFG-COOP
Publisher's Version
PDF
Presentation
BibTeX Entry
@inproceedings{ISoLA20a, author = {Dirk Beyer and Heike Wehrheim}, title = {Verification Artifacts in Cooperative Verification: Survey and Unifying Component Framework}, booktitle = {Proceedings of the 9th International Symposium on Leveraging Applications of Formal Methods, Verification, and Validation (ISoLA~2020, Rhodos, Greece, October 26-30), part~1}, editor = {T.~Margaria and B.~Steffen}, pages = {143-167}, year = {2020}, series = {LNCS~12476}, publisher = {Springer}, doi = {10.1007/978-3-030-61362-4_8}, sha256 = {86dbfb5ee4875582566bdb5d44750cc935614c11c09627295cc3ff123115a75b}, url = {}, presentation = {https://www.sosy-lab.org/research/prs/2021-10-29_ISOLA21_VerificationArtifacts_Dirk.pdf}, abstract = {}, keyword = {Software Model Checking,Cooperative Verification}, funding = {DFG-COOP}, fundingid = {418257054}, } -
Difference Verification with Conditions.
In F. d. Boer and
A. Cerone, editors,
Proceedings of the 18th International Conference on
Software Engineering and Formal Methods (SEFM 2020, Virtual, Netherlands, September 14-18),
LNCS 12310,
pages 133-154,
2020.
Springer.
doi:10.1007/978-3-030-58768-0_8
Keyword(s):
CPAchecker,
Software Model Checking
Funding:
DFG-COOP,
DFG-CONVEY
Publisher's Version
PDF
Presentation
Video
Supplement
Abstract
Modern software-verification tools need to support development processes that involve frequent changes. Existing approaches for incremental verification hard-code specific verification techniques. Some of the approaches must be tightly intertwined with the development process. To solve this open problem, we present the concept of difference verification with conditions. Difference verification with conditions is independent from any specific verification technique and can be integrated in software projects at any time. It first applies a change analysis that detects which parts of a software were changed between revisions and encodes that information in a condition. Based on this condition, an off-the-shelf verifier is used to verify only those parts of the software that are influenced by the changes. As a proof of concept, we propose a simple, syntax-based change analysis and use difference verification with conditions with three off-the-shelf verifiers. An extensive evaluation shows the competitiveness of difference verification with conditions.BibTeX Entry
@inproceedings{SEFM20b, author = {Dirk Beyer and Marie-Christine Jakobs and Thomas Lemberger}, title = {Difference Verification with Conditions}, booktitle = {Proceedings of the 18th International Conference on Software Engineering and Formal Methods (SEFM~2020, Virtual, Netherlands, September 14-18)}, editor = {F.~d.~Boer and A.~Cerone}, pages = {133--154}, year = {2020}, series = {LNCS~12310}, publisher = {Springer}, doi = {10.1007/978-3-030-58768-0_8}, sha256 = {8e5219da9a998b26f59013c809fbb1db6f92e3f08125fa1bfaacafcfafafef7f}, url = {https://www.sosy-lab.org/research/difference/}, presentation = {https://www.sosy-lab.org/research/prs/2020-09-17_SEFM20_DifferenceVerificationWithConditions_Thomas.pdf}, abstract = {Modern software-verification tools need to support development processes that involve frequent changes. Existing approaches for incremental verification hard-code specific verification techniques. Some of the approaches must be tightly intertwined with the development process. To solve this open problem, we present the concept of difference verification with conditions. Difference verification with conditions is independent from any specific verification technique and can be integrated in software projects at any time. It first applies a change analysis that detects which parts of a software were changed between revisions and encodes that information in a condition. Based on this condition, an off-the-shelf verifier is used to verify only those parts of the software that are influenced by the changes. As a proof of concept, we propose a simple, syntax-based change analysis and use difference verification with conditions with three off-the-shelf verifiers. An extensive evaluation shows the competitiveness of difference verification with conditions.}, keyword = {CPAchecker,Software Model Checking}, funding = {DFG-COOP,DFG-CONVEY}, isbnnote = {}, video = {https://youtu.be/dG02602c9oo}, } -
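Note: The SEFM 2020 entry above on difference verification with conditions re-verifies only those parts of a program that are affected by a change between revisions. The tiny C example below is a hypothetical illustration of that setting, not an artifact of the paper: only one branch differs between the two revisions, so a condition derived from a syntax-based change analysis would let an off-the-shelf verifier skip executions that stay in unchanged code. The function names, the SV-COMP-style input convention (__VERIFIER_nondet_int and reach_error, provided by the verification environment), and the property are made up.

/* Hypothetical two-revision example for difference verification with
   conditions; everything here is illustrative. */
extern int __VERIFIER_nondet_int(void);
extern void reach_error(void);

static int classify(int x) {
    if (x >= 0) {
        return 1;   /* unchanged between revisions: verified previously */
    } else {
        return 2;   /* changed in the new revision: must be re-checked */
    }
}

int main(void) {
    int v = __VERIFIER_nondet_int();
    /* Property: classify() never returns a non-positive value. A condition
       encoding the syntactic diff steers re-verification towards executions
       that reach the changed else-branch, i.e., v < 0. */
    if (classify(v) <= 0) {
        reach_error();
    }
    return 0;
}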
FRed: Conditional Model Checking via Reducers and Folders.
In F. d. Boer and
A. Cerone, editors,
Proceedings of the 18th International Conference on
Software Engineering and Formal Methods (SEFM 2020, Virtual, Netherlands, September 14-18),
LNCS 12310,
pages 113-132,
2020.
Springer.
doi:10.1007/978-3-030-58768-0_7
Keyword(s):
CPAchecker,
Software Model Checking
Funding:
DFG-COOP
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{SEFM20a, author = {Dirk Beyer and Marie-Christine Jakobs}, title = {{{\sc FRed}}: {C}onditional Model Checking via Reducers and Folders}, booktitle = {Proceedings of the 18th International Conference on Software Engineering and Formal Methods (SEFM~2020, Virtual, Netherlands, September 14-18)}, editor = {F.~d.~Boer and A.~Cerone}, pages = {113--132}, year = {2020}, series = {LNCS~12310}, publisher = {Springer}, doi = {10.1007/978-3-030-58768-0_7}, sha256 = {0ce35cbde24d7a9de0513b89f23a81147bf4f8d5880effd57742c7f195e0eeec}, url = {https://www.sosy-lab.org/research/fred/}, abstract = {}, keyword = {CPAchecker,Software Model Checking}, funding = {DFG-COOP}, isbnnote = {}, } -
MetaVal: Witness Validation via Verification.
In S. K. Lahiri and
C. Wang, editors,
Proceedings of the 32nd International Conference on
Computer Aided Verification (CAV 2020, Virtual, USA, July 21-24), part 2,
LNCS 12225,
pages 165-177,
2020.
Springer.
doi:10.1007/978-3-030-53291-8_10
Keyword(s):
CPAchecker,
Software Model Checking,
Witness-Based Validation,
Witness-Based Validation (main)
Funding:
DFG-CONVEY
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{CAV20, author = {Dirk Beyer and Martin Spiessl}, title = {MetaVal: {W}itness Validation via Verification}, booktitle = {Proceedings of the 32nd International Conference on Computer Aided Verification (CAV~2020, Virtual, USA, July 21-24), part 2}, editor = {S.~K.~Lahiri and C.~Wang}, pages = {165-177}, year = {2020}, series = {LNCS~12225}, publisher = {Springer}, doi = {10.1007/978-3-030-53291-8_10}, sha256 = {7431085a248c7e2cab70318096622ff19ce1124067158d08866d3f9b250df44e}, url = {https://gitlab.com/sosy-lab/software/metaval}, abstract = {}, keyword = {CPAchecker,Software Model Checking,Witness-Based Validation,Witness-Based Validation (main)}, funding = {DFG-CONVEY}, isbnnote = {978-3-030-53290-1}, } -
Advances in Automatic Software Verification: SV-COMP 2020.
In Proceedings of the 26th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2020, Dublin, Ireland, April 25-30), part 2,
LNCS 12079,
pages 347-367,
2020.
Springer.
doi:10.1007/978-3-030-45237-7_21
Keyword(s):
Competition on Software Verification (SV-COMP),
Competition on Software Verification (SV-COMP Report),
Software Model Checking
Publisher's Version
PDF
Supplement
Artifact(s)
BibTeX Entry
@inproceedings{TACAS20c, author = {Dirk Beyer}, title = {Advances in Automatic Software Verification: {SV-COMP 2020}}, booktitle = {Proceedings of the 26th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2020, Dublin, Ireland, April 25-30), part 2}, pages = {347-367}, year = {2020}, series = {LNCS~12079}, publisher = {Springer}, doi = {10.1007/978-3-030-45237-7_21}, sha256 = {2a0cc56934c8fb6d100039b527e8c09f421ca351e4c90ec531aa2accb04504c6}, url = {https://sv-comp.sosy-lab.org/2020/}, abstract = {}, keyword = {Competition on Software Verification (SV-COMP),Competition on Software Verification (SV-COMP Report),Software Model Checking}, artifact1 = {10.5281/zenodo.3633334}, artifact2 = {10.5281/zenodo.3630205}, artifact3 = {10.5281/zenodo.3630188}, artifact4 = {10.5281/zenodo.3574420}, } -
CPU Energy Meter: A Tool for Energy-Aware Algorithms Engineering.
In Proceedings of the 26th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2020, Dublin, Ireland, April 25-30), part 2,
LNCS 12079,
pages 126-133,
2020.
Springer.
doi:10.1007/978-3-030-45237-7_8
Keyword(s):
Benchmarking
Publisher's Version
PDF
Presentation
Video
Supplement
Abstract
Verification algorithms are among the most resource-intensive computation tasks. Saving energy is important for our living environment and to save cost in data centers. Yet, researchers compare the efficiency of algorithms still in terms of consumption of CPU time (or even wall time). Perhaps one reason for this is that measuring energy consumption of computational processes is not as convenient as measuring the consumed time and there is no sufficient tool support. To close this gap, we contribute CPU Energy Meter, a small tool that takes care of reading the energy values that Intel CPUs track inside the chip. In order to make energy measurements as easy as possible, we integrated CPU Energy Meter into BenchExec, a benchmarking tool that is already used by many researchers and competitions in the domain of formal methods. As evidence for usefulness, we explored the energy consumption of some state-of-the-art verifiers and report some interesting insights, for example, that energy consumption is not necessarily correlated with CPU time.BibTeX Entry
@inproceedings{TACAS20b, author = {Dirk Beyer and Philipp Wendler}, title = {CPU Energy Meter: A Tool for Energy-Aware Algorithms Engineering}, booktitle = {Proceedings of the 26th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2020, Dublin, Ireland, April 25-30), part 2}, pages = {126-133}, year = {2020}, series = {LNCS~12079}, publisher = {Springer}, doi = {10.1007/978-3-030-45237-7_8}, sha256 = {c5c8ad06f4b192e61799469a8fc6ca4661714aa2945e0ce07363a376ff06dcd7}, url = {https://www.sosy-lab.org/research/energy-measurement/}, presentation = {https://www.sosy-lab.org/research/prs/2021-03-31_TACAS20_CPU-Energy-Meter_Dirk.pdf}, abstract = {Verification algorithms are among the most resource-intensive computation tasks. Saving energy is important for our living environment and to save cost in data centers. Yet, researchers compare the efficiency of algorithms still in terms of consumption of CPU time (or even wall time). Perhaps one reason for this is that measuring energy consumption of computational processes is not as convenient as measuring the consumed time and there is no sufficient tool support. To close this gap, we contribute CPU Energy Meter, a small tool that takes care of reading the energy values that Intel CPUs track inside the chip. In order to make energy measurements as easy as possible, we integrated CPU Energy Meter into BenchExec, a benchmarking tool that is already used by many researchers and competitions in the domain of formal methods. As evidence for usefulness, we explored the energy consumption of some state-of-the-art verifiers and report some interesting insights, for example, that energy consumption is not necessarily correlated with CPU time.}, keyword = {Benchmarking}, video = {https://youtu.be/qzKAoBVTw2c}, } -
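Note: The CPU Energy Meter entry above reads the energy counters that Intel CPUs track on-chip (RAPL) and integrates the measurements into BenchExec. As a rough illustration of the underlying measurement principle only, not of the tool's actual implementation, the following C sketch samples the package-energy counter that Linux typically exposes through the powercap sysfs interface; the file path, the sleep() stand-in for the measured workload, and the assumption that the counter is readable (often root-only) are assumptions about a common Linux setup.

/* Minimal sketch (not CPU Energy Meter itself): sample the Intel RAPL
   package-energy counter exposed by the Linux powercap interface. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static long long read_energy_uj(const char *path) {
    FILE *f = fopen(path, "r");
    long long uj = -1;
    if (f != NULL) {
        if (fscanf(f, "%lld", &uj) != 1) {
            uj = -1;
        }
        fclose(f);
    }
    return uj;
}

int main(void) {
    const char *counter = "/sys/class/powercap/intel-rapl:0/energy_uj";
    long long before = read_energy_uj(counter);
    sleep(5);                    /* stand-in for the workload to be measured */
    long long after = read_energy_uj(counter);
    if (before < 0 || after < 0) {
        fprintf(stderr, "could not read RAPL counter\n");
        return EXIT_FAILURE;
    }
    /* The counter wraps around at max_energy_range_uj; a robust tool would
       detect and compensate for the wrap-around. */
    printf("consumed energy: %.3f J\n", (after - before) / 1e6);
    return EXIT_SUCCESS;
}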
Software Verification with PDR: An Implementation of the State of the Art.
In Proceedings of the 26th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2020, Dublin, Ireland, April 25-30), part 1,
LNCS 12078,
pages 3-21,
2020.
Springer.
doi:10.1007/978-3-030-45190-5_1
Keyword(s):
Software Model Checking,
CPAchecker
Publisher's Version
PDF
Presentation
Video
Supplement
BibTeX Entry
@inproceedings{TACAS20a, author = {Dirk Beyer and Matthias Dangl}, title = {Software Verification with {PDR}: An Implementation of the State of the Art}, booktitle = {Proceedings of the 26th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2020, Dublin, Ireland, April 25-30), part 1}, pages = {3-21}, year = {2020}, series = {LNCS~12078}, publisher = {Springer}, doi = {10.1007/978-3-030-45190-5_1}, sha256 = {fbd54433b42cb4411ddf6d73eb198507a6d35d8b6581b000be30aed84633e204}, url = {https://www.sosy-lab.org/research/pdr-compare/}, presentation = {https://www.sosy-lab.org/research/prs/2021-03-31_TACAS20_PDR-for-Software_Dirk.pdf}, abstract = {}, keyword = {Software Model Checking,CPAchecker}, video = {https://youtu.be/Wxqd92sdHBE}, } -
Second Competition on Software Testing: Test-Comp 2020.
In Proceedings of the 23rd International Conference on
Fundamental Approaches to Software Engineering (FASE 2020, Dublin, Ireland, April 25-30),
LNCS 12076,
pages 505-519,
2020.
Springer.
doi:10.1007/978-3-030-45234-6_25
Keyword(s):
Competition on Software Testing (Test-Comp),
Competition on Software Testing (Test-Comp Report),
Software Testing
Publisher's Version
PDF
Supplement
Artifact(s)
BibTeX Entry
@inproceedings{FASE20, author = {Dirk Beyer}, title = {Second Competition on Software Testing: {Test-Comp 2020}}, booktitle = {Proceedings of the 23rd International Conference on Fundamental Approaches to Software Engineering (FASE~2020, Dublin, Ireland, April 25-30)}, pages = {505-519}, year = {2020}, series = {LNCS~12076}, publisher = {Springer}, doi = {10.1007/978-3-030-45234-6_25}, sha256 = {296b4caf885ae029e388c2ef8fd032f1ab55c07d5e8ea1064f2e50c08f5d6919}, url = {https://test-comp.sosy-lab.org/2020/}, abstract = {}, keyword = {Competition on Software Testing (Test-Comp),Competition on Software Testing (Test-Comp Report),Software Testing}, artifact1 = {10.5281/zenodo.3678250}, artifact2 = {10.5281/zenodo.3678264}, artifact3 = {10.5281/zenodo.3678275}, artifact4 = {10.5281/zenodo.3574420}, } -
Cooperative Test-Case Generation with Verifiers.
In M. Felderer,
W. Hasselbring,
R. Rabiser, and
R. Jung, editors,
Proceedings of the Conference on
Software Engineering (SE 2020, Innsbruck, Austria, February 24-28),
LNI P-300,
pages 107-108,
2020.
GI.
doi:10.18420/SE2020_31
Keyword(s):
CPAchecker,
Software Model Checking,
Software Testing
Publisher's Version
BibTeX Entry
@inproceedings{SE20, author = {Dirk Beyer and Marie-Christine Jakobs}, title = {Cooperative Test-Case Generation with Verifiers}, booktitle = {Proceedings of the Conference on Software Engineering (SE~2020, Innsbruck, Austria, February 24-28)}, editor = {M.~Felderer and W.~Hasselbring and R.~Rabiser and R.~Jung}, pages = {107--108}, year = {2020}, series = {{LNI}~P-300}, publisher = {{GI}}, doi = {10.18420/SE2020_31}, sha256 = {}, pdf = {}, presentation = {}, abstract = {}, keyword = {CPAchecker,Software Model Checking,Software Testing}, annote = {This is a summary of a <a href="https://www.sosy-lab.org/research/bib/Year/2019.html#FASE19">full article on this topic</a> that appeared in Proc. FASE 2019.}, isbnnote = {978-3-88579-694-7}, }Additional Infos
This is a summary of a full article on this topic that appeared in Proc. FASE 2019. -
Legion: Best-First Concolic Testing.
In Proc. of Automated Software Engineering (ASE),
pages 54-65,
2020.
IEEE.
PDF
BibTeX Entry
@inproceedings{ernst:ase2020, author = {Dongge Liu and Gidon Ernst and Toby Murray and Ben Rubinstein}, title = {Legion: Best-First Concolic Testing}, booktitle = {Proc. of Automated Software Engineering (ASE)}, pages = {54--65}, year = {2020}, publisher = {IEEE}, pdf = {https://arxiv.org/abs/2002.06311}, } -
Legion: Best-First Concolic Testing (Competition Contribution).
In Proc. of Fundamental Approaches to Software Engineering (FASE),
pages 545-549,
2020.
BibTeX Entry
@inproceedings{ernst:testcomp2020, author = {Dongge Liu and Gidon Ernst and Toby Murray and Benjamin Rubinstein}, title = {Legion: Best-First Concolic Testing (Competition Contribution).}, booktitle = {Proc. of Fundamental Approaches to Software Engineering (FASE)}, pages = {545--549}, year = {2020}, } -
Information Flow Testing of a PGP Keyserver.
In Proc. of the VerifyThis Long-term Challenge 2020,
pages 11-13,
2020.
KIT Library.
Technical Report.
BibTeX Entry
@inproceedings{ernst:vtltc2020-iftesting, author = {Gidon Ernst and Lukas Rieger}, title = {{Information Flow Testing of a PGP Keyserver}}, booktitle = {{Proc. of the VerifyThis Long-term Challenge 2020}}, pages = {11--13}, year = {2020}, publisher = {KIT Library}, note = {Technical Report.}, } -
Verifying the Security of a PGP Keyserver.
In Proc. of the VerifyThis Long-term Challenge 2020,
pages 14-16,
2020.
KIT Library.
Technical Report.
BibTeX Entry
@inproceedings{ernst:vtltc2020-ifverify, author = {Gidon Ernst and Toby Murray and Mukesh Tiwari}, title = {{Verifying the Security of a PGP Keyserver}}, booktitle = {{Proc. of the VerifyThis Long-term Challenge 2020}}, pages = {14--16}, year = {2020}, publisher = {KIT Library}, note = {Technical Report.}, } -
ARCH-COMP 2020 category report: Falsification.
In Proc. of Applied Verification of Continuous and Hybrid Systems (ARCH),
EPiC,
pages 140-152,
2020.
EasyChair.
BibTeX Entry
@inproceedings{ernst:arch2020, author = {Gidon Ernst and others}, title = {{ARCH-COMP} 2020 category report: Falsification}, booktitle = {Proc. of Applied Verification of Continuous and Hybrid Systems (ARCH)}, volume = {74}, pages = {140--152}, year = {2020}, series = {EPiC}, publisher = {EasyChair}, } -
Rigorous Engineering of Collective Adaptive Systems: Introduction to the 3rd Track Edition.
In Tiziana Margaria and
Bernhard Steffen, editors,
Leveraging Applications of Formal Methods, Verification and Validation: Engineering Principles - 9th International Symposium on Leveraging
Applications of Formal Methods, ISoLA 2020, Rhodes, Greece, October 20-30, 2020, Proceedings, Part II,
LNCS 12477,
pages 161-170,
2020.
Springer.
doi:10.1007/978-3-030-61470-6_10
Publisher's Version
PDF
BibTeX Entry
@inproceedings{DBLP:conf/isola/WirsingNJ20, author = {Martin Wirsing and Rocco De Nicola and Stefan J{\"{a}}hnichen}, title = {Rigorous Engineering of Collective Adaptive Systems Introduction to the 3rd Track Edition}, booktitle = {Leveraging Applications of Formal Methods, Verification and Validation: Engineering Principles - 9th International Symposium on Leveraging Applications of Formal Methods, ISoLA 2020, Rhodes, Greece, October 20-30, 2020, Proceedings, Part {II}}, editor = {Tiziana Margaria and Bernhard Steffen}, pages = {161--170}, year = {2020}, series = {LNCS~12477}, publisher = {Springer}, doi = {10.1007/978-3-030-61470-6\_10}, } -
A Dynamic Logic for Systems with Predicate-Based Communication.
In Tiziana Margaria and
Bernhard Steffen, editors,
Leveraging Applications of Formal Methods, Verification and Validation: Engineering Principles - 9th International Symposium on Leveraging
Applications of Formal Methods, ISoLA 2020, Rhodes, Greece, October 20-30, 2020, Proceedings, Part II,
LNCS 12477,
pages 224-242,
2020.
Springer.
doi:10.1007/978-3-030-61470-6_14
Publisher's Version
PDF
BibTeX Entry
@inproceedings{DBLP:conf/isola/HennickerW20, author = {Rolf Hennicker and Martin Wirsing}, title = {A Dynamic Logic for Systems with Predicate-Based Communication}, booktitle = {Leveraging Applications of Formal Methods, Verification and Validation: Engineering Principles - 9th International Symposium on Leveraging Applications of Formal Methods, ISoLA 2020, Rhodes, Greece, October 20-30, 2020, Proceedings, Part {II}}, editor = {Tiziana Margaria and Bernhard Steffen}, pages = {224--242}, year = {2020}, series = {LNCS~12477}, publisher = {Springer}, doi = {10.1007/978-3-030-61470-6\_14}, }
2019
-
TestCov: Robust Test-Suite Execution and Coverage Measurement.
In Proceedings of the 34th IEEE/ACM International Conference on Automated Software Engineering (ASE 2019, San Diego, CA, USA, November 11-15),
pages 1074-1077,
2019.
IEEE.
doi:10.1109/ASE.2019.00105
Keyword(s):
Software Testing
Funding:
DFG-COOP
Publisher's Version
PDF
Presentation
BibTeX Entry
@inproceedings{ASE19, author = {Dirk Beyer and Thomas Lemberger}, title = {{T}est{C}ov: Robust Test-Suite Execution and Coverage Measurement}, booktitle = {Proceedings of the 34th IEEE/ACM International Conference on Automated Software Engineering (ASE 2019, San Diego, CA, USA, November 11-15)}, pages = {1074-1077}, year = {2019}, publisher = {IEEE}, doi = {10.1109/ASE.2019.00105}, sha256 = {}, pdf = {https://www.sosy-lab.org/research/pub/2019-ASE.TestCov_Robust_Test-Suite_Execution_and_Coverage_Measurement.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2019-11-12_ASE19_TestCov_Thomas_Lemberger.pdf}, keyword = {Software Testing}, funding = {DFG-COOP}, isbnnote = {978-1-7281-2508-4}, } -
Conditional Testing - Off-the-Shelf Combination of Test-Case Generators.
In Yu-Fang Chen,
Chih-Hong Cheng, and
Javier Esparza, editors,
Proceedings of the 17th International Symposium on
Automated Technology for Verification and Analysis (ATVA 2019, Taipei, Taiwan, October 28-31),
LNCS 11781,
pages 189-208,
2019.
Springer.
doi:10.1007/978-3-030-31784-3_11
Keyword(s):
Software Testing
Funding:
DFG-COOP
Publisher's Version
PDF
Presentation
Supplement
BibTeX Entry
@inproceedings{ATVA19, author = {Dirk Beyer and Thomas Lemberger}, title = {Conditional Testing - Off-the-Shelf Combination of Test-Case Generators}, booktitle = {Proceedings of the 17th International Symposium on Automated Technology for Verification and Analysis (ATVA~2019, Taipei, Taiwan, October 28-31)}, editor = {Yu{-}Fang Chen and Chih{-}Hong Cheng and Javier Esparza}, pages = {189-208}, year = {2019}, series = {LNCS~11781}, publisher = {Springer}, doi = {10.1007/978-3-030-31784-3_11}, sha256 = {}, url = {https://www.sosy-lab.org/research/conditional-testing/}, pdf = {https://www.sosy-lab.org/research/pub/2019-ATVA.Conditional_Testing_Off-the-Shelf_Combination_of_Test-Case_Generators.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2019-10-29_ATVA19_Conditional_Testing_Thomas_Lemberger.pdf}, keyword = {Software Testing}, funding = {DFG-COOP}, } -
A Data Set of Program Invariants and Error Paths.
In Proceedings of the 2019 IEEE/ACM 16th International Conference on
Mining Software Repositories (MSR 2019, Montreal, Canada, May 26-27),
pages 111-115,
2019.
IEEE.
doi:10.1109/MSR.2019.00026
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{MSR19, author = {Dirk Beyer}, title = {A Data Set of Program Invariants and Error Paths}, booktitle = {Proceedings of the 2019 IEEE/ACM 16th International Conference on Mining Software Repositories (MSR~2019, Montreal, Canada, May 26-27)}, pages = {111-115}, year = {2019}, publisher = {IEEE}, doi = {10.1109/MSR.2019.00026}, sha256 = {}, url = {https://doi.org/10.5281/zenodo.2559175}, pdf = {https://www.sosy-lab.org/research/pub/2019-MSR.A_Data_Set_of_Program_Invariants_and_Error_Paths.pdf}, } -
International Competition on Software Testing (Test-Comp).
In Proceedings of the 25th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2019, Prague, Czech Republic, April 6-11), part 3,
LNCS 11429,
pages 167-175,
2019.
Springer.
doi:10.1007/978-3-030-17502-3_11
Keyword(s):
Competition on Software Testing (Test-Comp),
Competition on Software Testing (Test-Comp Report),
Software Testing
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{TACAS19c, author = {Dirk Beyer}, title = {International Competition on Software Testing (Test-Comp)}, booktitle = {Proceedings of the 25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2019, Prague, Czech Republic, April 6-11), part 3}, pages = {167-175}, year = {2019}, series = {LNCS~11429}, publisher = {Springer}, doi = {10.1007/978-3-030-17502-3_11}, sha256 = {80ba1d656e40b44c40e756010ccd32db5aad71820cd746b264f70244477fc737}, url = {https://test-comp.sosy-lab.org/2019/}, keyword = {Competition on Software Testing (Test-Comp),Competition on Software Testing (Test-Comp Report),Software Testing}, } -
Automatic Verification of C and Java Programs: SV-COMP 2019.
In Proceedings of the 25th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2019, Prague, Czech Republic, April 6-11), part 3,
LNCS 11429,
pages 133-155,
2019.
Springer.
doi:10.1007/978-3-030-17502-3_9
Keyword(s):
Competition on Software Verification (SV-COMP),
Competition on Software Verification (SV-COMP Report),
Software Model Checking
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{TACAS19b, author = {Dirk Beyer}, title = {Automatic Verification of {C} and Java Programs: {SV-COMP} 2019}, booktitle = {Proceedings of the 25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2019, Prague, Czech Republic, April 6-11), part 3}, pages = {133-155}, year = {2019}, series = {LNCS~11429}, publisher = {Springer}, doi = {10.1007/978-3-030-17502-3_9}, sha256 = {3ded73753689c5a68001ad42c27c2a0071f0d13546ffb8c4780891a16d9cabc7}, url = {https://sv-comp.sosy-lab.org/2019/}, keyword = {Competition on Software Verification (SV-COMP),Competition on Software Verification (SV-COMP Report),Software Model Checking}, } -
TOOLympics 2019: An Overview of Competitions in Formal Methods.
In Proceedings of the 25th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2019, Prague, Czech Republic, April 6-11), part 3,
LNCS 11429,
pages 3-24,
2019.
Springer.
doi:10.1007/978-3-030-17502-3_1
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{TACAS19a, author = {E.~Bartocci and D.~Beyer and P.~E.~Black and G.~Fedyukovich and H.~Garavel and A.~Hartmanns and M.~Huisman and F.~Kordon and J.~Nagele and M.~Sighireanu and B.~Steffen and M.~Suda and G.~Sutcliffe and T.~Weber and A.~Yamada}, title = {{TOOLympics} 2019: An Overview of Competitions in Formal Methods}, booktitle = {Proceedings of the 25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2019, Prague, Czech Republic, April 6-11), part 3}, pages = {3-24}, year = {2019}, series = {LNCS~11429}, publisher = {Springer}, doi = {10.1007/978-3-030-17502-3_1}, sha256 = {1659009075a34066ea759286b122c9d96c6f21f6a23479fff2b8847c88482a71}, url = {https://tacas.info/toolympics.php}, } -
CoVeriTest: Cooperative Verifier-Based Testing.
In Proceedings of the 22nd International Conference on
Fundamental Approaches to Software Engineering (FASE 2019, Prague, Czech Republic, April 6-11),
LNCS 11424,
pages 389-408,
2019.
Springer.
doi:10.1007/978-3-030-16722-6_23
Keyword(s):
CPAchecker,
Software Model Checking,
Software Testing
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{FASE19, author = {Dirk Beyer and Marie-Christine Jakobs}, title = {CoVeriTest: Cooperative Verifier-Based Testing}, booktitle = {Proceedings of the 22nd International Conference on Fundamental Approaches to Software Engineering (FASE~2019, Prague, Czech Republic, April 6-11)}, pages = {389-408}, year = {2019}, series = {LNCS~11424}, publisher = {Springer}, doi = {10.1007/978-3-030-16722-6_23}, sha256 = {ee64749fba4796ed79cecfaa500731ef2ac5d5e795770c44b1e7ad358f955398}, url = {https://www.sosy-lab.org/research/coop-testgen/}, keyword = {CPAchecker,Software Model Checking,Software Testing}, } -
Combining Verifiers in Conditional Model Checking via Reducers.
In S. Becker,
I. Bogicevic,
G. Herzwurm, and
S. Wagner, editors,
Proceedings of the Conference on
Software Engineering and Software Management (SE/SWM 2019, Stuttgart, Germany, February 18-22),
LNI P-292,
pages 151-152,
2019.
GI.
doi:10.18420/se2019-46
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Presentation
Abstract
Software verification received lots of attention in the past two decades. Nonetheless, it remains an extremely difficult problem. Some verification tasks cannot be solved automatically by any of today’s verifiers. To still verify such tasks, one can combine the strengths of different verifiers. A promising approach to create combinations is conditional model checking (CMC). In CMC, the first verifier outputs a condition that describes the parts of the program state space that it successfully verified, and the next verifier uses that condition to steer its exploration towards the unverified state space. Despite the benefits of CMC, only few verifiers can handle conditions. To overcome this problem, we propose an automatic plug-and-play extension for verifiers. Instead of modifying verifiers, we suggest to add a preprocessor: the reducer. The reducer takes the condition and the original program and computes a residual program that encodes the unverified state space in program code. We developed one such reducer and use it to integrate existing verifiers and test-case generators into the CMC process. Our experiments show that we can solve many additional verification tasks with this reducer-based construction.BibTeX Entry
@inproceedings{SE19, author = {Dirk Beyer and Marie-Christine Jakobs and Thomas Lemberger and Heike Wehrheim}, title = {Combining Verifiers in Conditional Model Checking via Reducers}, booktitle = {Proceedings of the Conference on Software Engineering and Software Management (SE/SWM~2019, Stuttgart, Germany, February 18-22)}, editor = {S.~Becker and I.~Bogicevic and G.~Herzwurm and S.~Wagner}, pages = {151--152}, year = {2019}, series = {{LNI}~P-292}, publisher = {{GI}}, doi = {10.18420/se2019-46}, sha256 = {}, pdf = {https://www.sosy-lab.org/research/pub/2019-SE.Combining_Verifiers_in_Conditional_Model_Checking_via_Reducers.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2019-02-22_SE19_CombiningVerifiersInConditionalModelChecking_Marie.pdf}, abstract = {Software verification received lots of attention in the past two decades. Nonetheless, it remains an extremely difficult problem. Some verification tasks cannot be solved automatically by any of today’s verifiers. To still verify such tasks, one can combine the strengths of different verifiers. A promising approach to create combinations is conditional model checking (CMC). In CMC, the first verifier outputs a condition that describes the parts of the program state space that it successfully verified, and the next verifier uses that condition to steer its exploration towards the unverified state space. Despite the benefits of CMC, only few verifiers can handle conditions. To overcome this problem, we propose an automatic plug-and-play extension for verifiers. Instead of modifying verifiers, we suggest to add a preprocessor: the reducer. The reducer takes the condition and the original program and computes a residual program that encodes the unverified state space in program code. We developed one such reducer and use it to integrate existing verifiers and test-case generators into the CMC process. Our experiments show that we can solve many additional verification tasks with this reducer-based construction.}, keyword = {CPAchecker,Software Model Checking}, annote = {This is a summary of a <a href="https://www.sosy-lab.org/research/bib/Year/2018.html#ICSE18">full article on this topic</a> that appeared in Proc. ICSE 2018.}, }Additional Infos
This is a summary of a full article on this topic that appeared in Proc. ICSE 2018. -
Fast Falsification of Hybrid Systems using Probabilistically Adaptive Input.
In Proc. of Quantitative Evaluation of Systems (QEST),
LNCS,
pages 165-181,
2019.
Springer.
PDF
BibTeX Entry
@inproceedings{ernst:qest2019, author = {Gidon Ernst and Sean Sedwards and Zhenya Zhang and Ichiro Hasuo}, title = {Fast Falsification of Hybrid Systems using Probabilistically Adaptive Input}, booktitle = {Proc. of Quantitative Evaluation of Systems (QEST)}, volume = {11785}, pages = {165--181}, year = {2019}, series = {LNCS}, publisher = {Springer}, pdf = {https://arxiv.org/abs/1812.04159}, } -
SecCSL: Security Concurrent Separation Logic.
In Proc. of Computer Aided Verification (CAV),
LNCS,
pages 208-230,
2019.
Springer.
PDF
BibTeX Entry
@inproceedings{ernst:cav2019, author = {Gidon Ernst and Toby Murray}, title = {{SecCSL: Security Concurrent Separation Logic}}, booktitle = {Proc. of Computer Aided Verification (CAV)}, volume = {11562}, pages = {208--230}, year = {2019}, series = {LNCS}, publisher = {Springer}, pdf = {https://www.sosy-lab.org/research/pub/2019-CAV.SecCSL_Security_Concurrent_Separation_Logic.pdf}, } -
ARCH-COMP19 Category Report: Results on the Falsification Benchmarks.
In Proc. of Applied Verification of Continuous and Hybrid Systems (ARCH),
EPiC,
pages 129-140,
2019.
EasyChair.
doi:10.29007/68dk
Publisher's Version
PDF
BibTeX Entry
@inproceedings{ernst:arch2019, author = {Gidon Ernst and Paolo Arcaini and Alexandre Donze and Georgios Fainekos and Logan Mathesen and Gulia Pedrielli and Shakiba Yaghoubi and Yoriyuki Yamagata and Zhenya Zhang}, title = {{ARCH-COMP19 Category Report: Results on the Falsification Benchmarks}}, booktitle = {Proc. of Applied Verification of Continuous and Hybrid Systems (ARCH)}, volume = {61}, pages = {129--140}, year = {2019}, series = {EPiC}, publisher = {EasyChair}, doi = {10.29007/68dk}, pdf = {https://www.sosy-lab.org/research/pub/2019-ARCH.Category_Report_Falsification.pdf}, } -
VerifyThis - Verification Competition with a Human Factor.
In Proc. of Tools and Algorithms for the Construction and Analysis of Systems (TACAS),
LNCS,
2019.
Springer.
PDF
BibTeX Entry
@inproceedings{ernst:toolympics2019, author = {Gidon Ernst and Marieke Huisman and Wojciech Mostowski and Matthias Ulbrich}, title = {{VerifyThis -- Verification Competition with a Human Factor}}, booktitle = {Proc. of Tools and Algorithms for the Construction and Analysis of Systems (TACAS)}, volume = {11429}, year = {2019}, series = {LNCS}, publisher = {Springer}, pdf = {https://www.sosy-lab.org/research/pub/2019-TACAS.VerifyThis-Verification_Competition_with_a_Human_Factor.pdf}, } -
Behavioural and Abstractor Specifications for a Dynamic Logic with Binders and Silent Transitions.
In Proceedings of the International Workshop on Data Learning and Inference (DALI 2019, San Sebastian, Spain, September 03-06),
LNCS,
2019.
Springer.
(to appear)
PDF
Abstract
We extend dynamic logic with binders (for state variables) by distinguishing between observable and silent transitions. This differentiation gives rise to two kinds of observational interpretations of the logic: abstractor and behavioural specifications. Abstractor specifications relax the standard model class semantics of a specification by considering its closure under weak bisimulation. Behavioural specifications, however, rely on a behavioural satisfaction relation which relaxes the interpretation of state variables and the satisfaction of modal formulas 〈α〉φ and [α]φ by abstracting from silent transitions. A formal relation between abstractor and behavioural specifications is provided which shows that both coincide semantically under mild conditions. For the proof we instantiate the previously introduced concept of a behaviour-abstractor framework to the case of dynamic logic with binders and silent transitions.BibTeX Entry
@inproceedings{DaLi19, author = {Rolf Hennicker and Alexander Knapp and Alexandre Madeira and Felix Mindt}, title = {Behavioural and Abstractor Specifications for a Dynamic Logic with Binders and Silent Transitions}, booktitle = {Proceedings of the International Workshop on Data Learning and Inference (DALI~2019, San Sebastian, Spain, September 03-06)}, year = {2019}, series = {{LCNS}}, publisher = {Springer}, pdf = {https://www.sosy-lab.org/research/pub/2019-DALI.Behavioural_and_Abstractor_Specifications_for_a_Dynamic_Logic_with_Binders_and_Silent_Transitions.pdf}, abstract = {We extend dynamic logic with binders (for state variables) by distinguishing between observable and silent transitions. This differentiation gives rise to two kinds of observational interpretations of the logic: abstractor and behavioural specifications. Abstractor specifications relax the standard model class semantics of a specification by considering its closure under weak bisimulation. Behavioural specifications, however, rely on a behavioural satisfaction relation which relaxes the interpretation of state variables and the satisfaction of modal formulas ⟨α⟩φ and [α]φ by abstracting from silent transitions. A formal relation between abstractor and behavioural specifications is provided which shows that both coincide semantically under mild conditions. For the proof we instantiate the previously introduced concept of a behaviour-abstractor framework to the case of dynamic logic with binders and silent transitions.}, note = {(to appear)}, }
2018
-
In-Place vs. Copy-on-Write CEGAR Refinement for Block Summarization with Caching.
In T. Margaria and
B. Steffen, editors,
Proceedings of the 8th International Symposium on
Leveraging Applications of Formal Methods, Verification, and Validation
(ISoLA 2018, Part 2, Limassol, Cyprus, November 5-9),
LNCS 11245,
pages 197-215,
2018.
Springer.
doi:10.1007/978-3-030-03421-4_14
Keyword(s):
CPAchecker,
Software Model Checking,
BAM
Publisher's Version
PDF
Presentation
Supplement
Abstract
Block summarization is an efficient technique in software verification to decompose a verification problem into separate tasks and to avoid repeated exploration of reusable parts of a program. In order to benefit from abstraction at the same time, block summarization can be combined with counterexample-guided abstraction refinement (CEGAR). This causes the following problem: whenever CEGAR instructs the model checker to refine the abstraction along a path, several block summaries are affected and need to be updated. There exist two different refinement strategies: a destructive in-place approach that modifies the existing block abstractions and a constructive copy-on-write approach that does not change existing data. While the in-place approach is used in the field for several years, our new approach of copy-on-write refinement has the following important advantage: A complete exportable proof of the program is available after the analysis has finished. Due to the benefit from avoiding recomputations of missing information as necessary for in-place updates, the new approach causes almost no computational overhead overall. We perform a large experimental evaluation to compare the new approach with the previous one to show that full proofs can be achieved without overhead.BibTeX Entry
@inproceedings{ISoLA18b, author = {Dirk Beyer and Karlheinz Friedberger}, title = {In-Place vs. Copy-on-Write CEGAR Refinement for Block Summarization with Caching}, booktitle = {Proceedings of the 8th International Symposium on Leveraging Applications of Formal Methods, Verification, and Validation (ISoLA~2018, Part~2, Limassol, Cyprus, November 5-9)}, editor = {T.~Margaria and B.~Steffen}, pages = {197-215}, year = {2018}, series = {LNCS~11245}, publisher = {Springer}, doi = {10.1007/978-3-030-03421-4_14}, sha256 = {}, url = {https://www.sosy-lab.org/research/bam-cow-refinement/}, pdf = {https://www.sosy-lab.org/research/pub/2018-ISoLA.In-Place_vs_Copy-on-Write_CEGAR_Refinement_for_Block_Summarization_with_Caching.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2018-11-06_ISoLA18_BAM-CoW-Refinement_Dirk.pdf}, abstract = {Block summarization is an efficient technique in software verification to decompose a verification problem into separate tasks and to avoid repeated exploration of reusable parts of a program. In order to benefit from abstraction at the same time, block summarization can be combined with counterexample-guided abstraction refinement (CEGAR). This causes the following problem: whenever CEGAR instructs the model checker to refine the abstraction along a path, several block summaries are affected and need to be updated. There exist two different refinement strategies: a destructive in-place approach that modifies the existing block abstractions and a constructive copy-on-write approach that does not change existing data. While the in-place approach is used in the field for several years, our new approach of copy-on-write refinement has the following important advantage: A complete exportable proof of the program is available after the analysis has finished. Due to the benefit from avoiding recomputations of missing information as necessary for in-place updates, the new approach causes almost no computational overhead overall. We perform a large experimental evaluation to compare the new approach with the previous one to show that full proofs can be achieved without overhead.}, keyword = {CPAchecker,Software Model Checking,BAM}, } -
Strategy Selection for Software Verification Based on Boolean Features: A Simple but Effective Approach.
In T. Margaria and
B. Steffen, editors,
Proceedings of the 8th International Symposium on
Leveraging Applications of Formal Methods, Verification, and Validation
(ISoLA 2018, Part 2, Limassol, Cyprus, November 5-9),
LNCS 11245,
pages 144-159,
2018.
Springer.
doi:10.1007/978-3-030-03421-4_11
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Presentation
BibTeX Entry
@inproceedings{ISoLA18a, author = {Dirk Beyer and Matthias Dangl}, title = {Strategy Selection for Software Verification Based on Boolean Features: A Simple but Effective Approach}, booktitle = {Proceedings of the 8th International Symposium on Leveraging Applications of Formal Methods, Verification, and Validation (ISoLA~2018, Part~2, Limassol, Cyprus, November 5-9)}, editor = {T.~Margaria and B.~Steffen}, pages = {144-159}, year = {2018}, series = {LNCS~11245}, publisher = {Springer}, doi = {10.1007/978-3-030-03421-4_11}, sha256 = {}, pdf = {https://www.sosy-lab.org/research/pub/2018-ISoLA.Strategy_Selection_for_Software_Verification_Based_on_Boolean_Features.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2018-11-05_ISoLA18_StrategySelection_Dirk.pdf}, keyword = {CPAchecker,Software Model Checking}, } -
CPA-SymExec: Efficient Symbolic Execution in CPAchecker.
In Marianne Huchard,
Christian Kästner, and
Gordon Fraser, editors,
Proceedings of the 33rd ACM/IEEE International Conference on Automated
Software Engineering (ASE 2018, Montpellier, France, September 3-7),
pages 900-903,
2018.
ACM.
doi:10.1145/3238147.3240478
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Presentation
Video
Supplement
Abstract
We present CPA-SymExec, a tool for symbolic execution that is implemented in the open-source, configurable verification framework CPAchecker. Our implementation automatically detects which symbolic facts to track, in order to obtain a small set of constraints that are necessary to decide reachability of a program area of interest. CPA-SymExec is based on abstraction and counterexample-guided abstraction refinement (CEGAR), and uses a constraint-interpolation approach to detect symbolic facts. We show that our implementation can better mitigate the path-explosion problem than symbolic execution without abstraction, by comparing the performance to the state-of-the-art Klee-based symbolic-execution engine Symbiotic and to Klee itself. For the experiments we use two kinds of analysis tasks: one for finding an executable path to a specific location of interest (e.g., if a test vector is desired to show that a certain behavior occurs), and one for confirming that no executable path to a specific location exists (e.g., if it is desired to show that a certain behavior never occurs). CPA-SymExec is released under the Apache 2 license and available (inclusive source code) at https://cpachecker.sosy-lab.org. A demonstration video is available at https://youtu.be/qoBHtvPKtnw.BibTeX Entry
@inproceedings{ASE18b, author = {Dirk Beyer and Thomas Lemberger}, title = {{CPA-SymExec}: Efficient Symbolic Execution in {CPAchecker}}, booktitle = {Proceedings of the 33rd {ACM/IEEE} International Conference on Automated Software Engineering ({ASE}~2018, Montpellier, France, September 3-7)}, editor = {Marianne Huchard and Christian K{\"{a}}stner and Gordon Fraser}, pages = {900-903}, year = {2018}, publisher = {ACM}, doi = {10.1145/3238147.3240478}, sha256 = {}, url = {https://www.sosy-lab.org/research/cpa-symexec-tool/}, pdf = {https://www.sosy-lab.org/research/pub/2018-ASE.CPA-SymExec_Efficient_Symbolic_Execution_in_CPAchecker.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2018-09-07_ASE18_CPASymExec_Thomas.pdf}, abstract = {We present CPA-SymExec, a tool for symbolic execution that is implemented in the open-source, configurable verification framework CPAchecker. Our implementation automatically detects which symbolic facts to track, in order to obtain a small set of constraints that are necessary to decide reachability of a program area of interest. CPA-SymExec is based on abstraction and counterexample-guided abstraction refinement (CEGAR), and uses a constraint-interpolation approach to detect symbolic facts. We show that our implementation can better mitigate the path-explosion problem than symbolic execution without abstraction, by comparing the performance to the state-of-the-art Klee-based symbolic-execution engine Symbiotic and to Klee itself. For the experiments we use two kinds of analysis tasks: one for finding an executable path to a specific location of interest (e.g., if a test vector is desired to show that a certain behavior occurs), and one for confirming that no executable path to a specific location exists (e.g., if it is desired to show that a certain behavior never occurs). CPA-SymExec is released under the Apache 2 license and available (inclusive source code) at https://cpachecker.sosy-lab.org. A demonstration video is available at https://youtu.be/qoBHtvPKtnw.}, keyword = {CPAchecker,Software Model Checking}, video = {https://youtu.be/7o7EtpbV8NM}, } -
Domain-Independent Multi-threaded Software Model Checking.
In Marianne Huchard,
Christian Kästner, and
Gordon Fraser, editors,
Proceedings of the 33rd ACM/IEEE International Conference on Automated
Software Engineering, ASE 2018, Montpellier, France, September 3-7,
2018,
pages 634-644,
2018.
ACM.
doi:10.1145/3238147.3238195
Keyword(s):
CPAchecker,
Software Model Checking,
BAM
Publisher's Version
PDF
Presentation
Supplement
Abstract
Recent development of software aims at massively parallel execution, because of the trend to increase the number of processing units per CPU socket. But many approaches for program analysis are not designed to benefit from a multi-threaded execution and lack support to utilize multi-core computers. Rewriting existing algorithms is difficult and error-prone, and the design of new parallel algorithms also has limitations. An orthogonal problem is the granularity: computing each successor state in parallel seems too fine-grained, so the open question is to find the right structural level for parallel execution. We propose an elegant solution to these problems: Block summaries should be computed in parallel. Many successful approaches to software verification are based on summaries of control-flow blocks, large blocks, or function bodies. Block-abstraction memoization is a successful domain-independent approach for summary-based program analysis. We redesigned the verification approach of block-abstraction memoization starting from its original recursive definition, such that it can run in a parallel manner for utilizing the available computation resources without losing its advantages of being independent from a certain abstract domain. We present an implementation of our new approach for multi-core shared-memory machines. The experimental evaluation shows that our summary-based approach has no significant overhead compared to the existing sequential approach and that it has a significant speedup when using multi-threading.BibTeX Entry
@inproceedings{ASE18a, author = {Dirk Beyer and Karlheinz Friedberger}, title = {Domain-Independent Multi-threaded Software Model Checking}, booktitle = {Proceedings of the 33rd {ACM/IEEE} International Conference on Automated Software Engineering, {ASE} 2018, Montpellier, France, September 3-7, 2018}, editor = {Marianne Huchard and Christian K{\"{a}}stner and Gordon Fraser}, pages = {634-644}, year = {2018}, publisher = {ACM}, doi = {10.1145/3238147.3238195}, sha256 = {}, url = {https://www.sosy-lab.org/research/bam-parallel/}, pdf = {https://www.sosy-lab.org/research/pub/2018-ASE.Domain-Independent_Multi-threaded_Software_Model_Checking.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2018-09-07_ASE18_ParallelBAM_Karlheinz.pdf}, abstract = {Recent development of software aims at massively parallel execution, because of the trend to increase the number of processing units per CPU socket. But many approaches for program analysis are not designed to benefit from a multi-threaded execution and lack support to utilize multi-core computers. Rewriting existing algorithms is difficult and error-prone, and the design of new parallel algorithms also has limitations. An orthogonal problem is the granularity: computing each successor state in parallel seems too fine-grained, so the open question is to find the right structural level for parallel execution. We propose an elegant solution to these problems: Block summaries should be computed in parallel. Many successful approaches to software verification are based on summaries of control-flow blocks, large blocks, or function bodies. Block-abstraction memoization is a successful domain-independent approach for summary-based program analysis. We redesigned the verification approach of block-abstraction memoization starting from its original recursive definition, such that it can run in a parallel manner for utilizing the available computation resources without losing its advantages of being independent from a certain abstract domain. We present an implementation of our new approach for multi-core shared-memory machines. The experimental evaluation shows that our summary-based approach has no significant overhead compared to the existing sequential approach and that it has a significant speedup when using multi-threading.}, keyword = {CPAchecker,Software Model Checking,BAM}, } -
Tests from Witnesses: Execution-Based Validation of Verification Results.
In Catherine Dubois and
Burkhart Wolff, editors,
Proceedings of the 12th International Conference on
Tests and Proofs (TAP 2018, Toulouse, France, June 27-29),
LNCS 10889,
pages 3-23,
2018.
Springer.
doi:10.1007/978-3-319-92994-1_1
Keyword(s):
CPAchecker,
Software Model Checking,
Witness-Based Validation,
Witness-Based Validation (main)
Publisher's Version
PDF
Presentation
Supplement
Abstract
The research community made enormous progress in the past years in developing algorithms for verifying software, as shown by verification competitions (SV-COMP). However, the ultimate goal is to design certifying algorithms, which produce for a given input not only the output but in addition a witness. This makes it possible to validate that the output is a correct solution for the input problem. The advantage of certifying algorithms is that the validation of the result is —thanks to the witness— easier than the computation of the result. Unfortunately, the transfer to industry is slow, one of the reasons being that some verifiers report a considerable number of false alarms. The verification community works towards this ultimate goal using exchangeable violation witnesses, i.e., an independent validator can be used to check whether the produced witness indeed represents a bug. This reduces the required trust base from the complex verification tool to a validator that may be less complex, and thus, more easily trustable. But existing witness validators are based on model-checking technology — which does not solve the problem of reducing the trust base. To close this gap, we present a simple concept that is based on program execution: We extend witness validation by generating a test vector from an error path that is reconstructed from the witness. Then, we generate a test harness (similar to unit-test code) that can be compiled and linked together with the original program. We then run the executable program in an isolating container. If the execution violates the specification (similar to runtime verification) we confirm that the witness indeed represents a bug. This method reduces the trust base to the execution system, which seems appropriate for avoiding false alarms. To show feasibility and practicality, we implemented execution-based witness validation in two completely independent analysis frameworks, and performed a large experimental study.BibTeX Entry
@inproceedings{TAP18, author = {Dirk Beyer and Matthias Dangl and Thomas Lemberger and Michael Tautschnig}, title = {Tests from Witnesses: Execution-Based Validation of Verification Results}, booktitle = {Proceedings of the 12th International Conference on Tests and Proofs (TAP~2018, Toulouse, France, June 27-29)}, editor = {Catherine Dubois and Burkhart Wolff}, pages = {3-23}, year = {2018}, series = {LNCS~10889}, publisher = {Springer}, doi = {10.1007/978-3-319-92994-1_1}, sha256 = {}, url = {https://www.sosy-lab.org/research/tests-from-witnesses/}, pdf = {https://www.sosy-lab.org/research/pub/2018-TAP.Tests_from_Witnesses_Execution-Based_Validation_of_Verification_Results.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2018-06-27_TAP18-Keynote-CooperativeVerification_Dirk.pdf}, abstract = {The research community made enormous progress in the past years in developing algorithms for verifying software, as shown by verification competitions (SV-COMP). However, the ultimate goal is to design certifying algorithms, which produce for a given input not only the output but in addition a witness. This makes it possible to validate that the output is a correct solution for the input problem. The advantage of certifying algorithms is that the validation of the result is —thanks to the witness— easier than the computation of the result. Unfortunately, the transfer to industry is slow, one of the reasons being that some verifiers report a considerable number of false alarms. The verification community works towards this ultimate goal using exchangeable violation witnesses, i.e., an independent validator can be used to check whether the produced witness indeed represents a bug. This reduces the required trust base from the complex verification tool to a validator that may be less complex, and thus, more easily trustable. But existing witness validators are based on model-checking technology — which does not solve the problem of reducing the trust base. To close this gap, we present a simple concept that is based on program execution: We extend witness validation by generating a test vector from an error path that is reconstructed from the witness. Then, we generate a test harness (similar to unit-test code) that can be compiled and linked together with the original program. We then run the executable program in an isolating container. If the execution violates the specification (similar to runtime verification) we confirm that the witness indeed represents a bug. This method reduces the trust base to the execution system, which seems appropriate for avoiding false alarms. To show feasibility and practicality, we implemented execution-based witness validation in two completely independent analysis frameworks, and performed a large experimental study.}, keyword = {CPAchecker,Software Model Checking,Witness-Based Validation,Witness-Based Validation (main)}, } -
Reducer-Based Construction of Conditional Verifiers.
In Proceedings of the 40th International Conference on
Software Engineering (ICSE 2018, Gothenburg, Sweden, May 27 - June 3),
pages 1182-1193,
2018.
ACM.
doi:10.1145/3180155.3180259
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Presentation
Supplement
Abstract
Despite recent advances, software verification remains challenging. To solve hard verification tasks, we need to leverage not just one but several different verifiers employing different technologies. To this end, we need to exchange information between verifiers. Conditional model checking was proposed as a solution to exactly this problem: The idea is to let the first verifier output a condition which describes the state space that it successfully verified and to instruct the second verifier to verify the yet unverified state space using this condition. However, most verifiers do not understand conditions as input. In this paper, we propose the usage of an off-the-shelf construction of a conditional verifier from a given traditional verifier and a reducer. The reducer takes as input the program to be verified and the condition, and outputs a residual program whose paths cover the unverified state space described by the condition. As a proof of concept, we designed and implemented one particular reducer and composed three conditional model checkers from the three best verifiers at SV-COMP 2017. We defined a set of claims and experimentally evaluated their validity. All experimental data and results are available for replication.BibTeX Entry
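A minimal sketch of the off-the-shelf construction described in this abstract, assuming that verifiers and reducers can be modeled as plain callables and that a condition is simply the part of the state space a first verifier already covered; both assumptions are made only for illustration.

    # Hypothetical sketch: a conditional verifier obtained by composing a reducer
    # with an ordinary, condition-unaware verifier. The callables and their
    # signatures are illustrative assumptions, not the interface of a real tool.

    def make_conditional_verifier(verifier, reducer):
        # verifier(program)           -> "TRUE" | "FALSE"
        # reducer(program, condition) -> residual program covering only the
        #                                state space not yet verified
        def conditional_verifier(program, condition):
            residual = reducer(program, condition)
            return verifier(residual)
        return conditional_verifier

    if __name__ == "__main__":
        # Toy stand-ins: a "program" is a set of path ids, a "condition" is the
        # set of paths that a first verifier already covered.
        def toy_reducer(program, condition):
            return program - condition
        def toy_verifier(program):
            return "FALSE" if any(p.startswith("bad") for p in program) else "TRUE"

        conditional = make_conditional_verifier(toy_verifier, toy_reducer)
        program = {"ok1", "ok2", "bad3"}
        condition = {"ok1", "ok2"}              # verified by the first verifier
        print(conditional(program, condition))  # -> FALSE: the residual part has a bug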
@inproceedings{ICSE18, author = {Dirk Beyer and Marie-Christine Jakobs and Thomas Lemberger and Heike Wehrheim}, title = {Reducer-Based Construction of Conditional Verifiers}, booktitle = {Proceedings of the 40th International Conference on Software Engineering (ICSE~2018, Gothenburg, Sweden, May 27 - June 3)}, pages = {1182-1193}, year = {2018}, publisher = {ACM}, isbn = {978-1-4503-5638-1}, doi = {10.1145/3180155.3180259}, sha256 = {}, url = {https://www.sosy-lab.org/research/reducer/}, pdf = {https://www.sosy-lab.org/research/pub/2018-ICSE.Reducer-Based_Construction_of_Conditional_Verifiers.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2018-06-01_ICSE18_ReducerBasedConstructionOfConditionalVerifiers_Marie.pdf}, abstract = {Despite recent advances, software verification remains challenging. To solve hard verification tasks, we need to leverage not just one but several different verifiers employing different technologies. To this end, we need to exchange information between verifiers. Conditional model checking was proposed as a solution to exactly this problem: The idea is to let the first verifier output a condition which describes the state space that it successfully verified and to instruct the second verifier to verify the yet unverified state space using this condition. However, most verifiers do not understand conditions as input. In this paper, we propose the usage of an off-the-shelf construction of a conditional verifier from a given traditional verifier and a reducer. The reducer takes as input the program to be verified and the condition, and outputs a residual program whose paths cover the unverified state space described by the condition. As a proof of concept, we designed and implemented one particular reducer and composed three conditional model checkers from the three best verifiers at SV-COMP 2017. We defined a set of claims and experimentally evaluated their validity. All experimental data and results are available for replication.}, keyword = {CPAchecker,Software Model Checking}, } -
Evaluating Tools for Software Verification (Track Introduction).
In T. Margaria and
B. Steffen, editors,
Proceedings of the 8th International Symposium on
Leveraging Applications of Formal Methods, Verification, and Validation
(ISoLA 2018, Limassol, Cyprus, November 5-9), Part 2,
LNCS 11245,
pages 139-143,
2018.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-030-03421-4_10
Publisher's Version
PDF
BibTeX Entry
@inproceedings{ISOLA18-TrackIntro, author = {Markus Schordan and Dirk Beyer and Stephen F. Siegel}, title = {Evaluating Tools for Software Verification (Track Introduction)}, booktitle = {Proceedings of the 8th International Symposium on Leveraging Applications of Formal Methods, Verification, and Validation (ISoLA~2018, Limassol, Cyprus, November 5--9), Part 2}, editor = {T.~Margaria and B.~Steffen}, pages = {139-143}, year = {2018}, series = {LNCS~11245}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-030-03420-7}, doi = {10.1007/978-3-030-03421-4_10}, sha256 = {}, url = {}, keyword = {}, } -
In Search of Perfect Users: Towards Understanding the Usability of Converged Multi-Level Secure User Interfaces.
In Proc. of Computer Human Interaction Australia (OzCHI),
pages 572-576,
2018.
ACM.
Work in Progress Report.
PDF
BibTeX Entry
@inproceedings{ernst:ozchi2018, author = {Abdullah Issa and Toby Murray and Gidon Ernst}, title = {{In Search of Perfect Users: Towards Understanding the Usability of Converged Multi-Level Secure User Interfaces}}, booktitle = {Proc. of Computer Human Interaction Australia (OzCHI)}, pages = {572--576}, year = {2018}, publisher = {ACM}, pdf = {https://www.sosy-lab.org/research/pub/2018-OzCHI.In_Search_of_Perfect_Users.pdf}, note = {Work in Progress Report.}, } -
ARCH-COMP18 Category Report: Results on the Falsification Benchmarks.
In Proc. of Applied Verification of Continuous and Hybrid Systems (ARCH),
EPiC 54,
pages 104-109,
2018.
EasyChair.
PDF
BibTeX Entry
@inproceedings{ernst:arch2018, author = {Adel Dokhanchi and Shakiba Yaghoubi and Bardh Hoxha and Georgios Fainekos and Gidon Ernst and Zhenya Zhang and Paolo Arcaini and Ichiro Hasuo and Sean Sedwards}, title = {{ARCH-COMP18 Category Report: Results on the Falsification Benchmarks}}, booktitle = {Proc. of Applied Verification of Continuous and Hybrid Systems (ARCH)}, volume = {54}, pages = {104--109}, year = {2018}, series = {EPiC}, publisher = {EasyChair}, pdf = {https://www.sosy-lab.org/research/pub/2018-ARCH.Results_on_the_Falsification_Benchmarks.pdf}, } -
Time-staging Enhancement of Hybrid System Falsification (Abstract).
In Proc. of Monitoring and Testing of Cyber-Physical Systems (MT-CPS),
2018.
IEEE.
BibTeX Entry
@inproceedings{ernst:mt-cps2018, author = {Zhenya Zhang and Gidon Ernst and Ichiro Hasuo and Sean Sedwards}, title = {{Time-staging Enhancement of Hybrid System Falsification (Abstract)}}, booktitle = {Proc. of Monitoring and Testing of Cyber-Physical Systems (MT-CPS)}, year = {2018}, publisher = {IEEE}, }
2017
-
Software Verification: Testing vs. Model Checking.
In O. Strichman and
R. Tzoref-Brill, editors,
Proceedings of the 13th Haifa Verification Conference (HVC 2017, Haifa, Israel, November 13-25),
LNCS 10629,
pages 99-114,
2017.
Springer.
doi:10.1007/978-3-319-70389-3_7
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Presentation
Supplement
Abstract
In practice, software testing has been the established method for finding bugs in programs for a long time. But in the last 15 years, software model checking has received a lot of attention, and many successful tools for software model checking exist today. We believe it is time for a careful comparative evaluation of automatic software testing against automatic software model checking. We chose six existing tools for automatic test-case generation, namely AFL-fuzz, CPATiger, Crest-ppc, FShell, Klee, and PRtest, and four tools for software model checking, namely CBMC, CPA-Seq, ESBMC-incr, and ESBMC-kInd, for the task of finding specification violations in a large benchmark suite consisting of 5693 C programs. In order to perform such an evaluation, we have implemented a framework for test-based falsification (TBF) that executes and validates test cases produced by test-case generation tools in order to find errors in programs. The conclusion of our experiments is that software model checkers can (i) find a substantially larger number of bugs (ii) in less time, and (iii) require less adjustment to the input programs.BibTeX Entry
@inproceedings{HVC17, author = {Dirk Beyer and Thomas Lemberger}, title = {Software Verification: Testing vs. Model Checking}, booktitle = {Proceedings of the 13th Haifa Verification Conference (HVC~2017, Haifa, Israel, November 13-25)}, editor = {O.~Strichman and R.~Tzoref-Brill}, pages = {99-114}, year = {2017}, series = {LNCS~10629}, publisher = {Springer}, isbn = {978-3-319-70389-3}, doi = {10.1007/978-3-319-70389-3_7}, sha256 = {}, url = {https://www.sosy-lab.org/research/test-study/}, pdf = {https://www.sosy-lab.org/research/pub/2017-HVC.Software_Verification_Testing_vs_Model_Checking.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2017-11-15_HVC17_TestStudy_Thomas.pdf}, abstract = {In practice, software testing has been the established method for finding bugs in programs for a long time. But in the last 15 years, software model checking has received a lot of attention, and many successful tools for software model checking exist today. We believe it is time for a careful comparative evaluation of automatic software testing against automatic software model checking. We chose six existing tools for automatic test-case generation, namely AFL-fuzz, CPATiger, Crest-ppc, FShell, Klee, and PRtest, and four tools for software model checking, namely CBMC, CPA-Seq, ESBMC-incr, and ESBMC-kInd, for the task of finding specification violations in a large benchmark suite consisting of 5693 C programs. In order to perform such an evaluation, we have implemented a framework for test-based falsification (TBF) that executes and validates test cases produced by test-case generation tools in order to find errors in programs. The conclusion of our experiments is that software model checkers can (i) find a substantially larger number of bugs (ii) in less time, and (iii) require less adjustment to the input programs.}, keyword = {CPAchecker,Software Model Checking}, annote = {Won the HVC 2017 Best Paper Award.<br> <a href="https://www.sosy-lab.org/research/pub/2017-HVC.Software_Verification_Testing_vs_Model_Checking.Errata.txt"> Errata</a> available.}, }Additional Infos
Won the HVC 2017 Best Paper Award.
Errata available. -
Software Verification with Validation of Results (Report on SV-COMP 2017).
In A. Legay and
T. Margaria, editors,
Proceedings of the 23rd International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2017, Uppsala, Sweden, April 22-29),
LNCS 10206,
pages 331-349,
2017.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-662-54580-5_20
Keyword(s):
Competition on Software Verification (SV-COMP),
Competition on Software Verification (SV-COMP Report),
Software Model Checking,
Witness-Based Validation
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{TACAS17, author = {Dirk Beyer}, title = {Software Verification with Validation of Results ({R}eport on {SV-COMP} 2017)}, booktitle = {Proceedings of the 23rd International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2017, Uppsala, Sweden, April 22-29)}, editor = {A.~Legay and T.~Margaria}, pages = {331-349}, year = {2017}, series = {LNCS~10206}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-662-54579-9}, doi = {10.1007/978-3-662-54580-5_20}, sha256 = {}, url = {https://sv-comp.sosy-lab.org/2017/}, pdf = {https://www.sosy-lab.org/research/pub/2017-TACAS.Software_Verification_with_Validation_of_Results.pdf}, keyword = {Competition on Software Verification (SV-COMP),Competition on Software Verification (SV-COMP Report),Software Model Checking,Witness-Based Validation}, } -
Exchanging Verification Witnesses between Verifiers.
In J. Jürjens and
K. Schneider, editors,
Tagungsband Software Engineering 2017, Fachtagung des GI-Fachbereichs Softwaretechnik
(21.-24. Februar 2017, Hannover, Deutschland),
LNI P-267,
pages 93-94,
2017.
Gesellschaft für Informatik (GI).
Keyword(s):
CPAchecker,
Software Model Checking,
Witness-Based Validation
Publisher's Version
BibTeX Entry
@inproceedings{SE17-Witnesses, author = {Dirk Beyer and Matthias Dangl and Daniel Dietsch and Matthias Heizmann}, title = {Exchanging Verification Witnesses between Verifiers}, booktitle = {Tagungsband Software Engineering 2017, Fachtagung des GI-Fachbereichs Softwaretechnik (21.-24. Februar 2017, Hannover, Deutschland)}, editor = {J.~J{\"{u}}rjens and K.~Schneider}, pages = {93-94}, year = {2017}, series = {{LNI}~P-267}, publisher = {Gesellschaft f{\"{u}}r Informatik ({GI})}, url = {}, keyword = {CPAchecker,Software Model Checking,Witness-Based Validation}, annote = {This is a summary of a <a href="https://www.sosy-lab.org/research/bib/Year/2016.html#FSE16b">full article on this topic</a> that appeared in Proc. ESEC/FSE 2016.}, doinone = {DOI not available}, urlpub = {https://dl.gi.de/handle/20.500.12116/1288}, }Additional Infos
This is a summary of a full article on this topic that appeared in Proc. ESEC/FSE 2016. -
Modular verification of order-preserving writeback caches.
In Proc. of Integrated Formal Methods (iFM),
LNCS 10510,
pages 375-390,
2017.
Springer.
PDF
BibTeX Entry
@inproceedings{ernst:ifm2017, author = {Jörg Pfähler and Gidon Ernst and Stefan Bodenmüller and Gerhard Schellhorn and Wolfgang Reif}, title = {Modular verification of order-preserving writeback caches}, booktitle = {Proc. of Integrated Formal Methods (iFM)}, volume = {10510}, pages = {375--390}, year = {2017}, series = {LNCS}, publisher = {Springer}, pdf = {https://www.sosy-lab.org/research/pub/2017-iFM.Modular_Verification_of_Order-Preserving_Write-Back_Caches.pdf}, } -
CPA-BAM-BnB: Block-Abstraction Memoization and Region-Based Memory Models for Predicate Abstractions (Competition Contribution).
In Axel Legay and
Tiziana Margaria, editors,
Proceedings of the 23rd International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2017, Uppsala, Sweden, April 22-29),
LNCS 10206,
pages 355-359,
2017.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-662-54580-5_22
Keyword(s):
CPAchecker,
Competition on Software Verification (SV-COMP),
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
Our submission to SV-COMP'17 is based on the software verification framework CPAchecker. Combined with value analysis and predicate analysis we use the concept of block-abstraction memoization with optimization and several fixes relative to the version of SV-COMP'16. A novelty of our approach is usage of BnB memory model for predicate analysis, which efficiently divides the accessed memory into memory regions and thus leads to smaller formulas.BibTeX Entry
@inproceedings{CPABAM-COMP17, author = {Pavel Andrianov and Karlheinz Friedberger and Mikhail U. Mandrykin and Vadim S. Mutilin and Anton Volkov}, title = {{CPA-BAM-BnB}: {Block}-Abstraction Memoization and Region-Based Memory Models for Predicate Abstractions (Competition Contribution)}, booktitle = {Proceedings of the 23rd International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2017, Uppsala, Sweden, April 22-29)}, editor = {Axel Legay and Tiziana Margaria}, pages = {355--359}, year = {2017}, series = {LNCS~10206}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-662-54579-9}, doi = {10.1007/978-3-662-54580-5_22}, sha256 = {}, url = {https://doi.org/10.1007/978-3-662-54580-5_22}, abstract = {Our submission to SV-COMP'17 is based on the software verification framework CPAchecker. Combined with value analysis and predicate analysis we use the concept of block-abstraction memoization with optimization and several fixes relative to the version of SV-COMP'16. A novelty of our approach is usage of BnB memory model for predicate analysis, which efficiently divides the accessed memory into memory regions and thus leads to smaller formulas.}, keyword = {CPAchecker,Competition on Software Verification (SV-COMP),Software Model Checking}, }
2016
-
Correctness Witnesses: Exchanging Verification Results Between Verifiers.
In T. Zimmermann,
J. Cleland-Huang, and
Z. Su, editors,
Proceedings of the 24th ACM SIGSOFT International Symposium on
Foundations of Software Engineering (FSE 2016, Seattle, WA, USA, November 13-18),
pages 326-337,
2016.
ACM.
doi:10.1145/2950290.2950351
Keyword(s):
CPAchecker,
Ultimate,
Software Model Checking,
Witness-Based Validation,
Witness-Based Validation (main)
Publisher's Version
PDF
BibTeX Entry
@inproceedings{FSE16b, author = {Dirk Beyer and Matthias Dangl and Daniel Dietsch and Matthias Heizmann}, title = {Correctness Witnesses: {E}xchanging Verification Results Between Verifiers}, booktitle = {Proceedings of the 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE~2016, Seattle, WA, USA, November 13-18)}, editor = {T.~Zimmermann and J.~Cleland-Huang and Z.~Su}, pages = {326-337}, year = {2016}, publisher = {ACM}, doi = {10.1145/2950290.2950351}, sha256 = {}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2016-FSE.Correctness_Witnesses_Exchanging_Verification_Results_between_Verifiers.pdf}, keyword = {CPAchecker,Ultimate,Software Model Checking,Witness-Based Validation,Witness-Based Validation (main)}, } -
On-the-Fly Decomposition of Specifications in Software Model Checking.
In T. Zimmermann,
J. Cleland-Huang, and
Z. Su, editors,
Proceedings of the 24th ACM SIGSOFT International Symposium on
Foundations of Software Engineering (FSE 2016, Seattle, WA, USA, November 13-18),
pages 349-361,
2016.
ACM.
doi:10.1145/2950290.2950349
Publisher's Version
PDF
BibTeX Entry
@inproceedings{FSE16a, author = {Sven Apel and Dirk Beyer and Vitaly Mordan and Vadim Mutilin and Andreas Stahlbauer}, title = {On-the-Fly Decomposition of Specifications in Software Model Checking}, booktitle = {Proceedings of the 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE~2016, Seattle, WA, USA, November 13-18)}, editor = {T.~Zimmermann and J.~Cleland-Huang and Z.~Su}, pages = {349-361}, year = {2016}, publisher = {ACM}, isbn = {978-3-319-47165-5}, doi = {10.1145/2950290.2950349}, sha256 = {}, pdf = {https://www.sosy-lab.org/research/pub/2016-FSE.On-the-Fly_Decomposition_of_Specifications_in_Software_Model_Checking.pdf}, } -
Partial Verification and Intermediate Results as a Solution to
Combine Automatic and Interactive Verification Techniques.
In T. Margaria and
B. Steffen, editors,
7th International Symposium on
Leveraging Applications of Formal Methods, Verification, and Validation
(ISoLA 2016, Part 1, Imperial, Corfu, Greece, October 10-14),
LNCS 9952,
pages 874-880,
2016.
Springer.
doi:10.1007/978-3-319-47166-2
Keyword(s):
Software Model Checking
Publisher's Version
PDF
Abstract
Many of the current verification approaches can be classified into automatic and interactive techniques, each having different strengths and weaknesses. Thus, one of the current open problems is to design solutions to combine the two approaches and accelerate technology transfer. We outline four existing techniques that might be able to contribute to combination solutions: (1) Conditional model checking is a technique that gives detailed information (in form of a condition) about the verified state space, i.e., informs the user (or tools later in a tool chain) of the outcome. Also, it accepts as input detailed information (again as condition) about what the conditional model checker has to do. (2) Correctness witnesses, stored in a machine-readable exchange format, contain (partial) invariants that can be used to prove the correctness of a system. For example, tools that usually expect invariants from the user can read the invariants from such correctness witnesses and ask the user only for the remaining invariants. (3) Abstraction-refinement based approaches that use a dynamically adjustable precision (such as in lazy CEGAR approaches) can be provided with invariants from the user or from other tools, e.g., from deductive methods. This way, the approach can succeed in constructing a proof even if it was not able to come up with the required invariant. (4) The technique of path invariants extracts (in a CEGAR method) a path program that represents an interesting part of the program for which an invariant is needed. Such a path program can be given to an expensive (or interactive) method for computing invariants that can then be fed back to a CEGAR method to continue verifying the large program. While the existing techniques originate from software verification, we believe that the new combination ideas are useful for verifying general systems.BibTeX Entry
@inproceedings{ISOLA16b, author = {Dirk Beyer}, title = {Partial Verification and Intermediate Results as a Solution to Combine Automatic and Interactive Verification Techniques}, booktitle = {7th International Symposium on Leveraging Applications of Formal Methods, Verification, and Validation (ISoLA~2016, Part~1, Imperial, Corfu, Greece, October 10-14)}, editor = {T.~Margaria and B.~Steffen}, pages = {874-880}, year = {2016}, series = {LNCS~9952}, publisher = {Springer}, isbn = {978-3-319-47165-5}, doi = {10.1007/978-3-319-47166-2}, sha256 = {}, pdf = {https://www.sosy-lab.org/research/pub/2016-ISoLA.Partial_Verification_and_Intermediate_Results_as_a_Solution_to_Combine_Automatic_and_Interactive_Verification_Techniques.pdf}, abstract = {Many of the current verification approaches can be classified into automatic and interactive techniques, each having different strengths and weaknesses. Thus, one of the current open problems is to design solutions to combine the two approaches and accelerate technology transfer. We outline four existing techniques that might be able to contribute to combination solutions: (1) Conditional model checking is a technique that gives detailed information (in form of a condition) about the verified state space, i.e., informs the user (or tools later in a tool chain) of the outcome. Also, it accepts as input detailed information (again as condition) about what the conditional model checker has to do. (2) Correctness witnesses, stored in a machine-readable exchange format, contain (partial) invariants that can be used to prove the correctness of a system. For example, tools that usually expect invariants from the user can read the invariants from such correctness witnesses and ask the user only for the remaining invariants. (3) Abstraction-refinement based approaches that use a dynamically adjustable precision (such as in lazy CEGAR approaches) can be provided with invariants from the user or from other tools, e.g., from deductive methods. This way, the approach can succeed in constructing a proof even if it was not able to come up with the required invariant. (4) The technique of path invariants extracts (in a CEGAR method) a path program that represents an interesting part of the program for which an invariant is needed. Such a path program can be given to an expensive (or interactive) method for computing invariants that can then be fed back to a CEGAR method to continue verifying the large program. While the existing techniques originate from software verification, we believe that the new combination ideas are useful for verifying general systems.}, keyword = {Software Model Checking}, } -
Symbolic Execution with CEGAR.
In T. Margaria and
B. Steffen, editors,
7th International Symposium on
Leveraging Applications of Formal Methods, Verification, and Validation
(ISoLA 2016, Part 1, Imperial, Corfu, Greece, October 10-14),
LNCS 9952,
pages 195-211,
2016.
Springer.
doi:10.1007/978-3-319-47166-2_14
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Presentation
Supplement
Abstract
Symbolic execution, a standard technique in program analysis, is a particularly successful and popular component in systems for test-case generation. One of the open research problems is that the approach suffers from the path-explosion problem. We apply abstraction to symbolic execution, and refine the abstract model using counterexample-guided abstraction refinement (CEGAR), a standard technique from model checking. We also use refinement selection with existing and new heuristics to influence the behavior and further improve the performance of our refinement procedure. We implemented our new technique in the open-source software-verification framework CPAchecker. Our experimental results show that the implementation is highly competitive.BibTeX Entry
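A generic sketch of the CEGAR loop referenced in this abstract, with the symbolic-execution-specific parts (abstract-model construction, feasibility check, refinement) hidden behind assumed callbacks; the toy example at the bottom is purely illustrative and does not reflect CPAchecker's interfaces.

    # Generic CEGAR loop, sketched with callbacks; the callback signatures are
    # assumptions made for illustration and do not mirror CPAchecker's code.

    def cegar(build_abstract_model, find_error_path, is_feasible, refine, precision):
        while True:
            model = build_abstract_model(precision)
            path = find_error_path(model)
            if path is None:
                return ("SAFE", precision)       # no abstract counterexample left
            if is_feasible(path):
                return ("UNSAFE", path)          # real counterexample
            precision = refine(precision, path)  # learn from the spurious path

    if __name__ == "__main__":
        # Toy: the concrete system allows x in {0, 2, 4}; the property is x != 3.
        # The abstraction only remembers the predicates listed in the precision.
        concrete_values = {0, 2, 4}

        def build_abstract_model(precision):
            return precision                     # the model is just the tracked predicates

        def find_error_path(model):
            # The abstraction reports a (possibly spurious) path to x == 3
            # unless the predicate "x != 3" is tracked.
            return None if "x != 3" in model else ["x == 3"]

        def is_feasible(path):
            return 3 in concrete_values          # spurious in this toy

        def refine(precision, path):
            return precision | {"x != 3"}        # learn the missing predicate

        print(cegar(build_abstract_model, find_error_path, is_feasible, refine, frozenset()))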
@inproceedings{ISOLA16a, author = {Dirk Beyer and Thomas Lemberger}, title = {Symbolic Execution with {CEGAR}}, booktitle = {7th International Symposium on Leveraging Applications of Formal Methods, Verification, and Validation (ISoLA~2016, Part~1, Imperial, Corfu, Greece, October 10-14)}, editor = {T.~Margaria and B.~Steffen}, pages = {195-211}, year = {2016}, series = {LNCS~9952}, publisher = {Springer}, doi = {10.1007/978-3-319-47166-2_14}, sha256 = {}, url = {https://www.sosy-lab.org/research/cpa-symexec/}, pdf = {https://www.sosy-lab.org/research/pub/2016-ISoLA.Symbolic_Execution_with_CEGAR.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2016-10-10_ISoLA16_SymbolicExecutionWithCegar_Dirk.pdf}, abstract = {Symbolic execution, a standard technique in program analysis, is a particularly successful and popular component in systems for test-case generation. One of the open research problems is that the approach suffers from the path-explosion problem. We apply abstraction to symbolic execution, and refine the abstract model using counterexampleguided abstraction refinement (CEGAR), a standard technique from model checking. We also use refinement selection with existing and new heuristics to influence the behavior and further improve the performance of our refinement procedure. We implemented our new technique in the open-source software-verification framework CPAchecker. Our experimental results show that the implementation is highly competitive.}, keyword = {CPAchecker,Software Model Checking}, annote = {<a href="https://www.sosy-lab.org/research/pub/2016-ISoLA.Symbolic_Execution_with_CEGAR.Errata.txt"> Errata</a> available.}, }Additional Infos
Errata available. -
Verification-Aided Debugging: An Interactive Web-Service for Exploring Error Witnesses.
In S. Chaudhuri and
A. Farzan, editors,
28th International Conference on
Computer Aided Verification (CAV 2016, Part 2, Toronto, ON, Canada, July 17-23),
LNCS 9780,
pages 502-509,
2016.
Springer.
doi:10.1007/978-3-319-41540-6_28
Keyword(s):
Cloud-Based Software Verification,
Witness-Based Validation,
Witness-Based Validation (main)
Publisher's Version
PDF
BibTeX Entry
@inproceedings{CAV16, author = {Dirk Beyer and Matthias Dangl}, title = {Verification-Aided Debugging: {A}n Interactive Web-Service for Exploring Error Witnesses}, booktitle = {28th International Conference on Computer Aided Verification (CAV~2016, Part~2, Toronto, ON, Canada, July 17-23)}, editor = {S.~Chaudhuri and A.~Farzan}, pages = {502-509}, year = {2016}, series = {LNCS~9780}, publisher = {Springer}, doi = {10.1007/978-3-319-41540-6_28}, sha256 = {89a353eace6233e10cd85e64b0c197209367d617b94c2d02766e922ea88c9e4c}, pdf = {https://www.sosy-lab.org/research/pub/2016-CAV.Verification-Aided_Debugging_An_Interactive_Web-Service_for_Exploring_Error_Witnesses.pdf}, keyword = {Cloud-Based Software Verification,Witness-Based Validation,Witness-Based Validation (main)}, } -
Reliable and Reproducible Competition Results with BenchExec and Witnesses (Report on SV-COMP 2016).
In M. Chechik and
J.-F. Raskin, editors,
Proceedings of the 22nd International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2016, Eindhoven, The Netherlands, April 2-8),
LNCS 9636,
pages 887-904,
2016.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-662-49674-9_55
Keyword(s):
Competition on Software Verification (SV-COMP),
Competition on Software Verification (SV-COMP Report),
Software Model Checking,
Witness-Based Validation
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{TACAS16, author = {Dirk Beyer}, title = {Reliable and Reproducible Competition Results with {{\sc BenchExec}} and Witnesses ({R}eport on {SV-COMP} 2016)}, booktitle = {Proceedings of the 22nd International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2016, Eindhoven, The Netherlands, April 2-8)}, editor = {M.~Chechik and J.-F.~Raskin}, pages = {887-904}, year = {2016}, series = {LNCS~9636}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-662-49674-9}, doi = {10.1007/978-3-662-49674-9_55}, sha256 = {bc8f02d7c0651c1197977f13e77c1fcb22a5f85aadd96dc4aa59b454b199ed0e}, url = {https://sv-comp.sosy-lab.org/2016/}, keyword = {Competition on Software Verification (SV-COMP),Competition on Software Verification (SV-COMP Report),Software Model Checking,Witness-Based Validation}, } -
A Light-Weight Approach for Verifying Multi-Threaded Programs with CPAchecker.
In J. Bouda,
L. Holík,
J. Kofroň,
J. Strejček, and
A. Rambousek, editors,
Proceedings of the 11th Doctoral Workshop on
Mathematical and Engineering Methods in Computer Science (MEMICS 2016, Telč, Czechia, October 21-23),
EPTCS 233,
pages 61-71,
2016.
ArXiV.
doi:10.4204/EPTCS.233.6
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
BibTeX Entry
@inproceedings{MEMICS16-Multi-Threaded, author = {Dirk Beyer and Karlheinz Friedberger}, title = {A Light-Weight Approach for Verifying Multi-Threaded Programs with CPAchecker}, booktitle = {Proceedings of the 11th Doctoral Workshop on Mathematical and Engineering Methods in Computer Science (MEMICS~2016, Tel\v{c}, Czechia, October 21-23)}, editor = {J.~Bouda and L.~Hol\'ik and J.~Kofro\v{n} and J.~Strej\v{c}ek and A.~Rambousek}, pages = {61-71}, year = {2016}, series = {EPTCS~233}, publisher = {ArXiV}, doi = {10.4204/EPTCS.233.6}, sha256 = {}, pdf = {https://www.sosy-lab.org/research/pub/2016-MEMICS.A_Light-Weight_Approach_for_Verifying_Multi-Threaded_Programs_with_CPAchecker.pdf}, keyword = {CPAchecker,Software Model Checking}, } -
Evaluation and Reproducibility of Program Analysis and Verification (Track Introduction).
In T. Margaria and
B. Steffen, editors,
Proceedings of the 7th International Symposium on
Leveraging Applications of Formal Methods, Verification, and Validation
(ISoLA 2016, Corfu, Greece, October 10-14),
LNCS 9952,
pages 191-194,
2016.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-319-47166-2_13
Publisher's Version
PDF
BibTeX Entry
@inproceedings{ISOLA16-TrackIntro, author = {Markus Schordan and Dirk Beyer and Jonas Lundberg}, title = {Evaluation and Reproducibility of Program Analysis and Verification (Track Introduction)}, booktitle = {Proceedings of the 7th International Symposium on Leveraging Applications of Formal Methods, Verification, and Validation (ISoLA~2016, Corfu, Greece, October 10--14)}, editor = {T.~Margaria and B.~Steffen}, pages = {191-194}, year = {2016}, series = {LNCS~9952}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-319-47165-5}, doi = {10.1007/978-3-319-47166-2_13}, sha256 = {}, url = {}, } -
SMT-based Software Model Checking: An Experimental Comparison of Four Algorithms.
In Proc. VSTTE,
LNCS 9971,
pages 181-198,
2016.
Springer.
doi:10.1007/978-3-319-48869-1_14
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{VSTTE16b-AlgorithmComparison, author = {Dirk Beyer and Matthias Dangl}, title = {{SMT}-based Software Model Checking: {A}n Experimental Comparison of Four Algorithms}, booktitle = {Proc.\ VSTTE}, pages = {181--198}, year = {2016}, series = {LNCS~9971}, publisher = {Springer}, doi = {10.1007/978-3-319-48869-1_14}, sha256 = {}, url = {https://www.sosy-lab.org/research/k-ind-compare/index-vstte.html}, pdf = {https://www.sosy-lab.org/research/pub/2016-VSTTE.SMT-based_Software_Model_Checking_An_Experimental_Comparison_of_Four_Algorithms.pdf}, keyword = {CPAchecker,Software Model Checking}, annote = {An <a href="https://www.sosy-lab.org/research/bib/Year/2018.complete.html#AlgorithmComparison-JAR">extended version</a> of this article appeared in JAR.}, }Additional Infos
An extended version of this article appeared in JAR. -
JavaSMT: A Unified Interface for SMT Solvers in Java.
In Proc. VSTTE,
LNCS 9971,
pages 139-148,
2016.
Springer.
doi:10.1007/978-3-319-48869-1_11
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{VSTTE16a-JavaSMT, author = {Egor George Karpenkov and Karlheinz Friedberger and Dirk Beyer}, title = {{{\sc JavaSMT}}: {A} Unified Interface for {SMT} Solvers in {Java}}, booktitle = {Proc.\ VSTTE}, pages = {139--148}, year = {2016}, series = {LNCS~9971}, publisher = {Springer}, doi = {10.1007/978-3-319-48869-1_11}, sha256 = {}, url = {https://github.com/sosy-lab/java-smt/}, pdf = {https://www.sosy-lab.org/research/pub/2016-VSTTE.JavaSMT_A_Unified_Interface_For_SMT_Solvers_in_Java.pdf}, } -
Verification Witnesses.
In J. Knoop and
U. Zdun, editors,
Tagungsband Software Engineering 2016, Fachtagung des GI-Fachbereichs Softwaretechnik
(23.-26. Februar 2016, Wien, Österreich),
LNI 252,
pages 105-106,
2016.
Gesellschaft für Informatik (GI).
Keyword(s):
CPAchecker,
Software Model Checking,
Witness-Based Validation
Publisher's Version
BibTeX Entry
@inproceedings{SE16b-VerificationWitnesses, author = {Dirk Beyer and Matthias Dangl and Daniel Dietsch and Matthias Heizmann and Andreas Stahlbauer}, title = {Verification Witnesses}, booktitle = {Tagungsband Software Engineering 2016, Fachtagung des GI-Fachbereichs Softwaretechnik (23.-26. Februar 2016, Wien, {\"O}sterreich)}, editor = {J.~Knoop and U.~Zdun}, pages = {105-106}, year = {2016}, series = {{LNI}~252}, publisher = {Gesellschaft f{\"{u}}r Informatik ({GI})}, url = {}, keyword = {CPAchecker,Software Model Checking,Witness-Based Validation}, annote = {This is a summary of a <a href="https://www.sosy-lab.org/research/bib/Year/2015.html#FSE15">full article on this topic</a> that appeared in Proc. ESEC/FSE 2015.}, doinone = {DOI not available}, urlpub = {https://dl.gi.de/handle/20.500.12116/746}, }Additional Infos
This is a summary of a full article on this topic that appeared in Proc. ESEC/FSE 2015. -
On Facilitating Reuse in Multi-goal Test-Suite Generation for Software Product Lines.
In J. Knoop and
U. Zdun, editors,
Tagungsband Software Engineering 2016, Fachtagung des GI-Fachbereichs Softwaretechnik
(23.-26. Februar 2016, Wien, Österreich),
LNI 252,
pages 81-82,
2016.
Gesellschaft für Informatik (GI).
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
BibTeX Entry
@inproceedings{SE16a-Test-SPL, author = {Malte Lochau and Johannes B{\"u}rdek and Stefan Bauregger and Andreas Holzer and Alexander von Rhein and Sven Apel and Dirk Beyer}, title = {On Facilitating Reuse in Multi-goal Test-Suite Generation for Software Product Lines}, booktitle = {Tagungsband Software Engineering 2016, Fachtagung des GI-Fachbereichs Softwaretechnik (23.-26. Februar 2016, Wien, {\"O}sterreich)}, editor = {J.~Knoop and U.~Zdun}, pages = {81-82}, year = {2016}, series = {{LNI}~252}, publisher = {Gesellschaft f{\"{u}}r Informatik ({GI})}, url = {}, keyword = {CPAchecker,Software Model Checking}, annote = {This is a summary of a <a href="https://www.sosy-lab.org/research/bib/Year/2015.html#FASE15">full article on this topic</a> that appeared in Proc. FASE 2015.}, doinone = {DOI not available}, urlpub = {https://dl.gi.de/handle/20.500.12116/733}, }Additional Infos
This is a summary of a full article on this topic that appeared in Proc. FASE 2015. -
A relational encoding for a clash-free subset of ASMs.
In Proc. of Alloy, ASM, B, TLA, VDM, and Z (ABZ),
LNCS 9675,
pages 237-243,
2016.
Springer.
PDF
BibTeX Entry
@inproceedings{ernst:abz2016, author = {Gerhard Schellhorn and Gidon Ernst and Jörg Pfähler and Wolfgang Reif}, title = {{A relational encoding for a clash-free subset of ASMs}}, booktitle = {Proc. of Alloy, ASM, B, TLA, VDM, and Z (ABZ)}, volume = {9675}, pages = {237--243}, year = {2016}, series = {LNCS}, publisher = {Springer}, pdf = {https://www.sosy-lab.org/research/pub/2016-ABZ.A_Relational_Encoding_for_a_Clash-Free_Subset_of_ASMs.pdf}, } -
Program Analysis with Local Policy Iteration.
In Proceedings of the 17th International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI 2016, St. Petersburg, FL, USA, January 17-19),
LNCS 9583,
pages 127-146,
2016.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-662-49122-5_6
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
We present local policy iteration (LPI), a new algorithm for deriving numerical invariants that combines the precision of max-policy iteration with the flexibility and scalability of conventional Kleene iterations. It is defined in the Configurable Program Analysis (CPA) framework, thus allowing inter-analysis communication. LPI uses adjustable-block encoding in order to traverse loop-free program sections, possibly containing branching, without introducing extra abstraction. Our technique operates over any template linear constraint domain, including the interval and octagon domains; templates can also be derived from the program source. The implementation is evaluated on a set of benchmarks from the International Competition on Software Verification (SV-COMP). It competes favorably with state-of-the-art analyzers.BibTeX Entry
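For contrast with LPI, the following toy sketch shows the conventional Kleene-style iteration over a single interval template (x <= d) for the loop "x = 0; while (x < 10) x += 1"; the abstract-post function is hand-written for this one loop and serves only to illustrate the template-domain setting that LPI operates in, not the policy-iteration algorithm itself.

    # Toy baseline for contrast: conventional Kleene iteration computing the least
    # bound d for the template x <= d over the loop "x = 0; while (x < 10) x += 1".
    # LPI would derive such template bounds by (local) policy iteration instead.

    def kleene_template_bound(init, guard_bound, step, max_iterations=1000):
        bound = init                        # current value of d in the template x <= d
        for _ in range(max_iterations):
            # Abstract post: from x <= bound, one loop iteration can reach
            # x <= min(bound, guard_bound - 1) + step; join with the old bound.
            new_bound = max(bound, min(bound, guard_bound - 1) + step)
            if new_bound == bound:
                return bound                # fixpoint: x <= bound is inductive
            bound = new_bound
        return float("inf")                 # give up (a widening would be used here)

    if __name__ == "__main__":
        print(kleene_template_bound(init=0, guard_bound=10, step=1))   # -> 10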
@inproceedings{LPI, author = {Egor George Karpenkov and David Monniaux and Philipp Wendler}, title = {Program Analysis with Local Policy Iteration}, booktitle = {Proceedings of the 17th International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI~2016, St.~Petersburg, FL, USA, January 17-19)}, pages = {127--146}, year = {2016}, series = {LNCS~9583}, publisher = {Springer-Verlag, Heidelberg}, doi = {10.1007/978-3-662-49122-5_6}, sha256 = {}, url = {http://lpi.metaworld.me}, pdf = {https://arxiv.org/pdf/1509.03424}, abstract = {We present local policy iteration (LPI), a new algorithm for deriving numerical invariants that combines the precision of max-policy iteration with the flexibility and scalability of conventional Kleene iterations. It is defined in the Configurable Program Analysis (CPA) framework, thus allowing inter-analysis communication. LPI uses adjustable-block encoding in order to traverse loop-free program sections, possibly containing branching, without introducing extra abstraction. Our technique operates over any template linear constraint domain, including the interval and octagon domains; templates can also be derived from the program source. The implementation is evaluated on a set of benchmarks from the International Competition on Software Verification (SV-COMP). It competes favorably with state-of-the-art analyzers.}, keyword = {CPAchecker,Software Model Checking}, } -
CPA-BAM: Block-Abstraction Memoization with Value Analysis and Predicate Analysis (Competition Contribution).
In Marsha Chechik and
Jean-François Raskin, editors,
Proceedings of the 22nd International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2016, Eindhoven, The Netherlands, April 2-8),
LNCS 9636,
pages 912-915,
2016.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-662-49674-9_58
Keyword(s):
CPAchecker,
Competition on Software Verification (SV-COMP),
Software Model Checking
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{CPABAM-COMP16, author = {Karlheinz Friedberger}, title = {{CPA-BAM}: Block-Abstraction Memoization with Value Analysis and Predicate Analysis (Competition Contribution)}, booktitle = {Proceedings of the 22nd International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2016, Eindhoven, The Netherlands, April 2-8)}, editor = {Marsha Chechik and Jean{-}Fran{\c{c}}ois Raskin}, pages = {912--915}, year = {2016}, series = {LNCS~9636}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-662-49673-2}, doi = {10.1007/978-3-662-49674-9_58}, sha256 = {}, url = {https://doi.org/10.1007/978-3-662-49674-9_58}, keyword = {CPAchecker,Competition on Software Verification (SV-COMP),Software Model Checking}, } -
CPA-RefSel: CPAchecker with Refinement Selection (Competition Contribution).
In Marsha Chechik and
Jean-François Raskin, editors,
Proceedings of the 22nd International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2016, Eindhoven, The Netherlands, April 2-8),
LNCS 9636,
pages 916-919,
2016.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-662-49674-9_59
Keyword(s):
CPAchecker,
Competition on Software Verification (SV-COMP),
Software Model Checking
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{CPAREFSEL-COMP16, author = {Stefan L{\"{o}}we}, title = {{CPA-RefSel}: {{\sc CPAchecker}} with Refinement Selection (Competition Contribution)}, booktitle = {Proceedings of the 22nd International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2016, Eindhoven, The Netherlands, April 2-8)}, editor = {Marsha Chechik and Jean{-}Fran{\c{c}}ois Raskin}, pages = {916--919}, year = {2016}, series = {LNCS~9636}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-662-49673-2}, doi = {10.1007/978-3-662-49674-9_59}, sha256 = {}, url = {https://doi.org/10.1007/978-3-662-49674-9_59}, keyword = {CPAchecker,Competition on Software Verification (SV-COMP),Software Model Checking}, annote = {Won category DeviceDriversLinux64 in <span style="white-space: nowrap"><a href="https://sv-comp.sosy-lab.org/2016/">SV-COMP'16</a></span>}, }Additional Infos
Won category DeviceDriversLinux64 in SV-COMP'16
2015
-
Witness Validation and Stepwise Testification across Software Verifiers.
In E. Di Nitto,
M. Harman, and
P. Heymans, editors,
Proceedings of the 2015 10th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on
Foundations of Software Engineering (ESEC/FSE 2015, Bergamo, Italy, August 31 - September 4),
pages 721-733,
2015.
ACM, New York.
doi:10.1145/2786805.2786867
Keyword(s):
CPAchecker,
Ultimate,
Software Model Checking,
Witness-Based Validation,
Witness-Based Validation (main)
Publisher's Version
PDF
BibTeX Entry
@inproceedings{FSE15, author = {Dirk Beyer and Matthias Dangl and Daniel Dietsch and Matthias Heizmann and Andreas Stahlbauer}, title = {Witness Validation and Stepwise Testification across Software Verifiers}, booktitle = {Proceedings of the 2015 10th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on Foundations of Software Engineering (ESEC/FSE 2015, Bergamo, Italy, August 31 - September 4)}, editor = {E.~Di~Nitto and M.~Harman and P.~Heymans}, pages = {721-733}, year = {2015}, publisher = {ACM, New York}, isbn = {978-1-4503-3675-8}, doi = {10.1145/2786805.2786867}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2015-FSE.Witness_Validation_and_Stepwise_Testification_across_Software_Verifiers.pdf}, keyword = {CPAchecker,Ultimate,Software Model Checking,Witness-Based Validation,Witness-Based Validation (main)}, } -
Refinement Selection.
In B. Fischer and
J. Geldenhuys, editors,
Proceedings of the 22nd International Symposium on
Model Checking of Software (SPIN 2015, Stellenbosch, South Africa, August 24-26),
LNCS 9232,
pages 20-38,
2015.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-319-23404-5_3
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
Counterexample-guided abstraction refinement is a property-directed approach for the automatic construction of an abstract model for a given system. The approach learns information from infeasible error paths in order to refine the abstract model. We address the problem of selecting which information to learn from a given infeasible error path. In previous work, we presented a method that enables refinement selection by extracting a set of sliced prefixes from a given infeasible error path, each of which represents a different reason for infeasibility of the error path and thus, a possible way to refine the abstract model. In this work, we (1) define and investigate several promising heuristics for selecting an appropriate precision for refinement, and (2) propose a new combination of a value analysis and a predicate analysis that does not only find out which information to learn from an infeasible error path, but automatically decides which analysis should be preferred for a refinement. These contributions allow a more systematic refinement strategy for CEGAR-based analyses. We evaluated the idea on software verification. We provide an implementation of the new concepts in the verification framework CPAchecker and make it publicly available. In a thorough experimental study, we show that refinement selection often avoids state-space explosion where existing approaches diverge, and that it can be even more powerful if applied on a higher level, where it decides which analysis of a combination should be favored for a refinement.BibTeX Entry
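A small sketch of the selection step described in this abstract: several candidate refinements, each obtained from one infeasible sliced prefix, are scored by a heuristic and the best one is chosen. The two heuristics shown are illustrative stand-ins, not the exact heuristics evaluated in the paper.

    # Hypothetical sketch of refinement selection: given several candidate
    # precisions (one per infeasible sliced prefix), score each with a heuristic
    # and pick the most promising one. The scoring functions are illustrative
    # assumptions.

    def select_refinement(candidates, heuristic):
        return min(candidates, key=heuristic)

    # Example heuristic: prefer refinements that track few facts (cheap precision).
    def precision_size(candidate):
        return len(candidate["facts"])

    # Example heuristic: prefer refinements whose facts are needed deep in the
    # program (close to the error location), which often keeps them local.
    def shallowness(candidate):
        return -candidate["pivot_depth"]

    if __name__ == "__main__":
        candidates = [
            {"facts": {"x == 0", "y > x"}, "pivot_depth": 3},
            {"facts": {"flag != 0"},       "pivot_depth": 12},
        ]
        print(select_refinement(candidates, precision_size))   # picks the 1-fact candidate
        print(select_refinement(candidates, shallowness))      # picks the deepest pivot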
@inproceedings{SPIN15b, author = {Dirk Beyer and Stefan L{\"o}we and Philipp Wendler}, title = {Refinement Selection}, booktitle = {Proceedings of the 22nd International Symposium on Model Checking of Software (SPIN~2015, Stellenbosch, South Africa, August 24-26)}, editor = {B.~Fischer and J.~Geldenhuys}, pages = {20-38}, year = {2015}, series = {LNCS~9232}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-319-23403-8}, doi = {10.1007/978-3-319-23404-5_3}, url = {https://www.sosy-lab.org/research/cpa-ref-sel/}, pdf = {https://www.sosy-lab.org/research/pub/2015-SPIN.Refinement_Selection.pdf}, abstract = {Counterexample-guided abstraction refinement is a property-directed approach for the automatic construction of an abstract model for a given system. The approach learns information from infeasible error paths in order to refine the abstract model. We address the problem of selecting which information to learn from a given infeasible error path. In previous work, we presented a method that enables refinement selection by extracting a set of sliced prefixes from a given infeasible error path, each of which represents a different reason for infeasibility of the error path and thus, a possible way to refine the abstract model. In this work, we (1) define and investigate several promising heuristics for selecting an appropriate precision for refinement, and (2) propose a new combination of a value analysis and a predicate analysis that does not only find out which information to learn from an infeasible error path, but automatically decides which analysis should be preferred for a refinement. These contributions allow a more systematic refinement strategy for CEGAR-based analyses. We evaluated the idea on software verification. We provide an implementation of the new concepts in the verification framework CPAchecker and make it publicly available. In a thorough experimental study, we show that refinement selection often avoids state-space explosion where existing approaches diverge, and that it can be even more powerful if applied on a higher level, where it decides which analysis of a combination should be favored for a refinement.}, keyword = {CPAchecker,Software Model Checking}, } -
Benchmarking and Resource Measurement.
In B. Fischer and
J. Geldenhuys, editors,
Proceedings of the 22nd International Symposium on
Model Checking of Software (SPIN 2015, Stellenbosch, South Africa, August 24-26),
LNCS 9232,
pages 160-178,
2015.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-319-23404-5_12
Keyword(s):
Benchmarking
Publisher's Version
PDF
Supplement
Abstract
Proper benchmarking and resource measurement is an important topic, because benchmarking is a widely-used method for the comparative evaluation of tools and algorithms in many research areas. It is essential for researchers, tool developers, and users, as well as for competitions. We formulate a set of requirements that are indispensable for reproducible benchmarking and reliable resource measurement of automatic solvers, verifiers, and similar tools, and discuss limitations of existing methods and benchmarking tools. Fulfilling these requirements in a benchmarking framework is complex and can (on Linux) currently only be done by using the cgroups feature of the kernel. We provide BenchExec, a ready-to-use, tool-independent, and free implementation of a benchmarking framework that fulfills all presented requirements, making reproducible benchmarking and reliable resource measurement easy. Our framework is able to work with a wide range of different tools and has proven its reliability and usefulness in the International Competition on Software Verification.BibTeX Entry
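To make the cgroups-based measurement concrete, here is a small sketch that reads CPU-time and peak-memory accounting from the cgroups-v1 controller files; it assumes a v1 hierarchy mounted under /sys/fs/cgroup and an already created cgroup for the benchmarked run, and it is only an illustration, not a replacement for BenchExec.

    # Minimal sketch (not BenchExec): read resource accounting for one cgroup
    # from the cgroups-v1 controllers. Assumes a v1 hierarchy mounted under
    # /sys/fs/cgroup and an existing cgroup named "bench/run1" in the cpuacct
    # and memory controllers; adjust the paths for other setups.

    from pathlib import Path

    CGROUP_ROOT = Path("/sys/fs/cgroup")

    def cpu_time_seconds(group):
        # cpuacct.usage holds the accumulated CPU time of all processes in the
        # cgroup, in nanoseconds (cgroups v1).
        usage_ns = int((CGROUP_ROOT / "cpuacct" / group / "cpuacct.usage").read_text())
        return usage_ns / 1e9

    def peak_memory_bytes(group):
        # memory.max_usage_in_bytes holds the peak memory footprint of the
        # whole cgroup, covering all of its processes (cgroups v1).
        return int((CGROUP_ROOT / "memory" / group / "memory.max_usage_in_bytes").read_text())

    if __name__ == "__main__":
        group = "bench/run1"   # hypothetical cgroup created for one tool run
        print("cpu time [s]:", cpu_time_seconds(group))
        print("peak mem [B]:", peak_memory_bytes(group))

Measuring per cgroup rather than per process is what makes the numbers reliable for tools that spawn subprocesses, which is one of the requirements discussed in the paper.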
@inproceedings{SPIN15a, author = {Dirk Beyer and Stefan L{\"o}we and Philipp Wendler}, title = {Benchmarking and Resource Measurement}, booktitle = {Proceedings of the 22nd International Symposium on Model Checking of Software (SPIN~2015, Stellenbosch, South Africa, August 24-26)}, editor = {B.~Fischer and J.~Geldenhuys}, pages = {160-178}, year = {2015}, series = {LNCS~9232}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-319-23403-8}, doi = {10.1007/978-3-319-23404-5_12}, url = {https://www.sosy-lab.org/research/benchmarking/}, pdf = {https://www.sosy-lab.org/research/pub/2015-SPIN.Benchmarking_and_Resource_Measurement.pdf}, abstract = {Proper benchmarking and resource measurement is an important topic, because benchmarking is a widely-used method for the comparative evaluation of tools and algorithms in many research areas. It is essential for researchers, tool developers, and users, as well as for competitions. We formulate a set of requirements that are indispensable for reproducible benchmarking and reliable resource measurement of automatic solvers, verifiers, and similar tools, and discuss limitations of existing methods and benchmarking tools. Fulfilling these requirements in a benchmarking framework is complex and can (on Linux) currently only be done by using the cgroups feature of the kernel. We provide BenchExec, a ready-to-use, tool-independent, and free implementation of a benchmarking framework that fulfills all presented requirements, making reproducible benchmarking and reliable resource measurement easy. Our framework is able to work with a wide range of different tools and has proven its reliability and usefulness in the International Competition on Software Verification.}, keyword = {Benchmarking}, annote = {An <a href="https://www.sosy-lab.org/research/bib/Year/2017.complete.html#Benchmarking-STTT">extended version</a> of this article appeared in STTT.}, }Additional Infos
An extended version of this article appeared in STTT. -
Boosting k-Induction with Continuously-Refined Invariants.
In D. Kröning and
C. S. Pasareanu, editors,
Proceedings of the 27th International Conference on
Computer Aided Verification (CAV 2015, San Francisco, CA, USA, July 18-24),
LNCS 9206,
pages 622-640,
2015.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-319-21690-4_42
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
k-Induction is a promising technique to extend bounded model checking from falsification to verification. In software verification, k-induction works only if auxiliary invariants are used to strengthen the induction hypothesis. The problem that we address is to generate such invariants (1) automatically without user-interaction, (2) efficiently such that little verification time is spent on the invariant generation, and (3) that are sufficiently strong for a k-induction proof. We boost the k-induction approach to significantly increase effectiveness and efficiency in the following way: We start in parallel to k-induction a data-flow-based invariant generator that supports dynamic precision adjustment and refine the precision of the invariant generator continuously during the analysis, such that the invariants become increasingly stronger. The k-induction engine is extended such that the invariants from the invariant generator are injected in each iteration to strengthen the hypothesis. The new method solves the above-mentioned problem because it (1) automatically chooses an invariant by step-wise refinement, (2) starts always with a lightweight invariant generation that is computationally inexpensive, and (3) refines the invariant precision more and more to inject stronger and stronger invariants into the induction system. We present and evaluate an implementation of our approach, as well as all other existing approaches, in the open-source verification-framework CPAchecker. Our experiments show that combining k-induction with continuously-refined invariants significantly increases effectiveness and efficiency, and outperforms all existing implementations of k-induction-based verification of C programs in terms of successful results.BibTeX Entry
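A sketch of the k-induction loop with continuously refined auxiliary invariants injected into the step case, as outlined in this abstract; the three callbacks and the toy example are assumptions for illustration and do not mirror the CPAchecker implementation.

    # Sketch of k-induction strengthened by externally supplied invariants.
    # base_case_holds(k): no error reachable within k unrollings.
    # step_case_holds(k, invariant): k error-free steps plus the invariant
    #     imply another error-free step.
    # current_invariant(): latest (continuously refined) auxiliary invariant.
    # All three callbacks and their signatures are illustrative assumptions.

    def k_induction(base_case_holds, step_case_holds, current_invariant, max_k=50):
        for k in range(1, max_k + 1):
            if not base_case_holds(k):
                return ("UNSAFE", k)             # bug found within k steps
            if step_case_holds(k, current_invariant()):
                return ("SAFE", k)               # induction succeeded
            # Otherwise increase k; meanwhile the invariant generator keeps
            # refining its precision, so current_invariant() may get stronger.
        return ("UNKNOWN", max_k)

    if __name__ == "__main__":
        # Toy: the property is inductive only together with the auxiliary
        # invariant, which the (simulated) generator delivers from round 3 on.
        refinement_round = {"n": 0}
        def current_invariant():
            refinement_round["n"] += 1
            return "x % 2 == 0" if refinement_round["n"] >= 3 else "true"
        base = lambda k: True
        step = lambda k, inv: inv != "true"
        print(k_induction(base, step, current_invariant))   # -> ('SAFE', 3)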
@inproceedings{CAV15, author = {Dirk Beyer and Matthias Dangl and Philipp Wendler}, title = {Boosting k-Induction with Continuously-Refined Invariants}, booktitle = {Proceedings of the 27th International Conference on Computer Aided Verification (CAV~2015, San Francisco, CA, USA, July 18-24)}, editor = {D.~Kr{\"o}ning and C.~S.~Pasareanu}, pages = {622-640}, year = {2015}, series = {LNCS~9206}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-319-21689-8}, doi = {10.1007/978-3-319-21690-4_42}, sha256 = {beb169351523c85e417e028c4e32b47c2c29e5db2e7b29ef8f5a2230e9562216}, url = {https://www.sosy-lab.org/research/cpa-k-induction/}, abstract = {k-Induction is a promising technique to extend bounded model checking from falsification to verification. In software verification, k-induction works only if auxiliary invariants are used to strengthen the induction hypothesis. The problem that we address is to generate such invariants (1) automatically without user-interaction, (2) efficiently such that little verification time is spent on the invariant generation, and (3) that are sufficiently strong for a k-induction proof. We boost the k-induction approach to significantly increase effectiveness and efficiency in the following way: We start in parallel to k-induction a data-flow-based invariant generator that supports dynamic precision adjustment and refine the precision of the invariant generator continuously during the analysis, such that the invariants become increasingly stronger. The k-induction engine is extended such that the invariants from the invariant generator are injected in each iteration to strengthen the hypothesis. The new method solves the above-mentioned problem because it (1) automatically chooses an invariant by step-wise refinement, (2) starts always with a lightweight invariant generation that is computationally inexpensive, and (3) refines the invariant precision more and more to inject stronger and stronger invariants into the induction system. We present and evaluate an implementation of our approach, as well as all other existing approaches, in the open-source verification-framework CPAchecker. Our experiments show that combining k-induction with continuously-refined invariants significantly increases effectiveness and efficiency, and outperforms all existing implementations of k-induction-based verification of C programs in terms of successful results.}, keyword = {CPAchecker,Software Model Checking}, } -
Sliced Path Prefixes: An Effective Method to Enable Refinement Selection.
In S. Graf and
M. Viswanathan, editors,
Proceedings of the 35th IFIP WG 6.1 International Conference on
Formal Techniques for Distributed Objects, Components, and Systems (FORTE 2015, Grenoble, France, June 2-4),
LNCS 9039,
pages 228-243,
2015.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-319-19195-9_15
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
Automatic software verification relies on constructing, for a given program, an abstract model that is (1) abstract enough to avoid state-space explosion and (2) precise enough to reason about the specification. Counterexample-guided abstraction refinement is a standard technique that suggests to extract information from infeasible error paths, in order to refine the abstract model if it is too imprecise. Existing approaches (including our previous work) do not choose the refinement for a given path systematically. We present a method that generates alternative refinements and allows to systematically choose a suited one. The method takes as input one given infeasible error path and applies a slicing technique to obtain a set of new error paths that are more abstract than the original error path but still infeasible, each for a different reason. The (more abstract) constraints of the new paths can be passed to a standard refinement procedure, in order to obtain a set of possible refinements, one for each new path. Our technique is completely independent from the abstract domain that is used in the program analysis, and does not rely on a certain proof technique, such as SMT solving. We implemented the new algorithm in the verification framework CPAchecker and made our extension publicly available. The experimental evaluation of our technique indicates that there is a wide range of possibilities on how to refine the abstract model for a given error path, and we demonstrate that the choice of which refinement to apply to the abstract model has a significant impact on the verification effectiveness and efficiency.BibTeX Entry
@inproceedings{FORTE15, author = {Dirk Beyer and Stefan L{\"o}we and Philipp Wendler}, title = {Sliced Path Prefixes: An Effective Method to Enable Refinement Selection}, booktitle = {Proceedings of the 35th IFIP WG 6.1 International Conference on Formal Techniques for Distributed Objects, Components, and Systems (FORTE~2015, Grenoble, France, June 2-4)}, editor = {S.~Graf and M.~Viswanathan}, pages = {228-243}, year = {2015}, series = {LNCS~9039}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-319-19194-2}, doi = {10.1007/978-3-319-19195-9_15}, sha256 = {96e16841eb13a602455334a71a516f509ad1b1e2328edade3d5954062b387e7d}, url = {https://www.sosy-lab.org/research/cpa-ref-sel/#FORTE15}, abstract = {Automatic software verification relies on constructing, for a given program, an abstract model that is (1) abstract enough to avoid state-space explosion and (2) precise enough to reason about the specification. Counterexample-guided abstraction refinement is a standard technique that suggests to extract information from infeasible error paths, in order to refine the abstract model if it is too imprecise. Existing approaches ---including our previous work--- do not choose the refinement for a given path systematically. We present a method that generates alternative refinements and allows to systematically choose a suited one. The method takes as input one given infeasible error path and applies a slicing technique to obtain a set of new error paths that are more abstract than the original error path but still infeasible, each for a different reason. The (more abstract) constraints of the new paths can be passed to a standard refinement procedure, in order to obtain a set of possible refinements, one for each new path. Our technique is completely independent from the abstract domain that is used in the program analysis, and does not rely on a certain proof technique, such as SMT solving. We implemented the new algorithm in the verification framework CPAchecker and made our extension publicly available. The experimental evaluation of our technique indicates that there is a wide range of possibilities on how to refine the abstract model for a given error path, and we demonstrate that the choice of which refinement to apply to the abstract model has a significant impact on the verification effectiveness and efficiency.}, keyword = {CPAchecker,Software Model Checking}, } -
Presence-Condition Simplification in Highly Configurable Systems.
In A. Bertolino,
G. Canfora, and
S. Elbaum, editors,
Proceedings of the 37th International Conference on
Software Engineering (ICSE 2015, Florence, Italy, May 16-24),
pages 178-188,
2015.
IEEE.
doi:10.1109/ICSE.2015.39
Keyword(s):
Software Model Checking
Publisher's Version
PDF
BibTeX Entry
@inproceedings{ICSE15, author = {Alexander von Rhein and Alexander Grebhahn and Sven Apel and Norbert Siegmund and Dirk Beyer and Thorsten Berger}, title = {Presence-Condition Simplification in Highly Configurable Systems}, booktitle = {Proceedings of the 37th International Conference on Software Engineering (ICSE~2015, Florence, Italy, May 16-24)}, editor = {A.~Bertolino and G.~Canfora and S.~Elbaum}, pages = {178-188}, year = {2015}, publisher = {IEEE}, isbn = {978-1-4799-1934-5}, doi = {10.1109/ICSE.2015.39}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2015-ICSE.Presence-Condition_Simplification_in_Highly_Configurable_Systems.pdf}, keyword = {Software Model Checking}, } -
Facilitating Reuse in Multi-Goal Test-Suite Generation for Software Product Lines.
In A. Egyed and
I. Schaefer, editors,
Proceedings of the 18th International Conference on
Fundamental Approaches to Software Engineering (FASE 2015, London, UK, April 13-15),
LNCS 9033,
pages 84-99,
2015.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-662-46675-9_6
Keyword(s):
CPAchecker,
Software Model Checking,
Software Testing
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{FASE15, author = {Johannes B{\"u}rdek and Malte Lochau and Stefan Bauregger and Andreas Holzer and Alexander von Rhein and Sven Apel and Dirk Beyer}, title = {Facilitating Reuse in Multi-Goal Test-Suite Generation for Software Product Lines}, booktitle = {Proceedings of the 18th International Conference on Fundamental Approaches to Software Engineering (FASE~2015, London, UK, April 13-15)}, editor = {A.~Egyed and I.~Schaefer}, pages = {84-99}, year = {2015}, series = {LNCS~9033}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-662-46674-2}, doi = {10.1007/978-3-662-46675-9_6}, sha256 = {fcd4d2f3155e3e061318a444f578c41c5e224a7c76e1bf161fe55cc7ae01ae86}, url = {http://forsyte.at/software/cpatiger/}, keyword = {CPAchecker,Software Model Checking,Software Testing}, } -
Software Verification and Verifiable Witnesses (Report on SV-COMP 2015).
In C. Baier and
C. Tinelli, editors,
Proceedings of the 21st International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2015, London, UK, April 13-17),
LNCS 9035,
pages 401-416,
2015.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-662-46681-0_31
Keyword(s):
Competition on Software Verification (SV-COMP),
Competition on Software Verification (SV-COMP Report),
Software Model Checking,
Witness-Based Validation
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{TACAS15, author = {Dirk Beyer}, title = {Software Verification and Verifiable Witnesses (Report on {SV-COMP} 2015)}, booktitle = {Proceedings of the 21st International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2015, London, UK, April 13-17)}, editor = {C.~Baier and C.~Tinelli}, pages = {401-416}, year = {2015}, series = {LNCS~9035}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-662-46680-3}, doi = {10.1007/978-3-662-46681-0_31}, sha256 = {858448ee22256b3ed7f35603d81e942b58652f3b4d2660a22b858dc1c3ac16d0}, url = {https://sv-comp.sosy-lab.org/2015/}, keyword = {Competition on Software Verification (SV-COMP),Competition on Software Verification (SV-COMP Report),Software Model Checking,Witness-Based Validation}, } -
Interpolation for Value Analysis.
In U. Aßmann,
B. Demuth,
T. Spitta,
G. Püschel, and
R. Kaiser, editors,
Tagungsband Software Engineering 2015, Fachtagung des GI-Fachbereichs Softwaretechnik
(17. März - 20. März 2015, Dresden, Deutschland),
LNI 239,
pages 73-74,
2015.
Gesellschaft für Informatik (GI).
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
BibTeX Entry
@inproceedings{SE15-ExplicitCEGAR, author = {Dirk Beyer and Stefan L{\"{o}}we}, title = {Interpolation for Value Analysis}, booktitle = {Tagungsband Software Engineering 2015, Fachtagung des GI-Fachbereichs Softwaretechnik (17. M{\"{a}}rz - 20. M{\"{a}}rz 2015, Dresden, Deutschland)}, editor = {U.~A{\ss}mann and B.~Demuth and T.~Spitta and G.~P{\"{u}}schel and R.~Kaiser}, pages = {73-74}, year = {2015}, series = {{LNI}~239}, publisher = {Gesellschaft f{\"{u}}r Informatik ({GI})}, url = {}, keyword = {CPAchecker,Software Model Checking}, annote = {This is a summary of a <a href="https://www.sosy-lab.org/research/bib/Year/2013.html#FASE13">full article on this topic</a> that appeared in Proc. FASE 2013.}, doinone = {DOI not available}, urlpub = {https://dl.gi.de/handle/20.500.12116/2495}, }Additional Infos
This is a summary of a full article on this topic that appeared in Proc. FASE 2013. -
Conditional effects in fine-grained region logic.
In Proc. of Formal Techniques for Java-like Programs (FTfJP),
2015.
ACM.
BibTeX Entry
@inproceedings{ernst:ftfjp2015, author = {Yuyan Bao and Gary Leavens and Gidon Ernst}, title = {{Conditional effects in fine-grained region logic}}, booktitle = {Proc. of Formal Techniques for Java-like Programs (FTfJP)}, year = {2015}, publisher = {ACM}, } -
Inside a verified Flash file system: transactions & garbage collection.
In Proc. of Verified Software: Theories, Tools, Experiments (VSTTE),
LNCS,
pages 73-93,
2015.
Springer.
PDF
BibTeX Entry
@inproceedings{ernst:vstte2015, author = {Gidon Ernst and Jörg Pfähler and Gerhard Schellhorn and Wolfgang Reif}, title = {{Inside a verified Flash file system: transactions \& garbage collection}}, booktitle = {Proc. of Verified Software: Theories, Tools, Experiments (VSTTE)}, volume = {9593}, pages = {73--93}, year = {2015}, series = {LNCS}, publisher = {Springer}, pdf = {https://www.sosy-lab.org/research/pub/2015-VSTTE.Inside_a_Verified_Flash_File_System.pdf}, } -
CPAchecker with Support for Recursive Programs and Floating-Point Arithmetic (Competition Contribution).
In C. Baier and
C. Tinelli, editors,
Proceedings of the 21st International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2015, London, UK, April 13-17),
LNCS 9035,
pages 423-425,
2015.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-662-46681-0_34
Keyword(s):
CPAchecker,
Competition on Software Verification (SV-COMP),
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
We submit to SV-COMP'15 the software-verification framework CPAchecker. The submitted configuration is a combination of seven different analyses, based on explicit-value analysis, k-induction, predicate analysis, and concrete memory graphs. These analyses use concepts such as CEGAR, lazy abstraction, interpolation, adjustable-block encoding, bounded model checking, invariant generation, and block-abstraction memoization. Found counterexamples are cross-checked by a bit-precise analysis. The combination of several different analyses copes well with the diversity of the verification tasks in SV-COMP.BibTeX Entry
@inproceedings{CPACHECKER-COMP15, author = {Matthias Dangl and Stefan L{\"{o}}we and Philipp Wendler}, title = {{{\sc CPAchecker}} with Support for Recursive Programs and Floating-Point Arithmetic (Competition Contribution)}, booktitle = {Proceedings of the 21st International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2015, London, UK, April 13-17)}, editor = {C.~Baier and C.~Tinelli}, pages = {423--425}, year = {2015}, series = {LNCS~9035}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-662-46680-3}, doi = {10.1007/978-3-662-46681-0_34}, sha256 = {}, url = {https://doi.org/10.1007/978-3-662-46681-0_34}, pdf = {https://www.sosy-lab.org/research/pub/2015-TACAS.CPAchecker_with_Support_for_Recursive_Programs_and_Floating-Point_Arithmetic.pdf}, abstract = {We submit to SV-COMP'15 the software-verification framework CPAchecker. The submitted configuration is a combination of seven different analyses, based on explicit-value analysis, k-induction, predicate analysis, and concrete memory graphs. These analyses use concepts such as CEGAR, lazy abstraction, interpolation, adjustable-block encoding, bounded model checking, invariant generation, and block-abstraction memoization. Found counterexamples are cross-checked by a bit-precise analysis. The combination of several different analyses copes well with the diversity of the verification tasks in SV-COMP.}, keyword = {CPAchecker,Competition on Software Verification (SV-COMP),Software Model Checking}, annote = {Won categories ControlFlow, MemorySafety, and Overall, and received three silver and two bronze medals in <span style="white-space: nowrap"><a href="https://sv-comp.sosy-lab.org/2015/">SV-COMP'15</a></span>}, }Additional Infos
Won categories ControlFlow, MemorySafety, and Overall, and received three silver and two bronze medals in SV-COMP'15
2014
-
Software Verification in the Google App-Engine Cloud.
In A. Biere and
R. Bloem, editors,
Proceedings of the 26th International Conference on
Computer-Aided Verification (CAV 2014, Vienna, Austria, July 18-22),
LNCS 8559,
pages 327-333,
2014.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-319-08867-9_21
Keyword(s):
CPAchecker,
Software Model Checking,
Cloud-Based Software Verification
Publisher's Version
PDF
Supplement
Abstract
Software verification often requires a large amount of computing resources. In the last years, cloud services emerged as an inexpensive, flexible, and energy-efficient source of computing power. We have investigated if such cloud resources can be used effectively for verification. We chose the platform-as-a-service offer Google App Engine and ported the open-source verification framework CPAchecker to it. We provide our new verification service as a web front-end to users who wish to solve single verification tasks (tutorial usage), and an API for integrating the service into existing verification infrastructures (massively parallel bulk usage). We experimentally evaluate the effectiveness of this service and show that it can be successfully used to offload verification work to the cloud, considerably sparing local verification resources.BibTeX Entry
@inproceedings{CAV14, author = {Dirk Beyer and Georg Dresler and Philipp Wendler}, title = {Software Verification in the {Google} {App-Engine} Cloud}, booktitle = {Proceedings of the 26th International Conference on Computer-Aided Verification (CAV~2014, Vienna, Austria, July 18-22)}, editor = {A.~Biere and R.~Bloem}, pages = {327-333}, year = {2014}, series = {LNCS~8559}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-319-08866-2}, doi = {10.1007/978-3-319-08867-9_21}, sha256 = {f92060721e703c8553d5420c34f07eea24fe25d36ae9c02217688606e1898704}, url = {http://www.sosy-lab.org/~dbeyer/cpa-appengine}, abstract = {Software verification often requires a large amount of computing resources. In the last years, cloud services emerged as an inexpensive, flexible, and energy-efficient source of computing power. We have investigated if such cloud resources can be used effectively for verification. We chose the platform-as-a-service offer Google App Engine and ported the open-source verification framework CPAchecker to it. We provide our new verification service as a web front-end to users who wish to solve single verification tasks (tutorial usage), and an API for integrating the service into existing verification infrastructures (massively parallel bulk usage). We experimentally evaluate the effectiveness of this service and show that it can be successfully used to offload verification work to the cloud, considerably sparing local verification resources.}, keyword = {CPAchecker,Software Model Checking,Cloud-Based Software Verification}, } -
A Formal Evaluation of DepDegree Based on Weyuker's Properties.
In C. Roy,
A. Begel, and
L. Moonen, editors,
Proceedings of the 22nd International Conference on
Program Comprehension (ICPC 2014, Hyderabad, India, June 2-3),
pages 258-261,
2014.
ACM.
doi:10.1145/2597008.2597794
Keyword(s):
Structural Analysis and Comprehension
Publisher's Version
PDF
Supplement
Abstract
Complexity of source code is an important characteristic that software engineers aim to quantify using static software measurement. Several measures used in practice as indicators for software complexity have theoretical flaws. In order to assess the quality of a software measure, Weyuker established a set of properties that an indicator for program-code complexity should satisfy. It is known that several well-established complexity indicators do not fulfill Weyuker's properties. We show that DepDegree, a measure for data-flow dependencies, satisfies all of Weyuker's properties.BibTeX Entry
@inproceedings{ICPC14, author = {Dirk Beyer and Peter H{\"a}ring}, title = {A Formal Evaluation of {DepDegree} Based on {Weyuker}'s Properties}, booktitle = {Proceedings of the 22nd International Conference on Program Comprehension (ICPC~2014, Hyderabad, India, June 2-3)}, editor = {C.~Roy and A.~Begel and L.~Moonen}, pages = {258-261}, year = {2014}, publisher = {ACM}, isbn = {978-1-4503-2879-1}, doi = {10.1145/2597008.2597794}, url = {http://www.sosy-lab.org/~dbeyer/DepDegreeProperties}, pdf = {https://www.sosy-lab.org/research/pub/2014-ICPC.A_Formal_Evaluation_of_DepDegree_Based_on_Weyukers_Properties.pdf}, abstract = {Complexity of source code is an important characteristic that software engineers aim to quantify using static software measurement. Several measures used in practice as indicators for software complexity have theoretical flaws. In order to assess the quality of a software measure, Weyuker established a set of properties that an indicator for program-code complexity should satisfy. It is known that several well-established complexity indicators do not fulfill Weyuker's properties. We show that DepDegree, a measure for data-flow dependencies, satisfies all of Weyuker's properties.}, keyword = {Structural Analysis and Comprehension}, } -
Status Report on Software Verification (Competition Summary SV-COMP 2014).
In E. Abraham and
K. Havelund, editors,
Proceedings of the 20th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2014, Grenoble, France, April 5-13),
LNCS 8413,
pages 373-388,
2014.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-642-54862-8_25
Keyword(s):
Competition on Software Verification (SV-COMP),
Competition on Software Verification (SV-COMP Report),
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
This report describes the 3rd International Competition on Software Verification (SV-COMP 2014), which is the third edition of a thorough comparative evaluation of fully automatic software verifiers. The reported results represent the state of the art in automatic software verification, in terms of effectiveness and efficiency. The verification tasks of the competition consist of nine categories containing a total of 2868 C programs, covering bit-vector operations, concurrent execution, control-flow and integer data-flow, device-drivers, heap data structures, memory manipulation via pointers, recursive functions, and sequentialized concurrency. The specifications include reachability of program labels and memory safety. The competition is organized as a satellite event at TACAS 2014 in Grenoble, France.BibTeX Entry
@inproceedings{TACAS14, author = {Dirk Beyer}, title = {Status Report on Software Verification (Competition Summary {SV-COMP} 2014)}, booktitle = {Proceedings of the 20th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2014, Grenoble, France, April 5-13)}, editor = {E.~Abraham and K. Havelund}, pages = {373-388}, year = {2014}, series = {LNCS~8413}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-642-54861-1}, doi = {10.1007/978-3-642-54862-8_25}, sha256 = {33d6c82695f46b11a762d5237608b7934f8bf664f5bd36ad3e1722591b398d7c}, url = {https://sv-comp.sosy-lab.org/2014/}, pdf = {https://www.sosy-lab.org/research/pub/2014-TACAS.Status_Report_on_Software_Verification.pdf}, abstract = {This report describes the 3rd International Competition on Software Verification (SV-COMP 2014), which is the third edition of a thorough comparative evaluation of fully automatic software verifiers. The reported results represent the state of the art in automatic software verification, in terms of effectiveness and efficiency. The verification tasks of the competition consist of nine categories containing a total of 2868 C programs, covering bit-vector operations, concurrent execution, control-flow and integer data-flow, device-drivers, heap data structures, memory manipulation via pointers, recursive functions, and sequentialized concurrency. The specifications include reachability of program labels and memory safety. The competition is organized as a satellite event at TACAS 2014 in Grenoble, France.}, keyword = {Competition on Software Verification (SV-COMP),Competition on Software Verification (SV-COMP Report),Software Model Checking}, } -
Evaluation and Reproducibility of Program Analysis (Track Introduction).
In T. Margaria and
B. Steffen, editors,
Proceedings of the 6th International Symposium on
Leveraging Applications of Formal Methods, Verification, and Validation (ISoLA 2014, Corfu, Greece, October 8-11),
LNCS 8803,
pages 479-481,
2014.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-662-45231-8_37
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{ISOLA14-TrackIntro, author = {Markus Schordan and Welf L{\"{o}}we and Dirk Beyer}, title = {Evaluation and Reproducibility of Program Analysis (Track Introduction)}, booktitle = {Proceedings of the 6th International Symposium on Leveraging Applications of Formal Methods, Verification, and Validation (ISoLA~2014, Corfu, Greece, October 8-11)}, editor = {T.~Margaria and B.~Steffen}, pages = {479-481}, year = {2014}, series = {LNCS~8803}, publisher = {Springer-Verlag, Heidelberg}, doi = {10.1007/978-3-662-45231-8_37}, sha256 = {}, url = {https://doi.org/10.1007/978-3-662-45231-8_37}, } -
Reusing Information in Multi-Goal Reachability Analyses.
In W. Hasselbring and
N. C. Ehmke, editors,
Tagungsband Software Engineering 2014, Fachtagung des GI-Fachbereichs Softwaretechnik
(25. Februar - 28. Februar 2014, Kiel, Deutschland),
LNI 227,
pages 97-98,
2014.
Gesellschaft für Informatik (GI).
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
BibTeX Entry
@inproceedings{SE14-MultiGoal, author = {Dirk Beyer and Andreas Holzer and Michael Tautschnig and Helmut Veith}, title = {Reusing Information in Multi-Goal Reachability Analyses}, booktitle = {Tagungsband Software Engineering 2014, Fachtagung des GI-Fachbereichs Softwaretechnik (25. Februar - 28. Februar 2014, Kiel, Deutschland)}, editor = {W.~Hasselbring and N.~C.~Ehmke}, pages = {97--98}, year = {2014}, series = {{LNI}~227}, publisher = {Gesellschaft f{\"{u}}r Informatik ({GI})}, pdf = {https://dl.gi.de/bitstream/handle/20.500.12116/30979/097.pdf?sequence=1&isAllowed=y}, keyword = {CPAchecker,Software Model Checking}, annote = {This is a summary of a <a href="https://www.sosy-lab.org/research/bib/Year/2013.html#ESOP13">full article on this topic</a> that appeared in Proc. ESOP 2013.}, doinone = {DOI not available}, urlpub = {https://dl.gi.de/handle/20.500.12116/30979}, }Additional Infos
This is a summary of a full article on this topic that appeared in Proc. ESOP 2013. -
Precision Reuse in CPAchecker.
In W. Hasselbring and
N. C. Ehmke, editors,
Tagungsband Software Engineering 2014, Fachtagung des GI-Fachbereichs Softwaretechnik
(25. Februar - 28. Februar 2014, Kiel, Deutschland),
LNI 227,
pages 41-42,
2014.
Gesellschaft für Informatik (GI).
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Abstract
Continuous testing during development is a well-established technique for software-quality assurance. Continuous model checking from revision to revision is not yet established as a standard practice, because the enormous resource consumption makes its application impractical. Model checkers compute a large number of verification facts that are necessary for verifying if a given specification holds. We have identified a category of such intermediate results that are easy to store and efficient to reuse: abstraction precisions. The precision of an abstract domain specifies the level of abstraction that the analysis works on. Precisions are thus a precious result of the verification effort and it is a waste of resources to throw them away after each verification run. In particular, precisions are reasonably small and thus easy to store; they are easy to process and have a large impact on resource consumption. We experimentally show the impact of precision reuse on industrial verification problems created from 62 Linux kernel device drivers with 1119 revisions.BibTeX Entry
@inproceedings{SE14-Reuse, author = {Dirk Beyer and Stefan L{\"{o}}we and Evgeny Novikov and Andreas Stahlbauer and Philipp Wendler}, title = {Precision Reuse in CPAchecker}, booktitle = {Tagungsband Software Engineering 2014, Fachtagung des GI-Fachbereichs Softwaretechnik (25. Februar - 28. Februar 2014, Kiel, Deutschland)}, editor = {W.~Hasselbring and N.~C.~Ehmke}, pages = {41--42}, year = {2014}, series = {{LNI}~227}, publisher = {Gesellschaft f{\"{u}}r Informatik ({GI})}, pdf = {https://dl.gi.de/bitstream/handle/20.500.12116/30949/041.pdf?sequence=1&isAllowed=y}, abstract = {Continuous testing during development is a well-established technique for software-quality assurance. Continuous model checking from revision to revision is not yet established as a standard practice, because the enormous resource consumption makes its application impractical. Model checkers compute a large number of verification facts that are necessary for verifying if a given specification holds. We have identified a category of such intermediate results that are easy to store and efficient to reuse: abstraction precisions. The precision of an abstract domain specifies the level of abstraction that the analysis works on. Precisions are thus a precious result of the verification effort and it is a waste of resources to throw them away after each verification run. In particular, precisions are reasonably small and thus easy to store; they are easy to process and have a large impact on resource consumption. We experimentally show the impact of precision reuse on industrial verification problems created from 62 Linux kernel device drivers with 1119 revisions.}, keyword = {CPAchecker,Software Model Checking}, annote = {This is a summary of a <a href="https://www.sosy-lab.org/research/bib/Year/2013.html#FSE13">full article on this topic</a> that appeared in Proc. ESEC/FSE 2013.}, doinone = {DOI not available}, urlpub = {https://dl.gi.de/handle/20.500.12116/30949}, }Additional Infos
This is a summary of a full article on this topic that appeared in Proc. ESEC/FSE 2013. -
Modular refinement for submachines of ASMs.
In Proc. of Alloy, ASM, B, TLA, VDM, and Z (ABZ),
LNCS,
pages 188-203,
2014.
Springer.
PDF
BibTeX Entry
@inproceedings{ernst:abz2014, author = {Gidon Ernst and Jörg Pfähler and Gerhard Schellhorn and Wolfgang Reif}, title = {{Modular refinement for submachines of ASMs}}, booktitle = {Proc. of Alloy, ASM, B, TLA, VDM, and Z (ABZ)}, volume = {8477}, pages = {188--203}, year = {2014}, series = {LNCS}, publisher = {Springer}, pdf = {https://www.sosy-lab.org/research/pub/2014-ABZ.Modular_Refinement_for_Submachines_of_ASMs.pdf}, } -
Development of a verified Flash file system.
In Proc. of Alloy, ASM, B, TLA, VDM, and Z (ABZ),
LNCS,
pages 9-24,
2014.
Springer.
Invited Paper
PDF
BibTeX Entry
@inproceedings{ernst:abz2014-overview, author = {Gerhard Schellhorn and Gidon Ernst and Jörg Pfähler and Dominik Haneberg and Wolfgang Reif}, title = {{Development of a verified Flash file system}}, booktitle = {Proc. of Alloy, ASM, B, TLA, VDM, and Z (ABZ)}, volume = {8477}, pages = {9--24}, year = {2014}, series = {LNCS}, publisher = {Springer}, pdf = {https://www.sosy-lab.org/research/pub/2014-ABZ.Development_of_a_Verified_Flash_File_System.pdf}, note = {Invited Paper}, } -
CPAchecker with Sequential Combination of Explicit-Value Analyses and Predicate Analyses (Competition Contribution).
In E. Abraham and
K. Havelund, editors,
Proceedings of the 20th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2014, Grenoble, France, April 5-13),
LNCS 8413,
pages 392-394,
2014.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-642-54862-8_27
Keyword(s):
CPAchecker,
Competition on Software Verification (SV-COMP),
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
CPAchecker is a framework for software verification, built on the foundations of configurable program analysis (CPA). For the SV-COMP'14, we file a CPAchecker configuration that runs up to five analyses in sequence. The first two analyses of our approach utilize the explicit-value domain for modeling the state space, while the remaining analyses are based on predicate abstraction. In addition to that, a bit-precise counterexample checker comes into action whenever an analysis finds a counterexample. The combination of conceptually different analyses is key to the success of our verification approach, as the diversity of verification tasks is taken into account.BibTeX Entry
@inproceedings{CPACHECKER-COMP14, author = {Stefan~L{\"{o}}we and Mikhail~U.~Mandrykin and Philipp~Wendler}, title = {{{\sc CPAchecker}} with Sequential Combination of Explicit-Value Analyses and Predicate Analyses (Competition Contribution)}, booktitle = {Proceedings of the 20th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2014, Grenoble, France, April 5-13)}, editor = {E.~Abraham and K. Havelund}, pages = {392-394}, year = {2014}, series = {LNCS~8413}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-642-54861-1}, doi = {10.1007/978-3-642-54862-8_27}, sha256 = {}, url = {https://doi.org/10.1007/978-3-642-54862-8_27}, pdf = {https://www.sosy-lab.org/research/pub/2014-TACAS.CPAchecker_with_Sequential_Combination_of_Explicit-Value_Analyses_and_Predicate_Analyses.pdf}, abstract = {CPAchecker is a framework for software verification, built on the foundations of configurable program analysis (CPA). For the SV-COMP'14, we file a CPAchecker configuration that runs up to five analyses in sequence. The first two analyses of our approach utilize the explicit-value domain for modeling the state space, while the remaining analyses are based on predicate abstraction. In addition to that, a bit-precise counterexample checker comes into action whenever an analysis finds a counterexample. The combination of conceptually different analyses is key to the success of our verification approach, as the diversity of verification tasks is taken into account.}, keyword = {CPAchecker,Competition on Software Verification (SV-COMP),Software Model Checking}, annote = {Won categories ControlFlow, MemorySafety, and Simple, and received one silver and one bronze medal in <span style="white-space: nowrap"><a href="https://sv-comp.sosy-lab.org/2014/">SV-COMP'14</a></span>}, }Additional Infos
Won categories ControlFlow, MemorySafety, and Simple, and received one silver and one bronze medal in SV-COMP'14
2013
-
Domain Types: Abstract-Domain Selection Based on Variable Usage.
In V. Bertacco and
A. Legay, editors,
Proceedings of the 9th Haifa Verification Conference (HVC 2013, Haifa, Israel, November 5-7),
LNCS 8244,
pages 262-278,
2013.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-319-03077-7_18
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
The success of software model checking depends on finding an appropriate abstraction of the program to verify. The choice of the abstract domain and the analysis configuration is currently left to the user, who may not be familiar with the tradeoffs and performance details of the available abstract domains. We introduce the concept of domain types, which classify the program variables into types that are more fine-grained than standard declared types (e.g., 'int' and 'long') to guide the selection of an appropriate abstract domain for a model checker. Our implementation on top of an existing verification framework determines the domain type for each variable in a pre-analysis step, based on the usage of variables in the program, and then assigns each variable to an abstract domain. Based on a series of experiments on a comprehensive set of verification tasks from international verification competitions, we demonstrate that the choice of the abstract domain per variable (we consider one explicit and one symbolic domain) can substantially improve the verification in terms of performance and precision.BibTeX Entry
@inproceedings{HVC13, author = {Sven Apel and Dirk Beyer and Karlheinz Friedberger and Franco Raimondi and Alexander von Rhein}, title = {Domain Types: Abstract-Domain Selection Based on Variable Usage}, booktitle = {Proceedings of the 9th Haifa Verification Conference (HVC 2013, Haifa, Israel, November 5-7)}, editor = {V.~Bertacco and A.~Legay}, pages = {262-278}, year = {2013}, series = {LNCS~8244}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-319-03076-0}, doi = {10.1007/978-3-319-03077-7_18}, url = {https://www.sosy-lab.org/research/domaintypes/}, pdf = {https://www.sosy-lab.org/research/pub/2013-HVC.Domain_Types_Abstract-Domain_Selection_Based_on_Variable_Usage.pdf}, abstract = {The success of software model checking depends on finding an appropriate abstraction of the program to verify. The choice of the abstract domain and the analysis configuration is currently left to the user, who may not be familiar with the tradeoffs and performance details of the available abstract domains. We introduce the concept of domain types, which classify the program variables into types that are more fine-grained than standard declared types (e.g., `int' and `long') to guide the selection of an appropriate abstract domain for a model checker. Our implementation on top of an existing verification framework determines the domain type for each variable in a pre-analysis step, based on the usage of variables in the program, and then assigns each variable to an abstract domain. Based on a series of experiments on a comprehensive set of verification tasks from international verification competitions, we demonstrate that the choice of the abstract domain per variable (we consider one explicit and one symbolic domain) can substantially improve the verification in terms of performance and precision.}, keyword = {CPAchecker,Software Model Checking}, } -
Precision Reuse for Efficient Regression Verification.
In B. Meyer,
L. Baresi, and
M. Mezini, editors,
Proceedings of the 9th Joint Meeting of the European Software Engineering Conference and
the ACM SIGSOFT Symposium on Foundations of Software Engineering (ESEC/FSE 2013, St. Petersburg, Russia, August 18-26),
pages 389-399,
2013.
ACM.
doi:10.1145/2491411.2491429
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
Continuous testing during development is a well-established technique for software-quality assurance. Continuous model checking from revision to revision is not yet established as a standard practice, because the enormous resource consumption makes its application impractical. Model checkers compute a large number of verification facts that are necessary for verifying if a given specification holds. We have identified a category of such intermediate results that are easy to store and efficient to reuse: abstraction precisions. The precision of an abstract domain specifies the level of abstraction that the analysis works on. Precisions are thus a precious result of the verification effort and it is a waste of resources to throw them away after each verification run. In particular, precisions are reasonably small and thus easy to store; they are easy to process and have a large impact on resource consumption. We experimentally show the impact of precision reuse on industrial verification problems created from 62 Linux kernel device drivers with 1119 revisions.BibTeX Entry
@inproceedings{FSE13, author = {Dirk Beyer and Stefan L{\"o}we and Evgeny Novikov and Andreas Stahlbauer and Philipp Wendler}, title = {Precision Reuse for Efficient Regression Verification}, booktitle = {Proceedings of the 9th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on Foundations of Software Engineering (ESEC/FSE 2013, St. Petersburg, Russia, August 18-26)}, editor = {B.~Meyer and L.~Baresi and M.~Mezini}, pages = {389-399}, year = {2013}, publisher = {ACM}, isbn = {}, doi = {10.1145/2491411.2491429}, url = {https://www.sosy-lab.org/research/cpa-reuse/}, pdf = {https://www.sosy-lab.org/research/pub/2013-FSE.Precision_Reuse_for_Efficient_Regression_Verification.pdf}, abstract = {Continuous testing during development is a well-established technique for software-quality assurance. Continuous model checking from revision to revision is not yet established as a standard practice, because the enormous resource consumption makes its application impractical. Model checkers compute a large number of verification facts that are necessary for verifying if a given specification holds. We have identified a category of such intermediate results that are easy to store and efficient to reuse: abstraction precisions. The precision of an abstract domain specifies the level of abstraction that the analysis works on. Precisions are thus a precious result of the verification effort and it is a waste of resources to throw them away after each verification run. In particular, precisions are reasonably small and thus easy to store; they are easy to process and have a large impact on resource consumption. We experimentally show the impact of precision reuse on industrial verification problems created from 62 Linux kernel device drivers with 1119 revisions.}, keyword = {CPAchecker,Software Model Checking}, } -
Reuse of Verification Results: Conditional Model Checking, Precision Reuse, and Verification Witnesses.
In E. Bartocci and
C. R. Ramakrishnan, editors,
Proceedings of the 2013 International Symposium
on Model Checking of Software (SPIN 2013, Stony Brook, NY, USA, July 8-9),
LNCS 7976,
pages 1-17,
2013.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-642-39176-7_1
Keyword(s):
Software Model Checking,
Witness-Based Validation,
Witness-Based Validation (main)
Publisher's Version
PDF
Supplement
Abstract
Verification is a complex algorithmic task, requiring large amounts of computing resources. One approach to reduce the resource consumption is to reuse information from previous verification runs. This paper gives an overview of three techniques for such information reuse. Conditional model checking outputs a condition that describes the state space that was successfully verified, and accepts as input a condition that instructs the model checker which parts of the system should be verified; thus, later verification runs can use the output condition of previous runs in order to not verify again parts of the state space that were already verified. Precision reuse is a technique to use intermediate results from previous verification runs to accelerate further verification runs of the system; information about the level of abstraction in the abstract model can be reused in later verification runs. Typical model checkers provide an error path through the system as witness for having proved that a system violates a property, and a few model checkers provide some kind of proof certificate as a witness for the correctness of the system; these witnesses should be such that the verifiers can read them and, with less computational effort, (re-) verify that the witness is valid.BibTeX Entry
@inproceedings{SPIN13, author = {Dirk Beyer and Philipp Wendler}, title = {Reuse of Verification Results: Conditional Model Checking, Precision Reuse, and Verification Witnesses}, booktitle = {Proceedings of the 2013 International Symposium on Model Checking of Software (SPIN~2013, Stony Brook, NY, USA, July 8-9)}, editor = {E.~Bartocci and C.~R.~Ramakrishnan}, pages = {1-17}, year = {2013}, series = {LNCS~7976}, publisher = {Springer-Verlag, Heidelberg}, isbn = {}, doi = {10.1007/978-3-642-39176-7_1}, sha256 = {}, url = {http://www.sosy-lab.org/~dbeyer/cpa-reuse-gen/}, pdf = {https://www.sosy-lab.org/research/pub/2013-SPIN.Reuse_of_Verification_Results.pdf}, abstract = {Verification is a complex algorithmic task, requiring large amounts of computing resources. One approach to reduce the resource consumption is to reuse information from previous verification runs. This paper gives an overview of three techniques for such information reuse. Conditional model checking outputs a condition that describes the state space that was successfully verified, and accepts as input a condition that instructs the model checker which parts of the system should be verified; thus, later verification runs can use the output condition of previous runs in order to not verify again parts of the state space that were already verified. Precision reuse is a technique to use intermediate results from previous verification runs to accelerate further verification runs of the system; information about the level of abstraction in the abstract model can be reused in later verification runs. Typical model checkers provide an error path through the system as witness for having proved that a system violates a property, and a few model checkers provide some kind of proof certificate as a witness for the correctness of the system; these witnesses should be such that the verifiers can read them and ---with less computational effort--- (re-) verify that the witness is valid.}, keyword = {Software Model Checking,Witness-Based Validation,Witness-Based Validation (main)}, } -
Strategies for Product-Line Verification: Case Studies and Experiments.
In D. Notkin,
B. H. C. Cheng, and
K. Pohl, editors,
Proceedings of the 35th International Conference on
Software Engineering (ICSE 2013, San Francisco, CA, USA, May 18-26),
pages 482-491,
2013.
IEEE.
doi:10.1109/ICSE.2013.6606594
Keyword(s):
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
Product-line technology is increasingly used in mission-critical and safety-critical applications. Hence, researchers are developing verification approaches that follow different strategies to cope with the specific properties of product lines. While the research community is discussing the mutual strengths and weaknesses of the different strategies, mostly at a conceptual level, there is a lack of evidence in terms of case studies, tool implementations, and experiments. We have collected and prepared six product lines as subject systems for experimentation. Furthermore, we have developed a model-checking tool chain for C-based and Java-based product lines, called SPLverifier, which we use to compare sample-based and family-based strategies with regard to verification performance and the ability to find defects. Based on the experimental results and an analytical model, we revisit the discussion of the strengths and weaknesses of product-line-verification strategies.BibTeX Entry
@inproceedings{ICSE13, author = {Sven Apel and Alexander von Rhein and Philipp Wendler and Armin Gr{\"o}{\ss}linger and Dirk Beyer}, title = {Strategies for Product-Line Verification: Case Studies and Experiments}, booktitle = {Proceedings of the 35th International Conference on Software Engineering (ICSE~2013, San Francisco, CA, USA, May 18-26)}, editor = {D.~Notkin and B.~H.~C.~Cheng and K.~Pohl}, pages = {482-491}, year = {2013}, publisher = {IEEE}, isbn = {978-1-4673-3076-3}, doi = {10.1109/ICSE.2013.6606594}, url = {http://fosd.net/FAV}, pdf = {https://www.sosy-lab.org/research/pub/2013-ICSE.Strategies_for_Product-Line_Verification.pdf}, abstract = {Product-line technology is increasingly used in mission-critical and safety-critical applications. Hence, researchers are developing verification approaches that follow different strategies to cope with the specific properties of product lines. While the research community is discussing the mutual strengths and weaknesses of the different strategies---mostly at a conceptual level---there is a lack of evidence in terms of case studies, tool implementations, and experiments. We have collected and prepared six product lines as subject systems for experimentation. Furthermore, we have developed a model-checking tool chain for C-based and Java-based product lines, called SPLverifier, which we use to compare sample-based and family-based strategies with regard to verification performance and the ability to find defects. Based on the experimental results and an analytical model, we revisit the discussion of the strengths and weaknesses of product-line--verification strategies.}, keyword = {Software Model Checking}, } -
Second Competition on Software Verification (Summary of SV-COMP 2013).
In N. Piterman and
S. Smolka, editors,
Proceedings of the 19th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2013, Rome, Italy, March 16-24),
LNCS 7795,
pages 594-609,
2013.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-642-36742-7_43
Keyword(s):
Competition on Software Verification (SV-COMP),
Competition on Software Verification (SV-COMP Report),
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
This report describes the 2nd International Competition on Software Verification (SV-COMP 2013), which is the second edition of this thorough evaluation of fully automatic verifiers for software programs. The reported results represent the 2012 state-of-the-art in automatic software verification, in terms of effectiveness and efficiency. The benchmark set of verification tasks consists of eleven categories containing a total of 2315 programs, written in C, and exposing features of integers, heap-data structures, bit-vector operations, and concurrency; the properties include reachability and memory safety. The competition is again organized as a satellite event of TACAS.BibTeX Entry
@inproceedings{TACAS13, author = {Dirk Beyer}, title = {Second Competition on Software Verification ({S}ummary of {SV-COMP} 2013)}, booktitle = {Proceedings of the 19th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2013, Rome, Italy, March 16-24)}, editor = {N.~Piterman and S.~Smolka}, pages = {594-609}, year = {2013}, series = {LNCS~7795}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-642-36741-0}, doi = {10.1007/978-3-642-36742-7_43}, sha256 = {a9a0e10c91869f8d855f4c54534d64370dfec39791cd7befacd5c775a99a4ea9}, url = {https://sv-comp.sosy-lab.org/2013/}, pdf = {https://www.sosy-lab.org/research/pub/2013-TACAS.Second_Competition_on_Software_Verification.pdf}, abstract = {This report describes the 2nd International Competition on Software Verification (SV-COMP 2013), which is the second edition of this thorough evaluation of fully automatic verifiers for software programs. The reported results represent the 2012 state-of-the-art in automatic software verification, in terms of effectiveness and efficiency. The benchmark set of verification tasks consists of eleven categories containing a total of 2315 programs, written in C, and exposing features of integers, heap-data structures, bit-vector operations, and concurrency; the properties include reachability and memory safety. The competition is again organized as a satellite event of TACAS.}, keyword = {Competition on Software Verification (SV-COMP),Competition on Software Verification (SV-COMP Report),Software Model Checking}, } -
Information Reuse for Multi-goal Reachability Analyses.
In M. Felleisen and
P. Gardner, editors,
Proceedings of the 22nd European Symposium on Programming
(ESOP 2013, Rome, Italy, March 19-22),
LNCS 7792,
pages 472-491,
2013.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-642-37036-6_26
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Presentation
Abstract
It is known that model checkers can generate test inputs as witnesses for reachability specifications (or, equivalently, as counterexamples for safety properties). While this use of model checkers for testing yields a theoretically sound test-generation procedure, it scales poorly for computing complex test suites for large sets of test goals, because each test goal requires an expensive run of the model checker. We represent test goals as automata and exploit relations between automata in order to reuse existing reachability information for the analysis of subsequent test goals. Exploiting the sharing of sub-automata in a series of reachability queries, we achieve considerable performance improvements over the standard approach. We show the practical use of our multi-goal reachability analysis in a predicate-abstraction-based test-input generator for the test-specification language FQL.BibTeX Entry
@inproceedings{ESOP13, author = {Dirk Beyer and Andreas Holzer and Michael Tautschnig and Helmut Veith}, title = {Information Reuse for Multi-goal Reachability Analyses}, booktitle = {Proceedings of the 22nd European Symposium on Programming (ESOP~2013, Rome, Italy, March 19-22)}, editor = {M.~Felleisen and P.~Gardner}, pages = {472-491}, year = {2013}, series = {LNCS~7792}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-642-37035-9}, doi = {10.1007/978-3-642-37036-6_26}, sha256 = {9112a1c10c81e5aa3948a3a66bfd5ae1ab0d3c08186a67329fcf3efbb7f4d406}, url = {}, presentation = {https://www.sosy-lab.org/research/prs/2013-03-21_ESOP13_InformationReuse_Andreas.pdf}, abstract = {It is known that model checkers can generate test inputs as witnesses for reachability specifications (or, equivalently, as counterexamples for safety properties). While this use of model checkers for testing yields a theoretically sound test-generation procedure, it scales poorly for computing complex test suites for large sets of test goals, because each test goal requires an expensive run of the model checker. We represent test goals as automata and exploit relations between automata in order to reuse existing reachability information for the analysis of subsequent test goals. Exploiting the sharing of sub-automata in a series of reachability queries, we achieve considerable performance improvements over the standard approach. We show the practical use of our multi-goal reachability analysis in a predicate-abstraction-based test-input generator for the test-specification language FQL.}, keyword = {CPAchecker,Software Model Checking}, } -
Explicit-State Software Model Checking Based on CEGAR and Interpolation.
In V. Cortellessa and
D. Varro, editors,
Proceedings of the 16th International Conference on
Fundamental Approaches to Software Engineering (FASE 2013, Rome, Italy, March 20-22),
LNCS 7793,
pages 146-162,
2013.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-642-37057-1_11
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Abstract
Abstraction, counterexample-guided refinement, and interpolation are techniques that are essential to the success of predicate-based program analysis. These techniques have not yet been applied together to explicit-value program analysis. We present an approach that integrates abstraction and interpolation-based refinement into an explicit-value analysis, i.e., a program analysis that tracks explicit values for a specified set of variables (the precision). The algorithm uses an abstract reachability graph as central data structure and a path-sensitive dynamic approach for precision adjustment. We evaluate our algorithm on the benchmark set of the Competition on Software Verification 2012 (SV-COMP'12) to show that our new approach is highly competitive. We also show that combining our new approach with an auxiliary predicate analysis scores significantly higher than the SV-COMP'12 winner.BibTeX Entry
@inproceedings{FASE13, author = {Dirk Beyer and Stefan L{\"o}we}, title = {Explicit-State Software Model Checking Based on {CEGAR} and Interpolation}, booktitle = {Proceedings of the 16th International Conference on Fundamental Approaches to Software Engineering (FASE~2013, Rome, Italy, March 20-22)}, editor = {V.~Cortellessa and D.~Varro}, pages = {146-162}, year = {2013}, series = {LNCS~7793}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-642-37056-4}, doi = {10.1007/978-3-642-37057-1_11}, sha256 = {3e2ba52da100fd835736b3673fa47a3429522dd3f4c0834b13f29d6a68a8bd45}, url = {}, abstract = {Abstraction, counterexample-guided refinement, and interpolation are techniques that are essential to the success of predicate-based program analysis. These techniques have not yet been applied together to explicit-value program analysis. We present an approach that integrates abstraction and interpolation-based refinement into an explicit-value analysis, i.e., a program analysis that tracks explicit values for a specified set of variables (the precision). The algorithm uses an abstract reachability graph as central data structure and a path-sensitive dynamic approach for precision adjustment. We evaluate our algorithm on the benchmark set of the Competition on Software Verification 2012 (SV-COMP'12) to show that our new approach is highly competitive. We also show that combining our new approach with an auxiliary predicate analysis scores significantly higher than the SV-COMP'12 winner.}, keyword = {CPAchecker,Software Model Checking}, } -
BDD-Based Software Model Checking with CPAchecker.
In A. Kucera et al., editors,
Proceedings of the Annual Doctoral Workshop on
Mathematical and Engineering Methods in Computer Science
(MEMICS 2012, Znojmo, Czech Republic, October 26-28),
LNCS 7721,
pages 1-11,
2013.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-642-36046-6_1
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
BibTeX Entry
@inproceedings{MEMICS12, author = {Dirk Beyer and Andreas Stahlbauer}, title = {{BDD}-Based Software Model Checking with {{\sc CPAchecker}}}, booktitle = {Proceedings of the Annual Doctoral Workshop on Mathematical and Engineering Methods in Computer Science (MEMICS~2012, Znojmo, Czech Republic, October 26-28)}, editor = {A.~Kucera~et~al.}, pages = {1-11}, year = {2013}, series = {LNCS~7721}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-642-36044-2}, doi = {10.1007/978-3-642-36046-6_1}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2013-MEMICS.BDD-Based_Software_Model_Checking_with_CPAchecker.pdf}, keyword = {CPAchecker,Software Model Checking}, } -
Formal specification of an erase block management layer for Flash memory.
In Haifa Verification Conference (HVC),
LNCS,
pages 214-229,
2013.
Springer.
PDF
BibTeX Entry
@inproceedings{ernst:hvc2013, author = {Jörg Pfähler and Gidon Ernst and Gerhard Schellhorn and Dominik Haneberg and Wolfgang Reif}, title = {{Formal specification of an erase block management layer for Flash memory}}, booktitle = {Haifa Verification Conference (HVC)}, volume = {8244}, pages = {214--229}, year = {2013}, series = {LNCS}, publisher = {Springer}, pdf = {https://www.sosy-lab.org/research/pub/2013-HVC_Formal_Specification_of_an_Erase_Block_management_Layer_for_Flash_Memory.pdf}, } -
Verification of a Virtual Filesystem Switch.
In Proc. of Verified Software: Theories, Tools, Experiments (VSTTE),
LNCS,
pages 242-261,
2013.
Springer.
PDF
BibTeX Entry
@inproceedings{ernst:vstte2013, author = {Gidon Ernst and Gerhard Schellhorn and Dominik Haneberg and Jörg Pfähler and Wolfgang Reif}, title = {{Verification of a Virtual Filesystem Switch}}, booktitle = {Proc. of Verified Software: Theories, Tools, Experiments (VSTTE)}, volume = {8164}, pages = {242--261}, year = {2013}, series = {LNCS}, publisher = {Springer}, pdf = {https://www.sosy-lab.org/research/pub/2013-VSTTE.Verification_of_a_Virtual_Filesystem_Switch.pdf}, } -
CPAchecker with Sequential Combination of
Explicit-State Analysis and Predicate Analysis (Competition Contribution).
In N. Piterman and
S. Smolka, editors,
Proceedings of the 19th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2013, Rome, Italy, March 16-24),
LNCS 7795,
pages 613-615,
2013.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-642-36742-7_45
Keyword(s):
CPAchecker,
Competition on Software Verification (SV-COMP),
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
CPAchecker is an open-source framework for software verification, based on the concepts of configurable program analysis (CPA). We submit a CPAchecker configuration that uses a sequential combination of two approaches. It starts with an explicit-state analysis, and, if no answer can be found within some time, switches to a predicate analysis with adjustable-block encoding and CEGAR.BibTeX Entry
@inproceedings{CPACHECKERSEQCOM-COMP13, author = {Philipp Wendler}, title = {{{\sc CPAchecker}} with Sequential Combination of Explicit-State Analysis and Predicate Analysis (Competition Contribution)}, booktitle = {Proceedings of the 19th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2013, Rome, Italy, March 16-24)}, editor = {N.~Piterman and S.~Smolka}, pages = {613-615}, year = {2013}, series = {LNCS~7795}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-642-36741-0}, doi = {10.1007/978-3-642-36742-7_45}, sha256 = {}, url = {https://doi.org/10.1007/978-3-642-36742-7_45}, pdf = {https://www.sosy-lab.org/research/pub/2013-TACAS.CPAchecker_with_Sequential_Combination_of_Explicit-State_Analysis_and_Predicate_Analysis.pdf}, abstract = {CPAchecker is an open-source framework for software verification, based on the concepts of configurable program analysis (CPA). We submit a CPAchecker configuration that uses a sequential combination of two approaches. It starts with an explicit-state analysis, and, if no answer can be found within some time, switches to a predicate analysis with adjustable-block encoding and CEGAR.}, keyword = {CPAchecker,Competition on Software Verification (SV-COMP),Software Model Checking}, annote = {Won category Overall and received five bronze medals in <span style="white-space: nowrap"><a href="https://sv-comp.sosy-lab.org/2013/">SV-COMP'13</a></span>}, }Additional Infos
Won category Overall and received five bronze medals in SV-COMP'13 -
CPAchecker with Explicit-Value Analysis
Based on CEGAR and Interpolation (Competition Contribution).
In N. Piterman and
S. Smolka, editors,
Proceedings of the 19th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2013, Rome, Italy, March 16-24),
LNCS 7795,
pages 610-612,
2013.
Springer.
doi:10.1007/978-3-642-36742-7_44
Keyword(s):
CPAchecker,
Competition on Software Verification (SV-COMP),
Software Model Checking
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{CPACHECKEREXPLICIT-COMP13, author = {Stefan L{\"{o}}we}, title = {{{\sc CPAchecker}} with Explicit-Value Analysis Based on {CEGAR} and Interpolation (Competition Contribution)}, booktitle = {Proceedings of the 19th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2013, Rome, Italy, March 16-24)}, editor = {N.~Piterman and S.~Smolka}, pages = {610-612}, year = {2013}, series = {LNCS~7795}, publisher = {Springer}, isbn = {978-3-642-36741-0}, doi = {10.1007/978-3-642-36742-7_44}, sha256 = {}, url = {https://doi.org/10.1007/978-3-642-36742-7_44}, keyword = {CPAchecker,Competition on Software Verification (SV-COMP),Software Model Checking}, annote = {Received four silver medals in <span style="white-space: nowrap"><a href="https://sv-comp.sosy-lab.org/2013/">SV-COMP'13</a></span>}, }Additional Infos
Received four silver medals in SV-COMP'13
2012
-
Conditional Model Checking: A Technique to Pass Information between Verifiers.
In Tevfik Bultan and
Martin Robillard, editors,
Proceedings of the 20th ACM SIGSOFT International Symposium on the
Foundations of Software Engineering (FSE 2012, Cary, NC, November 10-17),
2012.
ACM.
doi:10.1145/2393596.2393664
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
Software model checking, as an undecidable problem, has three possible outcomes: (1) the program satisfies the specification, (2) the program does not satisfy the specification, and (3) the model checker fails. The third outcome usually manifests itself in a space-out, time-out, or one component of the verification tool giving up; in all of these failing cases, significant computation is performed by the verification tool before the failure, but no result is reported. We propose to reformulate the model-checking problem as follows, in order to have the verification tool report a summary of the performed work even in case of failure: given a program and a specification, the model checker returns a condition P (usually a state predicate) such that the program satisfies the specification under the condition P, that is, as long as the program does not leave the states in which P is satisfied. In our experiments, we investigated as one major application of conditional model checking the sequential combination of model checkers with information passing. We give the condition that one model checker produces, as input to a second conditional model checker, such that the verification problem for the second is restricted to the part of the state space that is not covered by the condition, i.e., the second model checker works on the problems that the first model checker could not solve. Our experiments demonstrate that repeated application of conditional model checkers, passing information from one model checker to the next, can significantly improve the verification results and performance, i.e., we can now verify programs that we could not verify before.BibTeX Entry
@inproceedings{FSE12, author = {Dirk Beyer and Thomas A. Henzinger and M. Erkan Keremoglu and Philipp Wendler}, title = {Conditional Model Checking: {A} Technique to Pass Information between Verifiers}, booktitle = {Proceedings of the 20th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE~2012, Cary, NC, November 10-17)}, editor = {Tevfik Bultan and Martin Robillard}, pages = {}, year = {2012}, publisher = {ACM}, isbn = {978-1-4503-1614-9}, doi = {10.1145/2393596.2393664}, url = {https://www.sosy-lab.org/research/cpa-cmc/}, pdf = {https://www.sosy-lab.org/research/pub/2012-FSE.Conditional_Model_Checking.pdf}, abstract = {Software model checking, as an undecidable problem, has three possible outcomes: (1) the program satisfies the specification, (2) the program does not satisfy the specification, and (3) the model checker fails. The third outcome usually manifests itself in a space-out, time-out, or one component of the verification tool giving up; in all of these failing cases, significant computation is performed by the verification tool before the failure, but no result is reported. We propose to reformulate the model-checking problem as follows, in order to have the verification tool report a summary of the performed work even in case of failure: given a program and a specification, the model checker returns a condition P ---usually a state predicate--- such that the program satisfies the specification under the condition P ---that is, as long as the program does not leave the states in which P is satisfied. In our experiments, we investigated as one major application of conditional model checking the sequential combination of model checkers with information passing. We give the condition that one model checker produces, as input to a second conditional model checker, such that the verification problem for the second is restricted to the part of the state space that is not covered by the condition, i.e., the second model checker works on the problems that the first model checker could not solve. Our experiments demonstrate that repeated application of conditional model checkers, passing information from one model checker to the next, can significantly improve the verification results and performance, i.e., we can now verify programs that we could not verify before.}, keyword = {CPAchecker,Software Model Checking}, } -
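As a rough illustration of the sequential combination with information passing described in the abstract above, a minimal sketch follows (the verifier interface is assumed for illustration, not taken from the paper): the first verifier either decides the property or returns a condition describing the covered states, and the second verifier only has to handle the remaining part of the state space.

# Illustrative sketch of conditional model checking with information passing;
# the verifier interface below is an assumption made for this example.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Result:
    verdict: Optional[bool]  # True/False if decided, None if the verifier gave up
    condition: str           # state predicate under which the property was shown

def sequential_combination(verifier1: Callable[[str, str], Result],
                           verifier2: Callable[[str, str, str], Result],
                           program: str, spec: str) -> Result:
    """Run verifier1; if it cannot decide, pass its output condition to
    verifier2, which is then restricted to the states not covered by it."""
    first = verifier1(program, spec)
    if first.verdict is not None:
        return first
    return verifier2(program, spec, first.condition)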
Algorithms for Software Model Checking: Predicate Abstraction vs. IMPACT.
In Gianpiero Cabodi and
Satnam Singh, editors,
Proceedings of the 12th International Conference on
Formal Methods in Computer-Aided Design
(FMCAD 2012, Cambridge, UK, October 22-25),
pages 106-113,
2012.
FMCAD.
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
CEGAR, SMT solving, and Craig interpolation are successful approaches for software model checking. We compare two of the most important algorithms that are based on these techniques: lazy predicate abstraction (as in BLAST) and lazy abstraction with interpolants (as in IMPACT). We unify the algorithms formally (by expressing both in the CPA framework) as well as in practice (by implementing them in the same tool). This allows us to flexibly experiment with new configurations and gain new insights, both about their most important differences and commonalities, as well as about their performance characteristics. We show that the essential contribution of the IMPACT algorithm is the reduction of the number of refinements, and compare this to another approach for reducing refinement effort: adjustable-block encoding (ABE).BibTeX Entry
@inproceedings{FMCAD12, author = {Dirk Beyer and Philipp Wendler}, title = {Algorithms for Software Model Checking: Predicate Abstraction vs. {IMPACT}}, booktitle = {Proceedings of the 12th International Conference on Formal Methods in Computer-Aided Design (FMCAD~2012, Cambrige, UK, October 22-25)}, editor = {Gianpiero Cabodi and Satnam Singh}, pages = {106-113}, year = {2012}, publisher = {FMCAD}, isbn = {978-1-4673-4831-7}, url = {https://www.sosy-lab.org/research/cpa-uni/}, pdf = {https://www.sosy-lab.org/research/pub/2012-FMCAD.Algorithms_for_Software_Model_Checking.pdf}, abstract = {CEGAR, SMT solving, and Craig interpolation are successful approaches for software model checking. We compare two of the most important algorithms that are based on these techniques: lazy predicate abstraction (as in BLAST) and lazy abstraction with interpolants (as in IMPACT). We unify the algorithms formally (by expressing both in the CPA framework) as well as in practice (by implementing them in the same tool). This allows us to flexibly experiment with new configurations and gain new insights, both about their most important differences and commonalities, as well as about their performance characteristics. We show that the essential contribution of the IMPACT algorithm is the reduction of the number of refinements, and compare this to another approach for reducing refinement effort: adjustable-block encoding (ABE).}, keyword = {CPAchecker,Software Model Checking}, doinone = {DOI not available}, urlpub = {https://ieeexplore.ieee.org/document/6462562/}, } -
Competition on Software Verification (SV-COMP).
In C. Flanagan and
B. König, editors,
Proceedings of the 18th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2012, Tallinn, Estonia, March 27-30),
LNCS 7214,
pages 504-524,
2012.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-642-28756-5_38
Keyword(s):
Competition on Software Verification (SV-COMP),
Competition on Software Verification (SV-COMP Report),
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
This report describes the definitions, rules, setup, procedure, and results of the 1st International Competition on Software Verification. The verification community has performed competitions in various areas in the past, and SV-COMP'12 is the first competition of verification tools that take software programs as input and run a fully automatic verification of a given safety property. This year's competition is organized as a satellite event of the International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS).BibTeX Entry
@inproceedings{TACAS12, author = {Dirk Beyer}, title = {Competition on Software Verification ({SV-COMP})}, booktitle = {Proceedings of the 18th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2012, Tallinn, Estonia, March 27-30)}, editor = {C.~Flanagan and B.~K{\"o}nig}, pages = {504--524}, year = {2012}, series = {LNCS~7214}, publisher = {Springer-Verlag, Heidelberg}, isbn = {}, doi = {10.1007/978-3-642-28756-5_38}, sha256 = {77183c925bfa38fdd3cae2f65ed8d94aceb39a0805bad96adec5e4e70048e49b}, url = {https://sv-comp.sosy-lab.org/2012/}, abstract = {This report describes the definitions, rules, setup, procedure, and results of the 1st International Competition on Software Verification. The verification community has performed competitions in various areas in the past, and SV-COMP'12 is the first competition of verification tools that take software programs as input and run a fully automatic verification of a given safety property. This year's competition is organized as a satellite event of the International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS).}, keyword = {Competition on Software Verification (SV-COMP),Competition on Software Verification (SV-COMP Report),Software Model Checking}, } -
Linux Driver Verification.
In T. Margaria and
B. Steffen, editors,
Proceedings of the 5th International Symposium on
Leveraging Applications of Formal Methods, Verification, and Validation (ISoLA 2012, Part II, Heraklion, Crete, October 15-18),
LNCS 7610,
pages 1-6,
2012.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-642-34032-1_1
Keyword(s):
Software Model Checking
Publisher's Version
PDF
BibTeX Entry
@inproceedings{LDV12, author = {Dirk Beyer and Alexander K. Petrenko}, title = {{Linux} Driver Verification}, booktitle = {Proceedings of the 5th International Symposium on Leveraging Applications of Formal Methods, Verification, and Validation (ISoLA~2012, Part II, Heraklion, Crete, October 15-18)}, editor = {T.~Margaria and B.~Steffen}, pages = {1-6}, year = {2012}, series = {LNCS~7610}, publisher = {Springer-Verlag, Heidelberg}, isbn = {}, doi = {10.1007/978-3-642-34032-1_1}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2012-ISOLA.Linux_Driver_Verification.pdf}, keyword = {Software Model Checking}, } -
The RERS Grey-Box Challenge 2012: Analysis of Event-Condition-Action Systems.
In T. Margaria and
B. Steffen, editors,
Proceedings of the 5th International Symposium on
Leveraging Applications of Formal Methods, Verification, and Validation (ISoLA 2012, Part I, Heraklion, Crete, October 15-18),
LNCS 7609,
pages 608-614,
2012.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-642-34026-0_45
Keyword(s):
Software Model Checking
Publisher's Version
PDF
BibTeX Entry
@inproceedings{RERS12, author = {Falk Howar and Malte Isberner and Maik Merten and Bernhard Steffen and Dirk Beyer}, title = {The {RERS} Grey-Box Challenge 2012: Analysis of Event-Condition-Action Systems}, booktitle = {Proceedings of the 5th International Symposium on Leveraging Applications of Formal Methods, Verification, and Validation (ISoLA~2012, Part I, Heraklion, Crete, October 15-18)}, editor = {T.~Margaria and B.~Steffen}, pages = {608-614}, year = {2012}, series = {LNCS~7609}, publisher = {Springer-Verlag, Heidelberg}, isbn = {}, doi = {10.1007/978-3-642-34026-0_45}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2012-ISOLA.The_RERS_Grey-Box_Challenge_2012_Analysis_of_Event-Condition-Action_Systems.pdf}, keyword = {Software Model Checking}, } -
A formal model of a Virtual Filesystem Switch.
In Proc. of Software and Systems Modeling (SSV),
EPTCS,
pages 33-45,
2012.
PDF
BibTeX Entry
@inproceedings{ernst:ssv2012, author = {Gidon Ernst and Gerhard Schellhorn and Dominik Haneberg and Jörg Pfähler and Wolfgang Reif}, title = {{A formal model of a Virtual Filesystem Switch}}, booktitle = {Proc. of Software and Systems Modeling (SSV)}, volume = {102}, pages = {33--45}, year = {2012}, series = {EPTCS}, pdf = {https://www.sosy-lab.org/research/pub/2012-SSV.A_Formal_Model_of_a_Virtual_Filesysem_Switch.pdf}, } -
CPAchecker with Adjustable Predicate Analysis (Competition Contribution).
In C. Flanagan and
B. König, editors,
Proceedings of the 18th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2012, Tallinn, Estonia, March 27-30),
LNCS 7214,
pages 528-530,
2012.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-642-28756-5_40
Keyword(s):
CPAchecker,
Competition on Software Verification (SV-COMP),
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
CPAchecker is a freely available software-verification framework, built on the concepts of configurable program analysis (CPA). CPAchecker integrates most of the state-of-the-art technologies for software model checking, such as counterexample-guided abstraction refinement (CEGAR), lazy predicate abstraction, interpolation-based refinement, and large-block encoding. The CPA for predicate analysis with adjustable-block encoding (ABE) is very promising in many categories, and thus, we submit a CPAchecker configuration that uses this analysis approach to the competition.BibTeX Entry
@inproceedings{CPACHECKERABE-COMP12, author = {Stefan L{\"{o}}we and Philipp Wendler}, title = {{{\sc CPAchecker}} with Adjustable Predicate Analysis (Competition Contribution)}, booktitle = {Proceedings of the 18th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2012, Tallinn, Estonia, March 27-30)}, editor = {C.~Flanagan and B.~K{\"o}nig}, pages = {528--530}, year = {2012}, series = {LNCS~7214}, publisher = {Springer-Verlag, Heidelberg}, doi = {10.1007/978-3-642-28756-5_40}, sha256 = {}, url = {https://doi.org/10.1007/978-3-642-28756-5_40}, pdf = {https://www.sosy-lab.org/research/pub/2012-TACAS.CPAchecker_with_Adjustable_Predicate_Analysis.pdf}, abstract = {CPAchecker is a freely available software-verification framework, built on the concepts of configurable program analysis (CPA). CPAchecker integrates most of the state-of-the-art technologies for software model checking, such as counterexample-guided abstraction refinement (CEGAR), lazy predicate abstraction, interpolation-based refinement, and large-block encoding. The CPA for predicate analysis with adjustable-block encoding (ABE) is very promising in many categories, and thus, we submit a CPAchecker configuration that uses this analysis approach to the competition.}, keyword = {CPAchecker,Competition on Software Verification (SV-COMP),Software Model Checking}, annote = {Won category ControlFlowInteger and received one silver and two bronze medals in <span style="white-space: nowrap"><a href="https://sv-comp.sosy-lab.org/2012/">SV-COMP'12</a></span>}, }Additional Infos
Won category ControlFlowInteger and received one silver and two bronze medals in SV-COMP'12
2011
-
Detection of Feature Interactions using Feature-Aware Verification.
In Proceedings of the 26th International Conference on
Automated Software Engineering (ASE 2011, Lawrence, KS, November 6-10),
pages 372-375,
2011.
IEEE.
doi:10.1109/ASE.2011.6100075
Keyword(s):
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
A software product line is a set of software products that are distinguished in terms of features (i.e., end-user-visible units of behavior). Feature interactions (situations in which the combination of features leads to emergent and possibly critical behavior) are a major source of failures in software product lines. We explore how feature-aware verification can improve the automatic detection of feature interactions in software product lines. Feature-aware verification uses product-line-verification techniques and supports the specification of feature properties along with the features in separate and composable units. It integrates the technique of variability encoding to verify a product line without generating and checking a possibly exponential number of feature combinations. We developed the tool suite SPLverifier for feature-aware verification, which is based on standard model-checking technology. We applied it to an e-mail system that incorporates domain knowledge of AT&T. We found that feature interactions can be detected automatically based on specifications that have only local knowledge.BibTeX Entry
@inproceedings{ASE11, author = {Sven Apel and Hendrik Speidel and Philipp Wendler and Alexander von Rhein and Dirk Beyer}, title = {Detection of Feature Interactions using Feature-Aware Verification}, booktitle = {Proceedings of the 26th International Conference on Automated Software Engineering (ASE~2011, Lawrence, KS, November 6-10)}, pages = {372-375}, year = {2011}, publisher = {IEEE}, isbn = {978-1-4577-1639-3}, doi = {10.1109/ASE.2011.6100075}, url = {http://fosd.net/FAV}, pdf = {https://www.sosy-lab.org/research/pub/2011-ASE.Detection_of_Feature_Interactions_using_Feature-Aware_Verification.pdf}, abstract = {A software product line is a set of software products that are distinguished in terms of features (i.e., end-user--visible units of behavior). Feature interactions ---situations in which the combination of features leads to emergent and possibly critical behavior--- are a major source of failures in software product lines. We explore how feature-aware verification can improve the automatic detection of feature interactions in software product lines. Feature-aware verification uses product-line--verification techniques and supports the specification of feature properties along with the features in separate and composable units. It integrates the technique of variability encoding to verify a product line without generating and checking a possibly exponential number of feature combinations. We developed the tool suite SPLverifier for feature-aware verification, which is based on standard model-checking technology. We applied it to an e-mail system that incorporates domain knowledge of AT&T. We found that feature interactions can be detected automatically based on specifications that have only local knowledge.}, keyword = {Software Model Checking}, } -
CPAchecker: A Tool for Configurable Software Verification.
In G. Gopalakrishnan and
S. Qadeer, editors,
Proceedings of the 23rd International Conference on
Computer Aided Verification (CAV 2011, Snowbird, UT, July 14-20),
LNCS 6806,
pages 184-190,
2011.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-642-22110-1_16
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
Configurable software verification is a recent concept for expressing different program analysis and model checking approaches in one single formalism. This paper presents CPAchecker, a tool and framework that aims at easy integration of new verification components. Every abstract domain, together with the corresponding operations, implements the interface of configurable program analysis (CPA). The main algorithm is configurable to perform a reachability analysis on arbitrary combinations of existing CPAs. In software verification, it takes a considerable amount of effort to convert a verification idea into actual experimental results - we aim at accelerating this process. We hope that researchers find it convenient and productive to implement new verification ideas and algorithms using this flexible and easy-to-extend platform, and that it advances the field by making it easier to perform practical experiments. The tool is implemented in Java and runs as command-line tool or as Eclipse plug-in. CPAchecker implements CPAs for several abstract domains. We evaluate the efficiency of the current version of our tool on software-verification benchmarks from the literature, and compare it with other state-of-the-art model checkers. CPAchecker is an open-source toolkit and publicly available.BibTeX Entry
@inproceedings{CAV11, author = {Dirk Beyer and M. Erkan Keremoglu}, title = {{{\sc CPAchecker}}: A Tool for Configurable Software Verification}, booktitle = {Proceedings of the 23rd International Conference on Computer Aided Verification (CAV~2011, Snowbird, UT, July 14-20)}, editor = {G.~Gopalakrishnan and S.~Qadeer}, pages = {184-190}, year = {2011}, series = {LNCS~6806}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-642-22109-5}, doi = {10.1007/978-3-642-22110-1_16}, sha256 = {0b9016de32b714f799da2cf19d3bf8f96cc33069db70beb2e22bbca07c58e2ee}, url = {https://cpachecker.sosy-lab.org}, abstract = {Configurable software verification is a recent concept for expressing different program analysis and model checking approaches in one single formalism. This paper presents CPAchecker, a tool and framework that aims at easy integration of new verification components. Every abstract domain, together with the corresponding operations, implements the interface of configurable program analysis (CPA). The main algorithm is configurable to perform a reachability analysis on arbitrary combinations of existing CPAs. In software verification, it takes a considerable amount of effort to convert a verification idea into actual experimental results --- we aim at accelerating this process. We hope that researchers find it convenient and productive to implement new verification ideas and algorithms using this flexible and easy-to-extend platform, and that it advances the field by making it easier to perform practical experiments. The tool is implemented in Java and runs as command-line tool or as Eclipse plug-in. CPAchecker implements CPAs for several abstract domains. We evaluate the efficiency of the current version of our tool on software-verification benchmarks from the literature, and compare it with other state-of-the-art model checkers. CPAchecker is an open-source toolkit and publicly available.}, keyword = {CPAchecker,Software Model Checking}, } -
Feature Cohesion in Software Product Lines: An Exploratory Study.
In Proceedings of the 33rd International Conference on
Software Engineering
(ICSE 2011, Honolulu, HI, May 21-28),
pages 421-430,
2011.
ACM Press, New York (NY).
doi:10.1145/1985793.1985851
Keyword(s):
Structural Analysis and Comprehension
Publisher's Version
PDF
Abstract
Software product lines gain momentum in research and industry. Many product-line approaches use features as a central abstraction mechanism. Feature-oriented software development aims at encapsulating features in cohesive units to support program comprehension, variability, and reuse. Surprisingly, not much is known about the characteristics of cohesion in feature-oriented product lines, although proper cohesion is of special interest in product-line engineering due to its focus on variability and reuse. To fill this gap, we conduct an exploratory study on forty software product lines of different sizes and domains. A distinguishing property of our approach is that we use both classic software measures and novel measures that are based on distances in clustering layouts, which can be used also for visual exploration of product-line architectures. This way, we can draw a holistic picture of feature cohesion. In our exploratory study, we found several interesting correlations (e.g., between development process and feature cohesion) and we discuss insights and perspectives of investigating feature cohesion (e.g., regarding feature interfaces and programming style).BibTeX Entry
@inproceedings{ICSE11, author = {Sven Apel and Dirk Beyer}, title = {Feature Cohesion in Software Product Lines: An Exploratory Study}, booktitle = {Proceedings of the 33rd International Conference on Software Engineering (ICSE~2011, Honolulu, HI, May 21-28)}, pages = {421-430}, year = {2011}, publisher = {ACM Press, New York (NY)}, isbn = {978-1-4503-0445-0}, doi = {10.1145/1985793.1985851}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2011-ICSE.Feature_Cohesion_in_Software_Product_Lines_An_Exploratory_Study.pdf}, abstract = {Software product lines gain momentum in research and industry. Many product-line approaches use features as a central abstraction mechanism. Feature-oriented software development aims at encapsulating features in cohesive units to support program comprehension, variability, and reuse. Surprisingly, not much is known about the characteristics of cohesion in feature-oriented product lines, although proper cohesion is of special interest in product-line engineering due to its focus on variability and reuse. To fill this gap, we conduct an exploratory study on forty software product lines of different sizes and domains. A distinguishing property of our approach is that we use both classic software measures and novel measures that are based on distances in clustering layouts, which can be used also for visual exploration of product-line architectures. This way, we can draw a holistic picture of feature cohesion. In our exploratory study, we found several interesting correlations (e.g., between development process and feature cohesion) and we discuss insights and perspectives of investigating feature cohesion (e.g., regarding feature interfaces and programming style).}, keyword = {Structural Analysis and Comprehension}, } -
Verification of B+ trees: an experiment combining shape analysis and interactive theorem proving.
In Proc. of Software Engineering and Formal Methods (SEFM),
LNCS,
pages 188-203,
2011.
Springer.
PDF
BibTeX Entry
@inproceedings{ernst:sefm2011, author = {Gidon Ernst and Gerhard Schellhorn and Wolfgang Reif}, title = {{Verification of B+ trees: an experiment combining shape analysis and interactive theorem proving}}, booktitle = {Proc. of Software Engineering and Formal Methods (SEFM)}, volume = {7041}, pages = {188--203}, year = {2011}, series = {LNCS}, publisher = {Springer}, pdf = {https://www.sosy-lab.org/research/pub/2011-SEFM.Verification_of_B+_trees.pdf}, } -
The COST IC0701 verification competition 2011.
In Proc. of Formal Verification of Object-Oriented Software (FoVeOOS),
LNCS,
pages 3-21,
2011.
Springer.
BibTeX Entry
@inproceedings{ernst:foveoos2011, author = {Thorsten Bormer and Marc Brockschmidt and Dino Distefano and Gidon Ernst and Jean{-}Christophe Filli{\^{a}}tre and Radu Grigore and Marieke Huisman and Vladimir Klebanov and Claude March{\'{e}} and Rosemary Monahan and Wojciech Mostowski and Nadia Polikarpova and Christoph Scheben and Gerhard Schellhorn and Bogdan Tofan and Julian Tschannen and Mattias Ulbrich}, title = {{The COST IC0701 verification competition 2011}}, booktitle = {Proc. of Formal Verification of Object-Oriented Software (FoVeOOS)}, volume = {7421}, pages = {3--21}, year = {2011}, series = {LNCS}, publisher = {Springer}, } -
Simulating a Flash File System with CoreASM and Eclipse.
In Proc. of Dependable Software for Critical Infrastructures (DSCI),
GI Lecture Notes in Informatics,
2011.
Gesellschaft für Informatik.
BibTeX Entry
@inproceedings{ernst:gi2011, author = {Maximilian Junker and Dominik Haneberg and Gerhard Schellhorn and Wolfgang Reif and Gidon Ernst}, title = {{Simulating a Flash File System with CoreASM and Eclipse}}, booktitle = {Proc. of Dependable Software for Critical Infrastructures (DSCI)}, volume = {192}, year = {2011}, series = {GI Lecture Notes in Informatics}, publisher = {Gesellschaft für Informatik}, } -
Interleaved programs and rely-guarantee reasoning with ITL.
In Proc. of Temporal Representation and Reasoning (TIME),
pages 99-106,
2011.
IEEE.
PDF
BibTeX Entry
@inproceedings{ernst:time2011, author = {Gerhard Schellhorn and Bogdan Tofan and Gidon Ernst and Wolfgang Reif}, title = {{Interleaved programs and rely-guarantee reasoning with ITL}}, booktitle = {Proc. of Temporal Representation and Reasoning (TIME)}, pages = {99--106}, year = {2011}, publisher = {IEEE}, pdf = {https://www.sosy-lab.org/research/pub/2011-TIME.Interleaved_Program_Rely-Guarantee-Reasoning-with-ITL.pdf}, }
2010
-
Predicate Abstraction with Adjustable-Block Encoding.
In Proceedings of the 10th International Conference on
Formal Methods in Computer-Aided Design
(FMCAD 2010, Lugano, October 20-23),
pages 189-197,
2010.
FMCAD.
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
Several successful software model checkers are based on a technique called single-block encoding (SBE), which computes costly predicate abstractions after every single program operation. Large-block encoding (LBE) computes abstractions only after a large number of operations, and it was shown that this significantly improves the verification performance. In this work, we present adjustable-block encoding (ABE), a unifying framework that allows to express both previous approaches. In addition, it provides the flexibility to specify any block size between SBE and LBE, and also beyond LBE, through the adjustment of one single parameter. Such a unification of different concepts makes it easier to understand the fundamental properties of the analysis, and makes the differences of the variants more explicit. We evaluate different configurations on example C programs, and identify one that is currently the best.BibTeX Entry
@inproceedings{FMCAD10, author = {Dirk Beyer and M.~Erkan Keremoglu and Philipp Wendler}, title = {Predicate Abstraction with Adjustable-Block Encoding}, booktitle = {Proceedings of the 10th International Conference on Formal Methods in Computer-Aided Design (FMCAD~2010, Lugano, October 20-23)}, pages = {189-197}, year = {2010}, publisher = {FMCAD}, isbn = {}, url = {http://www.sosy-lab.org/~dbeyer/cpa-abe/}, pdf = {https://www.sosy-lab.org/research/pub/2010-FMCAD.Predicate_Abstraction_with_Adjustable-Block_Encoding.pdf}, abstract = {Several successful software model checkers are based on a technique called single-block encoding (SBE), which computes costly predicate abstractions after every single program operation. Large-block encoding (LBE) computes abstractions only after a large number of operations, and it was shown that this significantly improves the verification performance. In this work, we present adjustable-block encoding (ABE), a unifying framework that allows to express both previous approaches. In addition, it provides the flexibility to specify any block size between SBE and LBE, and also beyond LBE, through the adjustment of one single parameter. Such a unification of different concepts makes it easier to understand the fundamental properties of the analysis, and makes the differences of the variants more explicit. We evaluate different configurations on example C programs, and identify one that is currently the best.}, keyword = {CPAchecker,Software Model Checking}, annote = {Won the NRW Young Scientist Award 2010 in Dynamic Intelligent Systems}, doinone = {DOI not available}, urlpub = {https://ieeexplore.ieee.org/document/5770949/}, }Additional Infos
Won the NRW Young Scientist Award 2010 in Dynamic Intelligent Systems -
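The single adjustment parameter mentioned in the abstract above can be pictured as a predicate over control-flow locations that decides where a (costly) abstraction is computed: returning true everywhere yields SBE, while returning true only at loop heads and function entries yields an LBE-like setting. A simplified sketch follows (the node type is assumed for illustration, not CPAchecker's implementation).

# Simplified sketch of a block-adjustment operator in the spirit of ABE;
# the CfaNode type is an assumption made for this example.
from dataclasses import dataclass

@dataclass
class CfaNode:
    is_loop_head: bool = False
    is_function_entry: bool = False
    is_error_location: bool = False

def blk_sbe(node: CfaNode) -> bool:
    # Single-block encoding: compute an abstraction after every operation.
    return True

def blk_lbe(node: CfaNode) -> bool:
    # LBE-like setting: abstract only at loop heads, function entries, and
    # error locations; everything in between forms one large block.
    return node.is_loop_head or node.is_function_entry or node.is_error_location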
CheckDep: A Tool for Tracking Software Dependencies.
In Proceedings of the 18th IEEE International Conference on
Program Comprehension (ICPC 2010, Braga, June 30 - July 2),
pages 42-43,
2010.
IEEE Computer Society Press, Los Alamitos (CA).
doi:10.1109/ICPC.2010.51
Keyword(s):
Structural Analysis and Comprehension
Publisher's Version
PDF
Abstract
Many software developers use a syntactical 'diff' in order to perform a quick review before committing changes to the repository. Others are notified of the change by e-mail (containing diffs or change logs), and they review the received information to determine if their work is affected. We lift this simple process from the code level to the more abstract level of dependencies: a software developer can use CheckDep to inspect introduced and removed dependencies before committing new versions, and other developers receive summaries of the changed dependencies via e-mail. We find the tool useful in our software-development activities and now make the tool publicly available.BibTeX Entry
@inproceedings{ICPC10c, author = {Dirk Beyer and Ashgan Fararooy}, title = {{{\sc CheckDep}}: A Tool for Tracking Software Dependencies}, booktitle = {Proceedings of the 18th IEEE International Conference on Program Comprehension (ICPC~2010, Braga, June 30 - July 2)}, pages = {42-43}, year = {2010}, publisher = {IEEE Computer Society Press, Los Alamitos~(CA)}, isbn = {978-0-7695-4113-6}, doi = {10.1109/ICPC.2010.51}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2010-ICPC.CheckDep_A_Tool_for_Tracking_Software_Dependencies.pdf}, abstract = {Many software developers use a syntactical `diff' in order to perform a quick review before committing changes to the repository. Others are notified of the change by e-mail (containing diffs or change logs), and they review the received information to determine if their work is affected. We lift this simple process from the code level to the more abstract level of dependencies: a software developer can use CheckDep to inspect introduced and removed dependencies before committing new versions, and other developers receive summaries of the changed dependencies via e-mail. We find the tool useful in our software-development activities and now make the tool publicly available.}, keyword = {Structural Analysis and Comprehension}, } -
DepDigger: A Tool for Detecting Complex Low-Level Dependencies.
In Proceedings of the 18th IEEE International Conference on
Program Comprehension (ICPC 2010, Braga, June 30 - July 2),
pages 40-41,
2010.
IEEE Computer Society Press, Los Alamitos (CA).
doi:10.1109/ICPC.2010.52
Keyword(s):
Structural Analysis and Comprehension
Publisher's Version
PDF
Abstract
We present a tool that identifies complex data-flow dependencies on code-level, based on the measure dep-degree. Low-level dependencies between program operations are modeled by the use-def graph, which is generated from reaching definitions of variables. The tool annotates program operations with their dep-degree values, such that 'difficult' program operations are easy to locate. We hope that this tool helps detecting and preventing code degeneration, which is often a challenge in today's software projects, due to the high refactoring and restructuring frequency.BibTeX Entry
@inproceedings{ICPC10b, author = {Dirk Beyer and Ashgan Fararooy}, title = {{{\sc DepDigger}}: A Tool for Detecting Complex Low-Level Dependencies}, booktitle = {Proceedings of the 18th IEEE International Conference on Program Comprehension (ICPC~2010, Braga, June 30 - July 2)}, pages = {40-41}, year = {2010}, publisher = {IEEE Computer Society Press, Los Alamitos~(CA)}, isbn = {978-0-7695-4113-6}, doi = {10.1109/ICPC.2010.52}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2010-ICPC.DepDigger_A_Tool_for_Detecting_Complex_Low-Level_Dependencies.pdf}, abstract = {We present a tool that identifies complex data-flow dependencies on code-level, based on the measure dep-degree. Low-level dependencies between program operations are modeled by the use-def graph, which is generated from reaching definitions of variables. The tool annotates program operations with their dep-degree values, such that `difficult' program operations are easy to locate. We hope that this tool helps detecting and preventing code degeneration, which is often a challenge in today's software projects, due to the high refactoring and restructuring frequency.}, keyword = {Structural Analysis and Comprehension}, } -
A Simple and Effective Measure for Complex Low-Level Dependencies.
In Proceedings of the 18th IEEE International Conference on
Program Comprehension (ICPC 2010, Braga, June 30 - July 2),
pages 80-83,
2010.
IEEE Computer Society Press, Los Alamitos (CA).
doi:10.1109/ICPC.2010.49
Keyword(s):
Structural Analysis and Comprehension
Publisher's Version
PDF
Presentation
Abstract
The measure dep-degree is a simple indicator for structural problems and complex dependencies on code-level. We model low-level dependencies between program operations as use-def graph, which is generated from reaching definitions of variables. The more dependencies a program operation has, the more different program states have to be considered and the more difficult it is to understand the operation. Dep-degree is simple to compute and interpret, flexible and scalable in its application, and independently complementing other indicators. Preliminary experiments suggest that the measure dep-degree, which simply counts the number of dependency edges in the use-def graph, is a good indicator for readability and understandability.BibTeX Entry
@inproceedings{ICPC10a, author = {Dirk Beyer and Ashgan Fararooy}, title = {A Simple and Effective Measure for Complex Low-Level Dependencies}, booktitle = {Proceedings of the 18th IEEE International Conference on Program Comprehension (ICPC~2010, Braga, June 30 - July 2)}, pages = {80-83}, year = {2010}, publisher = {IEEE Computer Society Press, Los Alamitos~(CA)}, isbn = {978-0-7695-4113-6}, doi = {10.1109/ICPC.2010.49}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2010-ICPC.A_Simple_and_Effective_Measure_for_Complex_Low-Level_Dependencies.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2010-06_DepDegree.pdf}, abstract = {The measure dep-degree is a simple indicator for structural problems and complex dependencies on code-level. We model low-level dependencies between program operations as use-def graph, which is generated from reaching definitions of variables. The more dependencies a program operation has, the more different program states have to be considered and the more difficult it is to understand the operation. Dep-degree is simple to compute and interpret, flexible and scalable in its application, and independently complementing other indicators. Preliminary experiments suggest that the measure dep-degree, which simply counts the number of dependency edges in the use-def graph, is a good indicator for readability and understandablity.}, keyword = {Structural Analysis and Comprehension}, } -
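As a rough illustration of the dep-degree measure summarized in the abstract above (a sketch over an assumed toy data structure, not the authors' tool): if the use-def graph is given as a mapping from each operation to the definitions it reads, dep-degree is simply the number of such incoming dependency edges.

# Minimal sketch of the dep-degree idea: count the use-def edges entering an
# operation. The dictionary-based graph representation is assumed for illustration.
from typing import Dict, Set

def dep_degree(use_def: Dict[str, Set[str]], operation: str) -> int:
    return len(use_def.get(operation, set()))

# Toy example: op3 reads values defined at op1 and op2, so its dep-degree is 2.
use_def = {
    "op1: x = input()": set(),
    "op2: y = 2 * x": {"op1: x = input()"},
    "op3: z = x + y": {"op1: x = input()", "op2: y = 2 * x"},
}
for op in use_def:
    print(op, "->", dep_degree(use_def, op))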
Shape Refinement through Explicit Heap Analysis.
In D.S. Rosenblum and
G. Taentzer, editors,
Proceedings of the 13th International Conference on
Fundamental Approaches to Software Engineering (FASE 2010, Paphos, Cyprus, March 22-26),
LNCS 6013,
pages 263-277,
2010.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-642-12029-9_19
Keyword(s):
BLAST,
Software Model Checking
Publisher's Version
PDF
Abstract
Shape analysis is a promising technique to prove program properties about recursive data structures. The challenge is to automatically determine the data-structure type, and to supply the shape analysis with the necessary information about the data structure. We present a stepwise approach to the selection of instrumentation predicates for a TVLA-based shape analysis, which takes us a step closer towards the fully automatic verification of data structures. The approach uses two techniques to guide the refinement of shape abstractions: (1) during program exploration, an explicit heap analysis collects sample instances of the heap structures, which are used to identify the data structures that are manipulated by the program; and (2) during abstraction refinement along an infeasible error path, we consider different possible heap abstractions and choose the coarsest one that eliminates the infeasible path. We have implemented this combined approach for automatic shape refinement as an extension of the software model checker BLAST. Example programs from a data-structure library that manipulate doubly-linked lists and trees were successfully verified by our tool.BibTeX Entry
@inproceedings{FASE10, author = {Dirk Beyer and Thomas A.~Henzinger and Gr{\'e}gory Th{\'e}oduloz and Damien Zufferey}, title = {Shape Refinement through Explicit Heap Analysis}, booktitle = {Proceedings of the 13th International Conference on Fundamental Approaches to Software Engineering (FASE~2010, Paphos, Cyprus, March 22-26)}, editor = {D.S.~Rosenblum and G.~Taentzer}, pages = {263-277}, year = {2010}, series = {LNCS~6013}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-642-12028-2}, doi = {10.1007/978-3-642-12029-9_19}, sha256 = {60468f681028a3ac427897405ee5c99894c5baa340896454d378d55267be304f}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2010-FASE.Shape_Refinement_through_Explicit_Heap_Analysis.pdf}, abstract = {Shape analysis is a promising technique to prove program properties about recursive data structures. The challenge is to automatically determine the data-structure type, and to supply the shape analysis with the necessary information about the data structure. We present a stepwise approach to the selection of instrumentation predicates for a TVLA-based shape analysis, which takes us a step closer towards the fully automatic verification of data structures. The approach uses two techniques to guide the refinement of shape abstractions: (1) during program exploration, an explicit heap analysis collects sample instances of the heap structures, which are used to identify the data structures that are manipulated by the program; and (2) during abstraction refinement along an infeasible error path, we consider different possible heap abstractions and choose the coarsest one that eliminates the infeasible path. We have implemented this combined approach for automatic shape refinement as an extension of the software model checker BLAST. Example programs from a data-structure library that manipulate doubly-linked lists and trees were successfully verified by our tool.}, keyword = {BLAST,Software Model Checking}, } -
Software Verification Tools (Track Introduction).
In T. Margaria and
B. Steffen, editors,
Proceedings of the 9th International Symposium on
Leveraging Applications of Formal Methods, Verification, and Validation
(ISoLA 2020, Rhodes, Greece, October 20-30), part 4,
LNCS 12479,
pages 177-181,
2021.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-030-83723-5_12
Publisher's Version
PDF
BibTeX Entry
@inproceedings{ISOLA20d-TrackIntro, author = {Markus Schordan and Dirk Beyer and Irena Bojanova}, title = {Software Verification Tools (Track Introduction)}, booktitle = {Proceedings of the 9th International Symposium on Leveraging Applications of Formal Methods, Verification, and Validation (ISoLA~2020, Rhodes, Greece, October 20--30), part 4}, editor = {T.~Margaria and B.~Steffen}, pages = {177-181}, year = {2021}, series = {LNCS~12479}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-030-03420-7}, doi = {10.1007/978-3-030-83723-5_12}, sha256 = {}, url = {}, keyword = {}, }
Optimized Java binary and virtual machine for tiny motes.
In Proc. of Distributed Computing in Sensor Systems (DCOSS),
LNCS,
pages 15-30,
2010.
Springer.
BibTeX Entry
@inproceedings{ernst:dcoss2010, author = {Faisal Aslam and Luminous Fennell and Christian Schindelhauer and Peter Thiemann and Gidon Ernst and Elmar Haussmann and Stefan Rührup and Zastash A. Uzmi}, title = {{Optimized Java binary and virtual machine for tiny motes}}, booktitle = {Proc. of Distributed Computing in Sensor Systems (DCOSS)}, volume = {6131}, pages = {15--30}, year = {2010}, series = {LNCS}, publisher = {Springer}, }
2009
-
Software Model Checking via Large-Block Encoding.
In Proceedings of the 9th International Conference on
Formal Methods in Computer-Aided Design
(FMCAD 2009, Austin, TX, November 15-18),
pages 25-32,
2009.
IEEE Computer Society Press, Los Alamitos (CA).
doi:10.1109/FMCAD.2009.5351147
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Abstract
Several successful approaches to software verification are based on the construction and analysis of an abstract reachability tree (ART). The ART represents unwindings of the control-flow graph of the program. Traditionally, a transition of the ART represents a single block of the program, and therefore, we call this approach single-block encoding (SBE). SBE may result in a huge number of program paths to be explored, which constitutes a fundamental source of inefficiency. We propose a generalization of the approach, in which transitions of the ART represent larger portions of the program; we call this approach large-block encoding (LBE). LBE may reduce the number of paths to be explored up to exponentially. Within this framework, we also investigate symbolic representations: for representing abstract states, in addition to conjunctions as used in SBE, we investigate the use of arbitrary Boolean formulas; for computing abstract-successor states, in addition to Cartesian predicate abstraction as used in SBE, we investigate the use of Boolean predicate abstraction. The new encoding leverages the efficiency of state-of-the-art SMT solvers, which can symbolically compute abstract large-block successors. Our experiments on benchmark C programs show that the large-block encoding outperforms the single-block encoding.BibTeX Entry
@inproceedings{FMCAD09, author = {Dirk Beyer and Alessandro Cimatti and Alberto Griggio and M.~Erkan Keremoglu and Roberto Sebastiani}, title = {Software Model Checking via Large-Block Encoding}, booktitle = {Proceedings of the 9th International Conference on Formal Methods in Computer-Aided Design (FMCAD~2009, Austin, TX, November 15-18)}, pages = {25-32}, year = {2009}, publisher = {IEEE Computer Society Press, Los Alamitos~(CA)}, isbn = {978-1-4244-4966-8}, doi = {10.1109/FMCAD.2009.5351147}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2009-FMCAD.Software_Model_Checking_via_Large-Block_Encoding.pdf}, abstract = {Several successful approaches to software verification are based on the construction and analysis of an abstract reachability tree (ART). The ART represents unwindings of the control-flow graph of the program. Traditionally, a transition of the ART represents a single block of the program, and therefore, we call this approach single-block encoding (SBE). SBE may result in a huge number of program paths to be explored, which constitutes a fundamental source of inefficiency. We propose a generalization of the approach, in which transitions of the ART represent larger portions of the program; we call this approach large-block encoding (LBE). LBE may reduce the number of paths to be explored up to exponentially. Within this framework, we also investigate symbolic representations: for representing abstract states, in addition to conjunctions as used in SBE, we investigate the use of arbitrary Boolean formulas; for computing abstract-successor states, in addition to Cartesian predicate abstraction as used in SBE, we investigate the use of Boolean predicate abstraction. The new encoding leverages the efficiency of state-of-the-art SMT solvers, which can symbolically compute abstract large-block successors. Our experiments on benchmark C programs show that the large-block encoding outperforms the single-block encoding.}, keyword = {CPAchecker,Software Model Checking}, }
2008
-
Program Analysis with Dynamic Precision Adjustment.
In Proceedings of the 23rd IEEE/ACM International Conference on
Automated Software Engineering
(ASE 2008, L'Aquila, September 15-19),
pages 29-38,
2008.
IEEE Computer Society Press, Los Alamitos (CA).
doi:10.1109/ASE.2008.13
Keyword(s):
BLAST,
Software Model Checking
Publisher's Version
PDF
Presentation
Abstract
We present and evaluate a framework and tool for combining multiple program analyses which allows the dynamic (on-line) adjustment of the precision of each analysis depending on the accumulated results. For example, the explicit tracking of the values of a variable may be switched off in favor of a predicate abstraction when and where the number of different variable values that have been encountered has exceeded a specified threshold. The method is evaluated on verifying the SSH client/server software and shows significant gains compared with predicate abstraction-based model checking.BibTeX Entry
@inproceedings{ASE08, author = {Dirk Beyer and Thomas A.~Henzinger and Gr{\'e}gory Th{\'e}oduloz}, title = {Program Analysis with Dynamic Precision Adjustment}, booktitle = {Proceedings of the 23rd IEEE/ACM International Conference on Automated Software Engineering (ASE~2008, L'Aquila, September 15-19)}, pages = {29-38}, year = {2008}, publisher = {IEEE Computer Society Press, Los Alamitos~(CA)}, isbn = {978-1-4244-2187-9}, doi = {10.1109/ASE.2008.13}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2008-ASE.Program_Analysis_with_Dynamic_Precision_Adjustment.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2009-04-16_UCB_CPAplus.pdf}, abstract = {We present and evaluate a framework and tool for combining multiple program analyses which allows the dynamic (on-line) adjustment of the precision of each analysis depending on the accumulated results. For example, the explicit tracking of the values of a variable may be switched off in favor of a predicate abstraction when and where the number of different variable values that have been encountered has exceeded a specified threshold. The method is evaluated on verifying the SSH client/server software and shows significant gains compared with predicate abstraction-based model checking.}, keyword = {BLAST,Software Model Checking}, annote = {}, } -
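The threshold rule mentioned in the abstract above can be sketched as follows (hypothetical names and an assumed example threshold, not the framework's code): once a variable has taken more distinct explicit values than the threshold allows, it is dropped from the explicit precision and left to the predicate analysis.

# Minimal sketch of the dynamic precision-adjustment rule described above;
# data structures and the threshold value are assumptions for this example.
from collections import defaultdict

THRESHOLD = 10                      # assumed example value
seen_values = defaultdict(set)      # variable -> distinct explicit values seen so far
explicit_precision = {"x", "y"}     # variables currently tracked explicitly

def record_value(variable: str, value: int) -> None:
    if variable not in explicit_precision:
        return
    seen_values[variable].add(value)
    if len(seen_values[variable]) > THRESHOLD:
        # Too many distinct values: stop tracking this variable explicitly and
        # leave it to the predicate abstraction instead.
        explicit_precision.discard(variable)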
CSIsat: Interpolation for LA+EUF.
In A. Gupta and
S. Malik, editors,
Proceedings of the 20th International Conference on
Computer Aided Verification
(CAV 2008, Princeton, NY, July 7-14),
LNCS 5123,
pages 304-308,
2008.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-540-70545-1_29
Keyword(s):
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
We present CSIsat, an interpolating decision procedure for the quantifier-free theory of rational linear arithmetic and equality with uninterpreted function symbols. Our implementation combines the efficiency of linear programming for solving the arithmetic part with the efficiency of a SAT solver to reason about the boolean structure. We evaluate the efficiency of our tool on benchmarks from software verification. Binaries and the source code of CSIsat are publicly available as free software.BibTeX Entry
@inproceedings{CAV08, author = {Dirk Beyer and Damien Zufferey and Rupak Majumdar}, title = {{{\sc CSIsat}}: Interpolation for {LA+EUF}}, booktitle = {Proceedings of the 20th International Conference on Computer Aided Verification (CAV~2008, Princeton, NY, July 7-14)}, editor = {A.~Gupta and S.~Malik}, pages = {304-308}, year = {2008}, series = {LNCS~5123}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-540-70543-7}, doi = {10.1007/978-3-540-70545-1_29}, sha256 = {5fc10da29ed7a1c90589b1c2080f5b347404b56107364e608b25d10f4b48161f}, url = {http://www.sosy-lab.org/~dbeyer/CSIsat/}, pdf = {https://www.sosy-lab.org/research/pub/2008-CAV.CSIsat_Interpolation_for_LA_EUF.pdf}, abstract = {We present CSIsat, an interpolating decision procedure for the quantifier-free theory of rational linear arithmetic and equality with uninterpreted function symbols. Our implementation combines the efficiency of linear programming for solving the arithmetic part with the efficiency of a SAT solver to reason about the boolean structure. We evaluate the efficiency of our tool on benchmarks from software verification. Binaries and the source code of CSIsat are publicly available as free software.}, keyword = {Software Model Checking}, annote = {}, } -
CCVisu: Automatic Visual Software Decomposition.
In Proceedings of the 30th ACM/IEEE International Conference on
Software Engineering (ICSE 2008, Leipzig, May 10-18),
pages 967-968,
2008.
ACM Press, New York (NY).
doi:10.1145/1370175.1370211
Keyword(s):
Structural Analysis and Comprehension
Publisher's Version
PDF
Supplement
Abstract
Understanding the structure of large existing (and evolving) software systems is a major challenge for software engineers. In reverse engineering, we aim to compute, for a given software system, a decomposition of the system into its subsystems. CCVisu is a lightweight tool that takes as input a software graph model and computes a visual representation of the system's structure, i.e., it structures the system into separated groups of artifacts that are strongly related, and places them in a 2- or 3-dimensional space. Besides the decomposition into subsystems, it reveals the relatedness between the subsystems via interpretable distances. The tool reads a software graph from a simple text file in RSF format, e.g., call, inheritance, containment, or co-change graphs. The resulting system structure is currently either directly presented on the screen, or written to an output file in SVG, VRML, or plain text format. The tool is designed as a reusable software component, easy to use, and easy to integrate into other tools; it is based on efficient algorithms and supports several formats for data interchange.BibTeX Entry
@inproceedings{ICSE08, author = {Dirk Beyer}, title = {{{\sc CCVisu}}: Automatic Visual Software Decomposition}, booktitle = {Proceedings of the 30th ACM/IEEE International Conference on Software Engineering (ICSE~2008, Leipzig, May 10-18)}, pages = {967-968}, year = {2008}, publisher = {ACM Press, New York~(NY)}, isbn = {978-1-60558-079-1}, doi = {10.1145/1370175.1370211}, url = {http://www.sosy-lab.org/~dbeyer/CCVisu/}, pdf = {https://www.sosy-lab.org/research/pub/2008-ICSE.CCVisu_Automatic_Visual_Software_Decomposition.pdf}, abstract = {Understanding the structure of large existing (and evolving) software systems is a major challenge for software engineers. In reverse engineering, we aim to compute, for a given software system, a decomposition of the system into its subsystems. CCVisu is a lightweight tool that takes as input a software graph model and computes a visual representation of the system's structure, i.e., it structures the system into separated groups of artifacts that are strongly related, and places them in a 2- or 3-dimensional space. Besides the decomposition into subsystems, it reveals the relatedness between the subsystems via interpretable distances. The tool reads a software graph from a simple text file in RSF format, e.g., call, inheritance, containment, or co-change graphs. The resulting system structure is currently either directly presented on the screen, or written to an output file in SVG, VRML, or plain text format. The tool is designed as a reusable software component, easy to use, and easy to integrate into other tools; it is based on efficient algorithms and supports several formats for data interchange.}, keyword = {Structural Analysis and Comprehension}, annote = {}, } -
Introducing TakaTuka: a Java Virtual Machine for motes.
In Proc. of the Embedded Network Sensor Systems (SENSYS),
pages 399-400,
2008.
ACM.
Poster Abstract
BibTeX Entry
@inproceedings{ernst:sensys2008, author = {Faisal Aslam and Christian Schindelhauer and Gidon Ernst and David Spyra and Jan Meyer and Mohannad Zalloom}, title = {{Introducing TakaTuka: a Java Virtual Machine for motes}}, booktitle = {Proc. of the Embedded Network Sensor Systems (SENSYS)}, pages = {399--400}, year = {2008}, publisher = {ACM}, note = {Poster Abstract}, }
2007
-
An Application of Web-Service Interfaces.
In Proceedings of the 2007 IEEE International Conference on
Web Services
(ICWS 2007, Salt Lake City, UT, July 9-13),
pages 831-838,
2007.
IEEE Computer Society Press, Los Alamitos (CA).
doi:10.1109/ICWS.2007.32
Keyword(s):
Interfaces for Component-Based Design
Publisher's Version
PDF
Abstract
We present a case study to illustrate our formalism for the specification and verification of the method-invocation behavior of web-service applications constructed from asynchronously interacting multi-threaded distributed components. Our model is expressive enough to allow the representation of recursion and dynamic thread creation, and yet permits the algorithmic analysis of the following two questions: (1) Does a given service satisfy a safety specification? (2) Can a given service be substituted by another service in an arbitrary context? Our case study is based on the Amazon.com E-Commerce Services (ECS) platform.BibTeX Entry
@inproceedings{ICWS07, author = {Dirk Beyer and Arindam Chakrabarti and Thomas A.~Henzinger and Sanjit A. Seshia}, title = {An Application of Web-Service Interfaces}, booktitle = {Proceedings of the 2007 IEEE International Conference on Web Services (ICWS~2007, Salt Lake City, UT, July 9-13)}, pages = {831-838}, year = {2007}, publisher = {IEEE Computer Society Press, Los Alamitos~(CA)}, isbn = {0-7695-2924-0}, doi = {10.1109/ICWS.2007.32}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2007-ICWS.An_Application_of_Web-Service_Interfaces.pdf}, abstract = {We present a case study to illustrate our formalism for the specification and verification of the method-invocation behavior of web-service applications constructed from asynchronously interacting multi-threaded distributed components. Our model is expressive enough to allow the representation of recursion and dynamic thread creation, and yet permits the algorithmic analysis of the following two questions: (1) Does a given service satisfy a safety specification? (2) Can a given service be substituted by a another service in an arbitrary context? Our case study is based on the Amazon.com E-Commerce Services (ECS) platform.}, keyword = {Interfaces for Component-Based Design}, annote = {}, } -
Algorithms for Interface Synthesis.
In W. Damm and
H. Hermanns, editors,
Proceedings of the 19th International Conference on
Computer Aided Verification
(CAV 2007, Berlin, July 3-7),
LNCS 4590,
pages 4-19,
2007.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-540-73368-3_4
Keyword(s):
Interfaces for Component-Based Design,
Software Model Checking
Publisher's Version
PDF
Abstract
A temporal interface for a software component is a finite automaton that specifies the legal sequences of calls to functions that are provided by the component. We compare and evaluate three different algorithms for automatically extracting temporal interfaces from program code: (1) a game algorithm that computes the interface as a representation of the most general environment strategy to avoid a safety violation; (2) a learning algorithm that repeatedly queries the program to construct the minimal interface automaton; and (3) a CEGAR algorithm that iteratively refines an abstract interface hypothesis by adding relevant program variables. For comparison purposes, we present and implement the three algorithms in a unifying formal setting. While the three algorithms compute the same output and have similar worst-case complexities, their actual running times may differ considerably for a given input program. On the theoretical side, we provide for each of the three algorithms a family of input programs on which that algorithm outperforms the two alternatives. On the practical side, we evaluate the three algorithms experimentally on a variety of Java libraries.BibTeX Entry
@inproceedings{CAV07b, author = {Dirk Beyer and Thomas A.~Henzinger and Vasu Singh}, title = {Algorithms for Interface Synthesis}, booktitle = {Proceedings of the 19th International Conference on Computer Aided Verification (CAV~2007, Berlin, July 3-7)}, editor = {W.~Damm and H.~Hermanns}, pages = {4-19}, year = {2007}, series = {LNCS~4590}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-540-73367-6}, doi = {10.1007/978-3-540-73368-3_4}, sha256 = {c8d7d6300d354fb38917b44cb9fbd3ebbad1737d75107ccc124af001677679ec}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2007-CAV.Algorithms_for_Interface_Synthesis.pdf}, abstract = {A temporal interface for a software component is a finite automaton that specifies the legal sequences of calls to functions that are provided by the component. We compare and evaluate three different algorithms for automatically extracting temporal interfaces from program code: (1) a game algorithm that computes the interface as a representation of the most general environment strategy to avoid a safety violation; (2) a learning algorithm that repeatedly queries the program to construct the minimal interface automaton; and (3) a CEGAR algorithm that iteratively refines an abstract interface hypothesis by adding relevant program variables. For comparison purposes, we present and implement the three algorithms in a unifying formal setting. While the three algorithms compute the same output and have similar worst-case complexities, their actual running times may differ considerably for a given input program. On the theoretical side, we provide for each of the three algorithms a family of input programs on which that algorithm outperforms the two alternatives. On the practical side, we evaluate the three algorithms experimentally on a variety of Java libraries.}, keyword = {Interfaces for Component-Based Design,Software Model Checking}, annote = {}, } -
Configurable Software Verification:
Concretizing the Convergence of
Model Checking and Program Analysis.
In W. Damm and
H. Hermanns, editors,
Proceedings of the 19th International Conference on
Computer Aided Verification
(CAV 2007, Berlin, July 3-7),
LNCS 4590,
pages 504-518,
2007.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-540-73368-3_51
Keyword(s):
BLAST,
Software Model Checking
Publisher's Version
PDF
Presentation
Abstract
In automatic software verification, we have observed a theoretical convergence of model checking and program analysis. In practice, however, model checkers are still mostly concerned with precision, e.g., the removal of spurious counterexamples; for this purpose they build and refine reachability trees. Lattice-based program analyzers, on the other hand, are primarily concerned with efficiency. We designed an algorithm and built a tool that can be configured to perform not only a purely tree-based or a purely lattice-based analysis, but offers many intermediate settings that have not been evaluated before. The algorithm and tool take one or more abstract interpreters, such as a predicate abstraction and a shape analysis, and configure their execution and interaction using several parameters. Our experiments show that such customization may lead to dramatic improvements in the precision-efficiency spectrum.BibTeX Entry
@inproceedings{CAV07a, author = {Dirk Beyer and Thomas A.~Henzinger and Gr{\'e}gory Th{\'e}oduloz}, title = {Configurable Software Verification: Concretizing the Convergence of Model Checking and Program Analysis}, booktitle = {Proceedings of the 19th International Conference on Computer Aided Verification (CAV~2007, Berlin, July 3-7)}, editor = {W.~Damm and H.~Hermanns}, pages = {504-518}, year = {2007}, series = {LNCS~4590}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-540-73367-6}, doi = {10.1007/978-3-540-73368-3_51}, sha256 = {1c5d0f57fdded4d659c5a86984e19307c50d7770442fa908db20430a22e4b276}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2007-CAV.Configurable_Software_Verification.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2007-07-07_CAV07_Configurable-Program-Analysis.pdf}, abstract = {In automatic software verification, we have observed a theoretical convergence of model checking and program analysis. In practice, however, model checkers are still mostly concerned with precision, e.g., the removal of spurious counterexamples; for this purpose they build and refine reachability trees. Lattice-based program analyzers, on the other hand, are primarily concerned with efficiency. We designed an algorithm and built a tool that can be configured to perform not only a purely tree-based or a purely lattice-based analysis, but offers many intermediate settings that have not been evaluated before. The algorithm and tool take one or more abstract interpreters, such as a predicate abstraction and a shape analysis, and configure their execution and interaction using several parameters. Our experiments show that such customization may lead to dramatic improvements in the precision-efficiency spectrum.}, keyword = {BLAST,Software Model Checking}, annote = {<a href="https://www.sosy-lab.org/research/pub/2007-CAV.Configurable_Software_Verification.Errata.txt"> Errata</a> available.}, }Additional Infos
Errata available. -
Path Invariants.
In Proceedings of the 2007 ACM Conference on
Programming Language Design and Implementation
(PLDI 2007, San Diego, CA, June 10-13),
pages 300-309,
2007.
ACM Press, New York (NY).
doi:10.1145/1250734.1250769
Keyword(s):
BLAST,
Software Model Checking
Publisher's Version
PDF
Abstract
The success of software verification depends on the ability to find a suitable abstraction of a program automatically. We propose a method for automated abstraction refinement which overcomes some limitations of current predicate discovery schemes. In current schemes, the cause of a false alarm is identified as an infeasible error path, and the abstraction is refined in order to remove that path. By contrast, we view the cause of a false alarm -the spurious counterexample- as a full-fledged program, namely, a fragment of the original program whose control-flow graph may contain loops and represent unbounded computations. There are two advantages to using such path programs as counterexamples for abstraction refinement. First, we can bring the whole machinery of program analysis to bear on path programs, which are typically small compared to the original program. Specifically, we use constraint-based invariant generation to automatically infer invariants of path programs -so-called path invariants. Second, we use path invariants for abstraction refinement in order to remove not one infeasibility at a time, but at once all (possibly infinitely many) infeasible error computations that are represented by a path program. Unlike previous predicate discovery schemes, our method handles loops without unrolling them; it infers abstractions that involve universal quantification and naturally incorporates disjunctive reasoning.BibTeX Entry
@inproceedings{PLDI07, author = {Dirk Beyer and Thomas A.~Henzinger and Rupak Majumdar and Andrey Rybalchenko}, title = {Path Invariants}, booktitle = {Proceedings of the 2007 ACM Conference on Programming Language Design and Implementation (PLDI~2007, San Diego, CA, June 10-13)}, pages = {300-309}, year = {2007}, publisher = {ACM Press, New York~(NY)}, isbn = {978-1-59593-633-2}, doi = {10.1145/1250734.1250769}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2007-PLDI.Path_Invariants.pdf}, abstract = {The success of software verification depends on the ability to find a suitable abstraction of a program automatically. We propose a method for automated abstraction refinement which overcomes some limitations of current predicate discovery schemes. In current schemes, the cause of a false alarm is identified as an infeasible error path, and the abstraction is refined in order to remove that path. By contrast, we view the cause of a false alarm ---the spurious counterexample--- as a full-fledged program, namely, a fragment of the original program whose control-flow graph may contain loops and represent unbounded computations. There are two advantages to using such path programs as counterexamples for abstraction refinement. First, we can bring the whole machinery of program analysis to bear on path programs, which are typically small compared to the original program. Specifically, we use constraint-based invariant generation to automatically infer invariants of path programs ---so-called path invariants. Second, we use path invariants for abstraction refinement in order to remove not one infeasibility at a time, but at once all (possibly infinitely many) infeasible error computations that are represented by a path program. Unlike previous predicate discovery schemes, our method handles loops without unrolling them; it infers abstractions that involve universal quantification and naturally incorporates disjunctive reasoning.}, keyword = {BLAST,Software Model Checking}, annote = {Video: <a href="https://www.youtube.com/watch?v=vUN0n23zVuw"> https://www.youtube.com/watch?v=vUN0n23zVuw</a><BR>}, }Additional Infos
Video: https://www.youtube.com/watch?v=vUN0n23zVuw -
Invariant Synthesis for Combined Theories.
In B. Cook and
A. Podelski, editors,
Proceedings of the Eighth International Conference on
Verification, Model Checking, and Abstract Interpretation
(VMCAI 2007, Nice, January 14-16),
LNCS 4349,
pages 378-394,
2007.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-540-69738-1_27
Keyword(s):
BLAST,
Software Model Checking
Publisher's Version
PDF
Abstract
We present a constraint-based algorithm for the synthesis of invariants expressed in the combined theory of linear arithmetic and uninterpreted function symbols. Given a set of programmer-specified invariant templates, our algorithm reduces the invariant synthesis problem to a sequence of arithmetic constraint satisfaction queries. Since the combination of linear arithmetic and uninterpreted functions is a widely applied predicate domain for program verification, our algorithm provides a powerful tool to statically and automatically reason about program correctness. The algorithm can also be used for the synthesis of invariants over arrays and set data structures, because satisfiability questions for the theories of sets and arrays can be reduced to the theory of linear arithmetic with uninterpreted functions. We have implemented our algorithm and used it to find invariants for a low-level memory allocator written in C.BibTeX Entry
@inproceedings{VMCAI07, author = {Dirk Beyer and Thomas A.~Henzinger and Rupak Majumdar and Andrey Rybalchenko}, title = {Invariant Synthesis for Combined Theories}, booktitle = {Proceedings of the Eighth International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI~2007, Nice, January 14-16)}, editor = {B.~Cook and A.~Podelski}, pages = {378-394}, year = {2007}, series = {LNCS~4349}, publisher = {Springer-Verlag, Heidelberg}, isbn = {978-3-540-69735-0}, doi = {10.1007/978-3-540-69738-1_27}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2007-VMCAI.Invariant_Synthesis_for_Combined_Theories.pdf}, abstract = {We present a constraint-based algorithm for the synthesis of invariants expressed in the combined theory of linear arithmetic and uninterpreted function symbols. Given a set of programmer-specified invariant templates, our algorithm reduces the invariant synthesis problem to a sequence of arithmetic constraint satisfaction queries. Since the combination of linear arithmetic and uninterpreted functions is a widely applied predicate domain for program verification, our algorithm provides a powerful tool to statically and automatically reason about program correctness. The algorithm can also be used for the synthesis of invariants over arrays and set data structures, because satisfiability questions for the theories of sets and arrays can be reduced to the theory of linear arithmetic with uninterpreted functions. We have implemented our algorithm and used it to find invariants for a low-level memory allocator written in C.}, keyword = {BLAST,Software Model Checking}, annote = {}, }
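The constraint-based invariant synthesis described in the VMCAI 2007 abstract above reduces the search for a template invariant to arithmetic constraint satisfaction queries. A minimal sketch of that general idea follows; it is not code from the paper, the loop and the one-parameter template are made up, and the z3-solver Python package is assumed only as a convenient stand-in for the constraint solving step.

    # Sketch: template-based invariant synthesis for the hypothetical loop
    #   x = 0; while (x < 10) x = x + 1;
    # with the safety goal x <= 10 at loop exit. The unknown template is
    # Inv(x) := x <= c; initiation, consecution, and safety become quantified
    # constraints over the template parameter c (z3-solver assumed).
    from z3 import Int, Solver, ForAll, Implies, And, sat

    x, c = Int("x"), Int("c")
    inv = lambda v: v <= c                                           # template with unknown c

    s = Solver()
    s.add(ForAll([x], Implies(x == 0, inv(x))))                      # initiation
    s.add(ForAll([x], Implies(And(inv(x), x < 10), inv(x + 1))))     # consecution
    s.add(ForAll([x], Implies(And(inv(x), x >= 10), x <= 10)))       # safety at loop exit

    if s.check() == sat:
        print("invariant found: x <=", s.model()[c])                 # expected: x <= 10
    else:
        print("no invariant of this template shape")

Solving the three quantified constraints fixes the single parameter to c = 10, which mirrors, on a toy scale, how the reduction turns invariant discovery into a satisfiability query over template parameters.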
2006
-
Animated Visualization of Software History
using Evolution Storyboards.
In Proceedings of the 13th IEEE Working Conference on
Reverse Engineering (WCRE 2006, Benevento, October 23-27),
pages 199-208,
2006.
IEEE Computer Society Press, Los Alamitos (CA).
doi:10.1109/WCRE.2006.14
Keyword(s):
Structural Analysis and Comprehension
Publisher's Version
PDF
Abstract
The understanding of the structure of a software system can be improved by analyzing the system's evolution during development. Visualizations of software history that provide only static views do not capture the dynamic nature of software evolution. We present a new visualization technique, the Evolution Storyboard, which provides dynamic views of the evolution of a software's structure. An evolution storyboard consists of a sequence of animated panels, which highlight the structural changes in the system; one panel for each considered time period. Using storyboards, engineers can spot good design, signs of structural decay, or the spread of cross cutting concerns in the code. We implemented our concepts in a tool, which automatically extracts software dependency graphs from version control repositories and computes storyboards based on panels for different time periods. For applying our approach in practice, we provide a step by step guide that others can follow along the storyboard visualizations, in order to study the evolution of large systems. We have applied our method to several large open source software systems. In this paper, we demonstrate that our method provides additional information (compared to static views) on the ArgoUML project, an open source UML modeling tool.BibTeX Entry
@inproceedings{WCRE06, author = {Dirk Beyer and Ahmed E.~Hassan}, title = {Animated Visualization of Software History using Evolution Storyboards}, booktitle = {Proceedings of the 13th IEEE Working Conference on Reverse Engineering (WCRE~2006, Benevento, October 23-27)}, pages = {199-208}, year = {2006}, publisher = {IEEE Computer Society Press, Los Alamitos~(CA)}, isbn = {1095-1350}, doi = {10.1109/WCRE.2006.14}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2006-WCRE.Animated_Visualization_of_Software_History_using_Evolution_Storyboards.pdf}, abstract = {The understanding of the structure of a software system can be improved by analyzing the system's evolution during development. Visualizations of software history that provide only static views do not capture the dynamic nature of software evolution. We present a new visualization technique, the Evolution Storyboard, which provides dynamic views of the evolution of a software's structure. An evolution storyboard consists of a sequence of animated panels, which highlight the structural changes in the system; one panel for each considered time period. Using storyboards, engineers can spot good design, signs of structural decay, or the spread of cross cutting concerns in the code. We implemented our concepts in a tool, which automatically extracts software dependency graphs from version control repositories and computes storyboards based on panels for different time periods. For applying our approach in practice, we provide a step by step guide that others can follow along the storyboard visualizations, in order to study the evolution of large systems. We have applied our method to several large open source software systems. In this paper, we demonstrate that our method provides additional information (compared to static views) on the ArgoUML project, an open source UML modeling tool.}, keyword = {Structural Analysis and Comprehension}, annote = {}, } -
Lazy Shape Analysis.
In T. Ball and
R.B. Jones, editors,
Proceedings of the 18th International Conference on
Computer Aided Verification
(CAV 2006, Seattle, WA, August 17-20),
LNCS 4144,
pages 532-546,
2006.
Springer-Verlag, Heidelberg.
doi:10.1007/11817963_48
Keyword(s):
BLAST,
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
Many software model checkers are based on predicate abstraction. If the verification goal depends on pointer structures, the approach does not work well, because it is difficult to find adequate predicate abstractions for the heap. In contrast, shape analysis, which uses graph-based heap abstractions, can provide a compact representation of recursive data structures. We integrate shape analysis into the software model checker BLAST. Because shape analysis is expensive, we do not apply it globally. Instead, we ensure that, like predicates, shape graphs are computed and stored locally, only where necessary for proving the verification goal. To achieve this, we extend lazy abstraction refinement, which so far has been used only for predicate abstractions, to three-valued logical structures. This approach does not only increase the precision of model checking, but it also increases the efficiency of shape analysis. We implemented the technique by extending BLAST with calls to TVLA.BibTeX Entry
@inproceedings{CAV06, author = {Dirk Beyer and Thomas A.~Henzinger and Gr{\'e}gory Th{\'e}oduloz}, title = {Lazy Shape Analysis}, booktitle = {Proceedings of the 18th International Conference on Computer Aided Verification (CAV~2006, Seattle, WA, August 17-20)}, editor = {T.~Ball and R.B.~Jones}, pages = {532-546}, year = {2006}, series = {LNCS~4144}, publisher = {Springer-Verlag, Heidelberg}, isbn = {3-540-37406-X}, doi = {10.1007/11817963_48}, sha256 = {83aadf3fff8dfe5462691e635eb3f21fda3bd09638cf78a94d71ca87efbd5390}, url = {http://www.sosy-lab.org/~dbeyer/blast_sa/}, pdf = {https://www.sosy-lab.org/research/pub/2006-CAV.Lazy_Shape_Analysis.pdf}, abstract = {Many software model checkers are based on predicate abstraction. If the verification goal depends on pointer structures, the approach does not work well, because it is difficult to find adequate predicate abstractions for the heap. In contrast, shape analysis, which uses graph-based heap abstractions, can provide a compact representation of recursive data structures. We integrate shape analysis into the software model checker BLAST. Because shape analysis is expensive, we do not apply it globally. Instead, we ensure that, like predicates, shape graphs are computed and stored locally, only where necessary for proving the verification goal. To achieve this, we extend lazy abstraction refinement, which so far has been used only for predicate abstractions, to three-valued logical structures. This approach does not only increase the precision of model checking, but it also increases the efficiency of shape analysis. We implemented the technique by extending BLAST with calls to TVLA.}, keyword = {BLAST,Software Model Checking}, annote = {An extended version of this paper appeared in Proc. Dagstuhl Seminar 06081, IBFI Schloss Dagstuhl, 2006: <BR> <a href="http://drops.dagstuhl.de/portals/06081/"> http://drops.dagstuhl.de/portals/06081/</a> <BR> Supplementary material: <a href="http://www.sosy-lab.org/~dbeyer/blast_sa/"> http://www.sosy-lab.org/~dbeyer/blast_sa/</a>}, }Additional Infos
An extended version of this paper appeared in Proc. Dagstuhl Seminar 06081, IBFI Schloss Dagstuhl, 2006:
http://drops.dagstuhl.de/portals/06081/
Supplementary material: http://www.sosy-lab.org/~dbeyer/blast_sa/ -
Evolution Storyboards:
Visualization of Software Structure Dynamics.
In Proceedings of the 14th IEEE International Conference on
Program Comprehension (ICPC 2006, Athens, June 14-16),
pages 248-251,
2006.
IEEE Computer Society Press, Los Alamitos (CA).
doi:10.1109/ICPC.2006.21
Keyword(s):
Structural Analysis and Comprehension
Publisher's Version
PDF
Abstract
Large software systems have a rich development history. Mining certain aspects of this rich history can reveal interesting insights into the system and its structure. Previous approaches to visualize the evolution of software systems provide static views. These static views often do not fully capture the dynamic nature of evolution. We introduce the Evolution Storyboard, a visualization which provides dynamic views of the evolution of a software's structure. Our tool implementation takes as input a series of software graphs, e.g., call graphs or co-change graphs, and automatically generates an evolution storyboard. To illustrate the concept, we present a storyboard for PostgreSQL, as a representative example for large open source systems. Evolution storyboards help to understand a system's structure and to reveal its possible decay over time. The storyboard highlights important changes in the structure during the lifetime of a software system, and how artifacts changed their dependencies over time.BibTeX Entry
@inproceedings{ICPC06, author = {Dirk Beyer and Ahmed E.~Hassan}, title = {Evolution Storyboards: Visualization of Software Structure Dynamics}, booktitle = {Proceedings of the 14th IEEE International Conference on Program Comprehension (ICPC~2006, Athens, June 14-16)}, pages = {248-251}, year = {2006}, publisher = {IEEE Computer Society Press, Los Alamitos~(CA)}, isbn = {0-7695-2601-2}, doi = {10.1109/ICPC.2006.21}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2006-ICPC.Evolution_Storyboards_Visualization_of_Software_Structure_Dynamics.pdf}, abstract = {Large software systems have a rich development history. Mining certain aspects of this rich history can reveal interesting insights into the system and its structure. Previous approaches to visualize the evolution of software systems provide static views. These static views often do not fully capture the dynamic nature of evolution. We introduce the Evolution Storyboard, a visualization which provides dynamic views of the evolution of a software's structure. Our tool implementation takes as input a series of software graphs, e.g., call graphs or co-change graphs, and automatically generates an evolution storyboard. To illustrate the concept, we present a storyboard for PostgreSQL, as a representative example for large open source systems. Evolution storyboards help to understand a system's structure and to reveal its possible decay over time. The storyboard highlights important changes in the structure during the lifetime of a software system, and how artifacts changed their dependencies over time.}, keyword = {Structural Analysis and Comprehension}, annote = {}, } -
Relational Programming with CrocoPat.
In Proceedings of the 28th ACM/IEEE International Conference on
Software Engineering (ICSE 2006, Shanghai, May 20-28),
pages 807-810,
2006.
ACM Press, New York (NY).
doi:10.1145/1134285.1134420
Keyword(s):
Structural Analysis and Comprehension
Publisher's Version
PDF
Supplement
Abstract
Many structural analyses of software systems are naturally formalized as relational queries, for example, the detection of design patterns, patterns of problematic design, code clones, dead code, and differences between the as-built and the as-designed architecture. This paper describes CrocoPat, an application-independent tool for relational programming. Through its efficiency and its expressive language, CrocoPat enables practically important analyses of real-world software systems that are not possible with other graph analysis tools, in particular analyses that involve transitive closures and the detection of patterns in graphs. The language is easy to use, because it is based on the well-known first-order predicate logic. The tool is easy to integrate into other software systems, because it is a small command-line tool that uses a simple text format for input and output of relations.BibTeX Entry
@inproceedings{ICSE06b, author = {Dirk Beyer}, title = {Relational Programming with {{\sc CrocoPat}}}, booktitle = {Proceedings of the 28th ACM/IEEE International Conference on Software Engineering (ICSE~2006, Shanghai, May 20-28)}, pages = {807-810}, year = {2006}, publisher = {ACM Press, New York~(NY)}, isbn = {1-59593-375-1}, doi = {10.1145/1134285.1134420}, url = {http://www.sosy-lab.org/~dbeyer/CrocoPat/}, pdf = {https://www.sosy-lab.org/research/pub/2006-ICSE.Relational_Programming_with_CrocoPat.pdf}, abstract = {Many structural analyses of software systems are naturally formalized as relational queries, for example, the detection of design patterns, patterns of problematic design, code clones, dead code, and differences between the as-built and the as-designed architecture. This paper describes CrocoPat, an application-independent tool for relational programming. Through its efficiency and its expressive language, CrocoPat enables practically important analyses of real-world software systems that are not possible with other graph analysis tools, in particular analyses that involve transitive closures and the detection of patterns in graphs. The language is easy to use, because it is based on the well-known first-order predicate logic. The tool is easy to integrate into other software systems, because it is a small command-line tool that uses a simple text format for input and output of relations.}, keyword = {Structural Analysis and Comprehension}, annote = {CrocoPat is available at: <a href="http://www.sosy-lab.org/~dbeyer/CrocoPat/"> http://www.sosy-lab.org/~dbeyer/CrocoPat/</a>}, }Additional Infos
CrocoPat is available at: http://www.sosy-lab.org/~dbeyer/CrocoPat/ -
Symbolic Invariant Verification for Systems
with Dynamic Structural Adaptation.
In Proceedings of the 28th ACM/IEEE International Conference on
Software Engineering (ICSE 2006, Shanghai, May 20-28),
pages 72-81,
2006.
ACM Press, New York (NY).
doi:10.1145/1134285.1134297
Keyword(s):
Software Model Checking
Publisher's Version
PDF
Abstract
The next generation of networked mechatronic systems will be characterized by complex coordination and structural adaptation at run-time. Crucial safety properties have to be guaranteed for all potential structural configurations. Testing cannot provide safety guarantees, while current model checking and theorem proving techniques do not scale for such systems. We present a verification technique for arbitrarily large multi-agent systems from the mechatronic domain, featuring complex coordination and structural adaptation. We overcome the limitations of existing techniques by exploiting the local character of structural safety properties. The system state is modeled as a graph, system transitions are modeled as rule applications in a graph transformation system, and safety properties of the system are encoded as inductive invariants (permitting the verification of infinite state systems). We developed a symbolic verification procedure that allows us to perform the computation on an efficient BDD-based graph manipulation engine, and we report performance results for several examples.BibTeX Entry
@inproceedings{ICSE06a, author = {Basil Becker and Dirk Beyer and Holger Giese and Florian Klein and Daniela Schilling}, title = {Symbolic Invariant Verification for Systems with Dynamic Structural Adaptation}, booktitle = {Proceedings of the 28th ACM/IEEE International Conference on Software Engineering (ICSE~2006, Shanghai, May 20-28)}, pages = {72-81}, year = {2006}, publisher = {ACM Press, New York~(NY)}, isbn = {1-59593-375-1}, doi = {10.1145/1134285.1134297}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2006-ICSE.Symbolic_Invariant_Verification_for_Systems_with_Dynamic_Structural_Adaptation.pdf}, abstract = {The next generation of networked mechatronic systems will be characterized by complex coordination and structural adaptation at run-time. Crucial safety properties have to be guaranteed for all potential structural configurations. Testing cannot provide safety guarantees, while current model checking and theorem proving techniques do not scale for such systems. We present a verification technique for arbitrarily large multi-agent systems from the mechatronic domain, featuring complex coordination and structural adaptation. We overcome the limitations of existing techniques by exploiting the local character of structural safety properties. The system state is modeled as a graph, system transitions are modeled as rule applications in a graph transformation system, and safety properties of the system are encoded as inductive invariants (permitting the verification of infinite state systems). We developed a symbolic verification procedure that allows us to perform the computation on an efficient BDD-based graph manipulation engine, and we report performance results for several examples.}, keyword = {Software Model Checking}, annote = {}, } -
A Tool for Verified Design using
Alloy for Specification and CrocoPat for Verification.
In D. Jackson and
P. Zave, editors,
Proceedings of the First Alloy Workshop
(ALLOY 2006, Portland, OR, November 6),
2006.
Keyword(s):
Structural Analysis and Comprehension
PDF
Abstract
The context of our work is a project that focuses on methods and tools for modeling enterprise architectures. An enterprise architecture model represents the structure of an enterprise across multiple levels, from the markets in which it operates down to the implementation of the technical systems that support its operation. These models are based on an ontology that defines the model elements and their relations. In this paper, we describe an efficient method to fully automatically verify the design that our modeling tool manages. We specify the ontology in Alloy, and use the efficient interpreter for relational programs CrocoPat to verify that the design fulfills all constraints specified in the ontology. Technically, we transform all constraints from Alloy into a relational program in CrocoPat's programming language. Then, we execute the relational program and feed it with a relational representation of the design as input, in order to check that the design element instances fulfill all constraints of the Alloy representation of the ontology. We also present the current limitations of our approach and how -by overcoming these limitations- we can develop an Alloy-based parameterized modeling tool.BibTeX Entry
@inproceedings{ALLOY06, author = {Alain Wegmann and Lam-Son Le and Lotfi Hussami and Dirk Beyer}, title = {A Tool for Verified Design using {Alloy} for Specification and {CrocoPat} for Verification}, booktitle = {Proceedings of the First Alloy Workshop (ALLOY~2006, Portland, OR, November 6)}, editor = {D.~Jackson and P.~Zave}, pages = {}, year = {2006}, publisher = {}, isbn = {}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2006-ALLOY.A_Tool_for_Verified_Design_using_Alloy_for_Specification_and_CrocoPat_for_Verification.pdf}, abstract = {The context of our work is a project that focuses on methods and tools for modeling enterprise architectures. An enterprise architecture model represents the structure of an enterprise across multiple levels, from the markets in which it operates down to the implementation of the technical systems that support its operation. These models are based on an ontology that defines the model elements and their relations. In this paper, we describe an efficient method to fully automatically verify the design that our modeling tool manages. We specify the ontology in Alloy, and use the efficient interpreter for relational programs CrocoPat to verify that the design fulfills all constraints specified in the ontology. Technically, we transform all constraints from Alloy into a relational program in CrocoPat's programming language. Then, we execute the relational program and feed it with a relational representation of the design as input, in order to check that the design element instances fulfill all constraints of the Alloy representation of the ontology. We also present the current limitations of our approach and how -by overcoming these limitations- we can develop an Alloy-based parameterized modeling tool.}, keyword = {Structural Analysis and Comprehension}, doinone = {DOI not available}, } -
Co-Change Visualization Applied to PostgreSQL and ArgoUML.
In Proceedings of the Third International Workshop on
Mining Software Repositories
(MSR 2006, Shanghai, May 22-23),
pages 165-166,
2006.
ACM Press.
doi:10.1145/1137983.1138023
Keyword(s):
Structural Analysis and Comprehension
Publisher's Version
PDF
Supplement
Abstract
Co-change visualization is a method to recover the subsystem structure of a software system from the version history, based on common changes and visual clustering. This paper presents the results of applying the tool CCVisu, which implements co-change visualization, to the two open-source software systems PostgreSQL and ArgoUML. The input of the method is the co-change graph, which can be easily extracted by CCVisu from a CVS version repository. The output is a graph layout that places software artifacts that were often commonly changed at close positions, and artifacts that were rarely co-changed at distant positions. This property of the layout is due to the clustering property of the underlying energy model, which evaluates the quality of a produced layout. The layout can be displayed on the screen, or saved to a file in SVG or VRML format.BibTeX Entry
@inproceedings{MSR06, author = {Dirk Beyer}, title = {Co-Change Visualization Applied to {PostgreSQL} and {ArgoUML}}, booktitle = {Proceedings of the Third International Workshop on Mining Software Repositories (MSR~2006, Shanghai, May 22-23)}, pages = {165-166}, year = {2006}, publisher = {ACM Press}, isbn = {1-59593-397-2}, doi = {10.1145/1137983.1138023}, url = {http://www.sosy-lab.org/~dbeyer/ccvisu_msr}, pdf = {https://www.sosy-lab.org/research/pub/2006-MSR.Co-Change_Visualization_Applied_to_PostgreSQL_and_ArgoUML.pdf}, abstract = {Co-change visualization is a method to recover the subsystem structure of a software system from the version history, based on common changes and visual clustering. This paper presents the results of applying the tool CCVisu, which implements co-change visualization, to the two open-source software systems PostgreSQL and ArgoUML. The input of the method is the co-change graph, which can be easily extracted by CCVisu from a CVS version repository. The output is a graph layout that places software artifacts that were often commonly changed at close positions, and artifacts that were rarely co-changed at distant positions. This property of the layout is due to the clustering property of the underlying energy model, which evaluates the quality of a produced layout. The layout can be displayed on the screen, or saved to a file in SVG or VRML format.}, keyword = {Structural Analysis and Comprehension}, annote = {}, }
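The co-change entries above (ICPC 2006, MSR 2006, and the 2005 papers below) all start from the same data structure: a co-change graph whose edge weights count how often two artifacts were changed in the same commit. The sketch below illustrates only this extraction step on made-up history records; it is not CCVisu's implementation, and the input format is hypothetical.

    # Sketch: build a weighted co-change graph from (commit_id, changed_file)
    # records. Each unordered file pair is mapped to the number of commits
    # that changed both files; a clustering layout would use these counts
    # as edge weights.
    from collections import defaultdict
    from itertools import combinations

    changes = [                      # made-up example history
        ("r1", "parser.c"), ("r1", "lexer.c"),
        ("r2", "parser.c"), ("r2", "lexer.c"), ("r2", "ast.h"),
        ("r3", "gui.c"),    ("r3", "icons.c"),
    ]

    files_per_commit = defaultdict(set)
    for commit, path in changes:
        files_per_commit[commit].add(path)

    co_change = defaultdict(int)     # edge weights of the co-change graph
    for paths in files_per_commit.values():
        for a, b in combinations(sorted(paths), 2):
            co_change[(a, b)] += 1

    for (a, b), weight in sorted(co_change.items(), key=lambda e: -e[1]):
        print(f"{a} -- {b}: co-changed in {weight} commit(s)")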
2005
-
Co-Change Visualization.
In Proceedings of the 21st IEEE International Conference on
Software Maintenance (ICSM 2005, Budapest, September 25-30),
Industrial and Tool volume,
pages 89-92,
2005.
Keyword(s):
Structural Analysis and Comprehension
PDF
Supplement
Abstract
Clustering layouts of software systems combine two important aspects: they reveal groups of related artifacts of the software system, and they produce a visualization of the results that is easy to understand. Co-change visualization is a lightweight method for computing clustering layouts of software systems for which the change history is available. This paper describes CCVisu, a tool that implements co-change visualization. It extracts the co-change graph from a version repository, and computes a layout, which positions the artifacts of the software system in a two- or three-dimensional space. Two artifacts are positioned close together in the layout if they were often changed together. The tool is designed as a framework, easy to use, and easy to integrate into reengineering environments; several formats for data interchange are already implemented. The graph layout is currently provided in VRML format, in a standard text format, or directly drawn on the screen.BibTeX Entry
@inproceedings{ICSM05, author = {Dirk Beyer}, title = {Co-Change Visualization}, booktitle = {Proceedings of the 21st IEEE International Conference on Software Maintenance (ICSM~2005, Budapest, September 25-30), Industrial and Tool volume}, pages = {89-92}, year = {2005}, isbn = {963-460-980-5}, url = {../../2005-ICSM.Co-Change_Visualization/main.html}, pdf = {https://www.sosy-lab.org/research/pub/2005-ICSM.Co-Change_Visualization.pdf}, abstract = {Clustering layouts of software systems combine two important aspects: they reveal groups of related artifacts of the software system, and they produce a visualization of the results that is easy to understand. Co-change visualization is a lightweight method for computing clustering layouts of software systems for which the change history is available. This paper describes CCVisu, a tool that implements co-change visualization. It extracts the co-change graph from a version repository, and computes a layout, which positions the artifacts of the software system in a two- or three-dimensional space. Two artifacts are positioned closed together in the layout if they were often changed together. The tool is designed as a framework, easy to use, and easy to integrate into reengineering environments; several formats for data interchange are already implemented. The graph layout is currently provided in VRML format, in a standard text format, or directly drawn on the screen.}, keyword = {Structural Analysis and Comprehension}, address = {Budapest}, annote = {Tool Paper <BR> CCVisu is available at: <a href="http://www.sosy-lab.org/~dbeyer/CCVisu/"> http://www.sosy-lab.org/~dbeyer/CCVisu</a>}, doinone = {DOI not available}, }Additional Infos
Tool Paper
CCVisu is available at: http://www.sosy-lab.org/~dbeyer/CCVisu -
Clustering Software Artifacts
Based on Frequent Common Changes.
In Proceedings of the 13th IEEE International Workshop on
Program Comprehension (IWPC 2005, St. Louis, MO, May 15-16),
pages 259-268,
2005.
IEEE Computer Society Press, Los Alamitos (CA).
doi:10.1109/WPC.2005.12
Keyword(s):
Structural Analysis and Comprehension
Publisher's Version
PDF
Supplement
Abstract
Changes of software systems are less expensive and less error-prone if they affect only one subsystem. Thus, clusters of artifacts that are frequently changed together are subsystem candidates. We introduce a two-step method for identifying such clusters. First, a model of common changes of software artifacts, called co-change graph, is extracted from the version control repository of the software system. Second, a layout of the co-change graph is computed that reveals clusters of frequently co-changed artifacts. We derive requirements for such layouts, and introduce an energy model for producing layouts that fulfill these requirements. We evaluate the method by applying it to three example systems, and comparing the resulting layouts to authoritative decompositions.BibTeX Entry
@inproceedings{IWPC05, author = {Dirk Beyer and Andreas Noack}, title = {Clustering Software Artifacts Based on Frequent Common Changes}, booktitle = {Proceedings of the 13th IEEE International Workshop on Program Comprehension (IWPC~2005, St. Louis, MO, May 15-16)}, pages = {259-268}, year = {2005}, publisher = {IEEE Computer Society Press, Los Alamitos~(CA)}, isbn = {0-7695-2254-8}, doi = {10.1109/WPC.2005.12}, url = {http://www.sosy-lab.org/~dbeyer/co-change}, pdf = {https://www.sosy-lab.org/research/pub/2005-IWPC.Clustering_Software_Artifacts_Based_on_Frequent_Common_Changes.pdf}, abstract = {Changes of software systems are less expensive and less error-prone if they affect only one subsystem. Thus, clusters of artifacts that are frequently changed together are subsystem candidates. We introduce a two-step method for identifying such clusters. First, a model of common changes of software artifacts, called co-change graph, is extracted from the version control repository of the software system. Second, a layout of the co-change graph is computed that reveals clusters of frequently co-changed artifacts. We derive requirements for such layouts, and introduce an energy model for producing layouts that fulfill these requirements. We evaluate the method by applying it to three example systems, and comparing the resulting layouts to authoritative decompositions.}, keyword = {Structural Analysis and Comprehension}, annote = {Supplementary material: <a href="http://www.sosy-lab.org/~dbeyer/co-change/"> http://www.sosy-lab.org/~dbeyer/co-change/</a>}, }Additional Infos
Supplementary material: http://www.sosy-lab.org/~dbeyer/co-change/ -
Web Service Interfaces.
In Proceedings of the 14th ACM International
World Wide Web Conference (WWW 2005, Chiba, May 10-14),
pages 148-159,
2005.
ACM Press, New York (NY).
doi:10.1145/1060745.1060770
Keyword(s):
Interfaces for Component-Based Design
Publisher's Version
PDF
Abstract
We present a language for specifying web service interfaces. A web service interface puts three kinds of constraints on the users of the service. First, the interface specifies the methods that can be called by a client, together with types of input and output parameters; these are called signature constraints. Second, the interface may specify propositional constraints on method calls and output values that may occur in a web service conversation; these are called consistency constraints. Third, the interface may specify temporal constraints on the ordering of method calls; these are called protocol constraints. The interfaces can be used to check, first, if two or more web services are compatible, and second, if a web service A can be safely substituted for a web service B. The algorithm for compatibility checking verifies that two or more interfaces fulfill each others' constraints. The algorithm for substitutivity checking verifies that service A demands fewer and fulfills more constraints than service B.BibTeX Entry
@inproceedings{WWW05, author = {Dirk Beyer and Arindam Chakrabarti and Thomas A. Henzinger}, title = {Web Service Interfaces}, booktitle = {Proceedings of the 14th ACM International World Wide Web Conference (WWW~2005, Chiba, May 10-14)}, pages = {148-159}, year = {2005}, publisher = {ACM Press, New York~(NY)}, isbn = {1-59593-046-9}, doi = {10.1145/1060745.1060770}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2005-WWW.Web_Service_Interfaces.pdf}, abstract = {We present a language for specifying web service interfaces. A web service interface puts three kinds of constraints on the users of the service. First, the interface specifies the methods that can be called by a client, together with types of input and output parameters; these are called signature constraints. Second, the interface may specify propositional constraints on method calls and output values that may occur in a web service conversation; these are called consistency constraints. Third, the interface may specify temporal constraints on the ordering of method calls; these are called protocol constraints. The interfaces can be used to check, first, if two or more web services are compatible, and second, if a web service A can be safely substituted for a web service B. The algorithm for compatibility checking verifies that two or more interfaces fulfill each others' constraints. The algorithm for substitutivity checking verifies that service A demands fewer and fulfills more constraints than service B.}, keyword = {Interfaces for Component-Based Design}, annote = {}, } -
Checking Memory Safety with Blast.
In M. Cerioli, editor,
Proceedings of the Eighth International Conference on
Fundamental Approaches to Software Engineering (FASE 2005, Edinburgh, April 2-10),
LNCS 3442,
pages 2-18,
2005.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-540-31984-9_2
Keyword(s):
BLAST,
Software Model Checking
Publisher's Version
PDF
Abstract
BLAST is an automatic verification tool for checking temporal safety properties of C programs. Given a C program and a temporal safety property, BLAST statically proves that either the program satisfies the safety property or the program has an execution trace that exhibits a violation of the property. BLAST constructs, explores, and refines abstractions of the program state space based on lazy predicate abstraction and interpolation-based predicate discovery. We show how BLAST can be used to statically prove memory safety for C programs. We take a two-step approach. First, we use CCured, a type-based memory safety analyzer, to annotate with run-time checks all program points that cannot be proved memory safe by the type system. Second, we use BLAST to remove as many of the run-time checks as possible (by proving that these checks never fail), and to generate for the remaining run-time checks execution traces that witness them fail. Our experience shows that BLAST can remove many of the run-time checks added by CCured and provide useful information to the programmer about many of the remaining checks.BibTeX Entry
@inproceedings{FASE05, author = {Dirk Beyer and Thomas A. Henzinger and Ranjit Jhala and Rupak Majumdar}, title = {Checking Memory Safety with {{\sc Blast}}}, booktitle = {Proceedings of the Eighth International Conference on Fundamental Approaches to Software Engineering (FASE~2005, Edinburgh, April 2-10)}, editor = {M.~Cerioli}, pages = {2-18}, year = {2005}, series = {LNCS~3442}, publisher = {Springer-Verlag, Heidelberg}, isbn = {3-540-25420-X}, doi = {10.1007/978-3-540-31984-9_2}, sha256 = {8216a41d893b4e987c3a19b82a0c8be06aa7f5bc9b35ef26ab226b95aea241b3}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2005-FASE.Checking_Memory_Safety_with_Blast.pdf}, abstract = {BLAST is an automatic verification tool for checking temporal safety properties of C programs. Given a C program and a temporal safety property, BLAST statically proves that either the program satisfies the safety property or the program has an execution trace that exhibits a violation of the property. BLAST constructs, explores, and refines abstractions of the program state space based on lazy predicate abstraction and interpolation-based predicate discovery. We show how BLAST can be used to statically prove memory safety for C programs. We take a two-step approach. First, we use CCured, a type-based memory safety analyzer, to annotate with run-time checks all program points that cannot be proved memory safe by the type system. Second, we use BLAST to remove as many of the run-time checks as possible (by proving that these checks never fail), and to generate for the remaining run-time checks execution traces that witness them fail. Our experience shows that BLAST can remove many of the run-time checks added by CCured and provide useful information to the programmer about many of the remaining checks.}, keyword = {BLAST,Software Model Checking}, annote = {}, } -
An Interface Formalism for Web Services.
In Proceedings of the First International Workshop on
Foundations of Interface Technologies (FIT 2005, San Francisco, CA, August 21),
2005.
Keyword(s):
Interfaces for Component-Based Design
PDF
Supplement
Abstract
Web application development using distributed components and web services presents new software integration challenges, because solutions often cross vendor, administrative, and other boundaries across which neither binary nor source code can be shared. We present a methodology that addresses this problem through a formalism for specifying and manipulating behavioral interfaces of multi-threaded open software components that communicate with each other through method calls. An interface constrains both the implementation and the user of a web service to fulfill certain assumptions that are specified by the interface. Our methodology consists of three increasingly expressive classes of interfaces. Signature interfaces specify the methods that can be invoked by the user, together with parameters. Consistency interfaces add propositional constraints, enhancing signature interfaces with the ability to specify choice and causality. Protocol interfaces specify, in addition, temporal ordering constraints on method invocations. We provide approaches to check if two or more interfaces are compatible; if a web service can be safely substituted for another one; and if a web service satisfies a specification that represents a desired behavioral property.BibTeX Entry
@inproceedings{FIT05, author = {Dirk Beyer and Arindam Chakrabarti and Thomas A. Henzinger}, title = {An Interface Formalism for Web Services}, booktitle = {Proceedings of the First International Workshop on Foundations of Interface Technologies (FIT~2005, San Francisco, CA, August 21)}, pages = {}, year = {2005}, isbn = {}, url = {http://infoscience.epfl.ch/search?recid=114605&ln=en}, pdf = {https://www.sosy-lab.org/research/pub/2007-EPFL-TR002.An_Interface_Formalism_for_Web_Services.pdf}, abstract = {Web application development using distributed components and web services presents new software integration challenges, because solutions often cross vendor, administrative, and other boundaries across which neither binary nor source code can be shared. We present a methodology that addresses this problem through a formalism for specifying and manipulating behavioral interfaces of multi-threaded open software components that communicate with each other through method calls. An interface constrains both the implementation and the user of a web service to fulfill certain assumptions that are specified by the interface. Our methodology consists of three increasingly expressive classes of interfaces. Signature interfaces specify the methods that can be invoked by the user, together with parameters. Consistency interfaces add propositional constraints, enhancing signature interfaces with the ability to specify choice and causality. Protocol interfaces specify, in addition, temporal ordering constraints on method invocations. We provide approaches to check if two or more interfaces are compatible; if a web service can be safely substituted for another one; and if a web service satisfies a specification that represents a desired behavioral property.}, keyword = {Interfaces for Component-Based Design}, annote = {}, doinone = {DOI not available}, }
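The web-service interface papers above (WWW 2005, FIT 2005) layer protocol constraints, i.e. temporal ordering constraints on method calls, on top of signature and consistency constraints. The sketch below is a rough illustration of that idea only, not the formalism or tooling from the papers: a protocol interface is encoded as a small automaton over method names, and client call sequences are checked against it; all names and transitions are hypothetical.

    # Sketch: a protocol interface as a finite automaton over method names.
    # accepts() returns True only if every call is allowed in the current
    # state and the conversation ends in an accepting state.
    class ProtocolInterface:
        def __init__(self, initial, transitions, accepting):
            self.initial = initial
            self.transitions = transitions        # (state, method) -> next state
            self.accepting = accepting

        def accepts(self, calls):
            state = self.initial
            for method in calls:
                key = (state, method)
                if key not in self.transitions:
                    return False                  # call not allowed in this state
                state = self.transitions[key]
            return state in self.accepting

    cart_protocol = ProtocolInterface(
        initial="empty",
        transitions={
            ("empty", "add_item"): "filled",
            ("filled", "add_item"): "filled",
            ("filled", "checkout"): "done",
        },
        accepting={"done"},
    )

    print(cart_protocol.accepts(["add_item", "add_item", "checkout"]))  # True
    print(cart_protocol.accepts(["checkout"]))                          # False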
2004
-
The Blast Query Language for Software Verification.
In R. Giacobazzi, editor,
Proceedings of the 11th International
Static Analysis Symposium (SAS 2004, Verona, August 26-28),
LNCS 3148,
pages 2-18,
2004.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-540-27864-1_2
Keyword(s):
BLAST,
Software Model Checking
Publisher's Version
PDF
Abstract
BLAST is an automatic verification tool for checking temporal safety properties of C programs. BLAST is based on lazy predicate abstraction driven by interpolation-based predicate discovery. In this paper, we present the BLAST specification language. The language specifies program properties at two levels of precision. At the lower level, monitor automata are used to specify temporal safety properties of program executions (traces). At the higher level, relational reachability queries over program locations are used to combine lower-level trace properties. The two-level specification language can be used to break down a verification task into several independent calls of the model-checking engine. In this way, each call to the model checker may have to analyze only part of the program, or part of the specification, and may thus succeed in a reduction of the number of predicates needed for the analysis. In addition, the two-level specification language provides a means for structuring and maintaining specifications.BibTeX Entry
@inproceedings{SAS04, author = {Dirk Beyer and Adam J. Chlipala and Thomas A. Henzinger and Ranjit Jhala and Rupak Majumdar}, title = {The {{\sc Blast}} Query Language for Software Verification}, booktitle = {Proceedings of the 11th International Static Analysis Symposium (SAS~2004, Verona, August 26-28)}, editor = {R.~Giacobazzi}, pages = {2-18}, year = {2004}, series = {LNCS~3148}, publisher = {Springer-Verlag, Heidelberg}, isbn = {3-540-22791-1}, doi = {10.1007/978-3-540-27864-1_2}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2004-SAS.The_Blast_Query_Language_for_Software_Verification.pdf}, abstract = {BLAST is an automatic verification tool for checking temporal safety properties of C programs. BLAST is based on lazy predicate abstraction driven by interpolation-based predicate discovery. In this paper, we present the BLAST specification language. The language specifies program properties at two levels of precision. At the lower level, monitor automata are used to specify temporal safety properties of program executions (traces). At the higher level, relational reachability queries over program locations are used to combine lower-level trace properties. The two-level specification language can be used to break down a verification task into several independent calls of the model-checking engine. In this way, each call to the model checker may have to analyze only part of the program, or part of the specification, and may thus succeed in a reduction of the number of predicates needed for the analysis. In addition, the two-level specification language provides a means for structuring and maintaining specifications.}, keyword = {BLAST,Software Model Checking}, annote = {}, } -
An Eclipse Plug-in for Model Checking.
In Proceedings of the 12th IEEE International Workshop on
Program Comprehension (IWPC 2004, Bari, June 24-26),
pages 251-255,
2004.
IEEE Computer Society Press, Los Alamitos (CA).
doi:10.1109/WPC.2004.1311069
Keyword(s):
BLAST,
Software Model Checking
Publisher's Version
PDF
Abstract
While model checking has been successful in uncovering subtle bugs in code, its adoption in software engineering practice has been hampered by the absence of a simple interface to the programmer in an integrated development environment. We describe an integration of the software model checker BLAST into the Eclipse development environment. We provide a verification interface for practical solutions to some typical program analysis problems (assertion checking, reachability analysis, dead code analysis, and test generation) directly on the source code. The analysis is completely automatic, and assumes no knowledge of model checking or formal notation. Moreover, the interface supports incremental program verification to support incremental design and evolution of code.BibTeX Entry
@inproceedings{IWPC04, author = {Dirk Beyer and Thomas A. Henzinger and Ranjit Jhala and Rupak Majumdar}, title = {An {Eclipse} Plug-in for Model Checking}, booktitle = {Proceedings of the 12th IEEE International Workshop on Program Comprehension (IWPC~2004, Bari, June 24-26)}, pages = {251-255}, year = {2004}, publisher = {IEEE Computer Society Press, Los Alamitos~(CA)}, isbn = {0-7695-2149-5}, doi = {10.1109/WPC.2004.1311069}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2004-IWPC.An_Eclipse_Plug-in_for_Model_Checking.pdf}, abstract = {While model checking has been successful in uncovering subtle bugs in code, its adoption in software engineering practice has been hampered by the absence of a simple interface to the programmer in an integrated development environment. We describe an integration of the software model checker BLAST into the Eclipse development environment. We provide a verification interface for practical solutions for some typical program analysis problems --assertion checking, reachability analysis, dead code analysis, and test generation-- directly on the source code. The analysis is completely automatic, and assumes no knowledge of model checking or formal notation. Moreover, the interface supports incremental program verification to support incremental design and evolution of code.}, keyword = {BLAST,Software Model Checking}, annote = {}, } -
Generating Tests from Counterexamples.
In Proceedings of the 26th IEEE International Conference on
Software Engineering (ICSE 2004, Edinburgh, May 26-28),
pages 326-335,
2004.
IEEE Computer Society Press, Los Alamitos (CA).
doi:10.1109/ICSE.2004.1317455
Keyword(s):
BLAST,
Software Model Checking
Publisher's Version
PDF
Abstract
We have extended the software model checker BLAST to automatically generate test suites that guarantee full coverage with respect to a given predicate. More precisely, given a C program and a target predicate p, BLAST determines the set L of program locations which program execution can reach with p true, and automatically generates a set of test vectors that exhibit the truth of p at all locations in L. We have used BLAST to generate test suites and to detect dead code in C programs with up to 30K lines of code. The analysis and test-vector generation is fully automatic (no user intervention) and exact (no false positives).BibTeX Entry
@inproceedings{ICSE04, author = {Dirk Beyer and Adam J. Chlipala and Thomas A. Henzinger and Ranjit Jhala and Rupak Majumdar}, title = {Generating Tests from Counterexamples}, booktitle = {Proceedings of the 26th IEEE International Conference on Software Engineering (ICSE~2004, Edinburgh, May 26-28)}, pages = {326-335}, year = {2004}, publisher = {IEEE Computer Society Press, Los Alamitos~(CA)}, isbn = {0-7695-2163-0}, doi = {10.1109/ICSE.2004.1317455}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2004-ICSE.Generating_Tests_from_Counterexamples.pdf}, abstract = {We have extended the software model checker BLAST to automatically generate test suites that guarantee full coverage with respect to a given predicate. More precisely, given a C program and a target predicate p, BLAST determines the set L of program locations which program execution can reach with p true, and automatically generates a set of test vectors that exhibit the truth of p at all locations in L. We have used BLAST to generate test suites and to detect dead code in C programs with up to 30K lines of code. The analysis and test-vector generation is fully automatic (no user intervention) and exact (no false positives).}, keyword = {BLAST,Software Model Checking}, annote = {}, }
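As a rough illustration of the idea behind the paper above (and not of BLAST's actual algorithm), the following Python sketch summarizes each path to a target location by the branch conditions along it and searches for an input that satisfies them; such an input is a test vector covering that location, and a path without a witness is a candidate for dead code. The toy program, its paths, and the brute-force search standing in for a theorem prover are all hypothetical.

# Sketch of the idea behind test generation from counterexamples (not
# BLAST's actual algorithm): a path to a target location is summarized by
# the branch conditions along it; any input satisfying the conjunction is a
# test vector that drives execution to that location.  The toy "program"
# and its paths below are hypothetical, and a brute-force search over a
# small input range stands in for the theorem prover used by the tool.

from itertools import product

# Paths of a hypothetical program with inputs x and y.  Each path is the
# list of branch conditions that must hold for execution to reach the
# target label at its end.
paths_to_target = {
    "ERROR_1": [lambda x, y: x > 3, lambda x, y: y == x + 1],
    "ERROR_2": [lambda x, y: x <= 3, lambda x, y: x * y == 12],
    "DEAD":    [lambda x, y: x > 3, lambda x, y: x < 0],   # infeasible
}

def generate_test(conditions, lo=-10, hi=10):
    """Return an input (x, y) satisfying all path conditions, or None."""
    for x, y in product(range(lo, hi + 1), repeat=2):
        if all(cond(x, y) for cond in conditions):
            return (x, y)
    return None   # no witness found: the path is infeasible

suite = {}
for target, conditions in paths_to_target.items():
    vector = generate_test(conditions)
    if vector is not None:
        suite[target] = vector       # test vector covering this location
    else:
        print(f"{target}: no feasible path found (candidate dead code)")

print(suite)   # {'ERROR_1': (4, 5), 'ERROR_2': (-6, -2)}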
2003
-
Simple and Efficient Relational Querying
of Software Structures.
In Proceedings of the Tenth IEEE Working Conference on
Reverse Engineering (WCRE 2003, Victoria, BC, November 13-16),
pages 216-225,
2003.
IEEE Computer Society Press, Los Alamitos (CA).
doi:10.1109/WCRE.2003.1287252
Keyword(s):
Structural Analysis and Comprehension
Publisher's Version
PDF
Supplement
Abstract
Many analyses of software systems can be formalized as relational queries, for example the detection of design patterns, of patterns of problematic design, of code clones, of dead code, and of differences between the as-built and the as-designed architecture. This paper describes the concepts of CrocoPat, a tool for querying and manipulating relations. CrocoPat is easy to use, because of its simple query and manipulation language based on predicate calculus, and its simple file format for relations. CrocoPat is efficient, because it internally represents relations as binary decision diagrams, a data structure that is well-known as a compact representation of large relations in computer-aided verification. CrocoPat is general, because it manipulates not only graphs (i.e. binary relations), but n-ary relations.BibTeX Entry
@inproceedings{WCRE03, author = {Dirk Beyer and Andreas Noack and Claus Lewerentz}, title = {Simple and Efficient Relational Querying of Software Structures}, booktitle = {Proceedings of the Tenth IEEE Working Conference on Reverse Engineering (WCRE~2003, Victoria, BC, November 13-16)}, pages = {216-225}, year = {2003}, publisher = {IEEE Computer Society Press, Los Alamitos~(CA)}, isbn = {0-7695-2027-8}, doi = {10.1109/WCRE.2003.1287252}, url = {http://www.sosy-lab.org/~dbeyer/CrocoPat/}, pdf = {https://www.sosy-lab.org/research/pub/2003-WCRE.Simple_and_Efficient_Relational_Querying_of_Software_Structures.pdf}, abstract = {Many analyses of software systems can be formalized as relational queries, for example the detection of design patterns, of patterns of problematic design, of code clones, of dead code, and of differences between the as-built and the as-designed architecture. This paper describes the concepts of CrocoPat, a tool for querying and manipulating relations. CrocoPat is easy to use, because of its simple query and manipulation language based on predicate calculus, and its simple file format for relations. CrocoPat is efficient, because it internally represents relations as binary decision diagrams, a data structure that is well-known as a compact representation of large relations in computer-aided verification. CrocoPat is general, because it manipulates not only graphs (i.e. binary relations), but n-ary relations.}, keyword = {Structural Analysis and Comprehension}, annote = {CrocoPat's concepts, an introduction to the BDD-based implementation, software analysis applications, and performance measurements.<BR>}, }Additional Infos
CrocoPat's concepts, an introduction to the BDD-based implementation, software analysis applications, and performance measurements.
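CrocoPat itself provides a predicate-calculus query language and a BDD-based engine; the following Python sketch only illustrates the underlying idea of treating software structure as relations (sets of tuples) and analyses as relational operations, here a dead-code query over a hypothetical call relation.

# The paper's tool has its own predicate-calculus query language and a
# BDD-based engine; this is only a minimal Python sketch of the underlying
# idea: software structure as relations (sets of tuples) and analyses as
# relational operations.  The call graph below is a hypothetical example.

# Binary relation CALLS(caller, callee) extracted from some code base.
CALLS = {
    ("main", "parse"), ("main", "report"),
    ("parse", "lex"), ("report", "format"),
    ("legacy_dump", "format"),              # never called from main
}

def compose(r, s):
    """Relational composition: {(a, c) | (a, b) in r and (b, c) in s}."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def transitive_closure(r):
    closure = set(r)
    while True:
        extended = closure | compose(closure, r)
        if extended == closure:
            return closure
        closure = extended

REACHES = transitive_closure(CALLS)
functions = {f for pair in CALLS for f in pair}

# "Dead code" query: functions not reachable from main (and not main itself).
dead = {f for f in functions
        if f != "main" and ("main", f) not in REACHES}
print(sorted(dead))   # ['legacy_dump']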
-
Can Decision Diagrams Overcome State Space Explosion
in Real-Time Verification?.
In H. König,
M. Heiner, and
A. Wolisz, editors,
Proceedings of the 23rd IFIP International Conference on
Formal Techniques for Networked and Distributed Systems
(FORTE 2003, Berlin, September 29 - October 2),
LNCS 2767,
pages 193-208,
2003.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-540-39979-7_13
Keyword(s):
Formal Verification of Real-Time Systems
Publisher's Version
PDF
Abstract
In this paper we analyze the efficiency of binary decision diagrams (BDDs) and clock difference diagrams (CDDs) in the verification of timed automata. To this end, we present analytical and empirical complexity results for three communication protocols. The contributions of the analyses are: Firstly, they show that BDDs and CDDs of polynomial size exist for the reachability sets of the three protocols. This is the first evidence that CDDs can grow only polynomially for models with non-trivial state space explosion. Secondly, they show that CDD-based tools, which currently use at least exponential space for two of the protocols, will only find polynomial-size CDDs if they use better variable orders, as the BDD-based tool Rabbit does. Finally, they give insight into the dependency of the BDD and CDD size on properties of the model, in particular the number of automata and the magnitude of the clock values.BibTeX Entry
@inproceedings{FORTE03, author = {Dirk Beyer and Andreas Noack}, title = {Can Decision Diagrams Overcome State Space Explosion in Real-Time Verification?}, booktitle = {Proceedings of the 23rd IFIP International Conference on Formal Techniques for Networked and Distributed Systems (FORTE~2003, Berlin, September 29 - October 2)}, editor = {H.~K{\"o}nig and M.~Heiner and A.~Wolisz}, pages = {193-208}, year = {2003}, series = {LNCS~2767}, publisher = {Springer-Verlag, Heidelberg}, isbn = {3-540-20175-0}, doi = {10.1007/978-3-540-39979-7_13}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2003-FORTE.Can_Decision_Diagrams_Overcome_State_Space_Explosion_in_Real-Time_Verification.pdf}, abstract = {In this paper we analyze the efficiency of binary decision diagrams (BDDs) and clock difference diagrams (CDDs) in the verification of timed automata. Therefore we present analytical and empirical complexity results for three communication protocols. The contributions of the analyses are: Firstly, they show that BDDs and CDDs of polynomial size exist for the reachability sets of the three protocols. This is the first evidence that CDDs can grow only polynomially for models with non-trivial state space explosion. Secondly, they show that CDD-based tools, which currently use at least exponential space for two of the protocols, will only find polynomial-size CDDs if they use better variable orders, as the BDD-based tool Rabbit does. Finally, they give insight into the dependency of the BDD and CDD size on properties of the model, in particular the number of automata and the magnitude of the clock values.}, keyword = {Formal Verification of Real-Time Systems}, annote = {Analysis of the efficiency of binary decision diagrams (BDDs) and clock difference diagrams (CDDs) in the verification of timed automata. Analytical and empirical complexity results for three communication protocols.}, }Additional Infos
Analysis of the efficiency of binary decision diagrams (BDDs) and clock difference diagrams (CDDs) in the verification of timed automata. Analytical and empirical complexity results for three communication protocols. -
Rabbit: A Tool for BDD-Based Verification
of Real-Time Systems.
In W. A. Hunt and
F. Somenzi, editors,
Proceedings of the 15th International Conference on
Computer Aided Verification (CAV 2003, Boulder, CO, July 8-12),
LNCS 2725,
pages 122-125,
2003.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-540-45069-6_13
Keyword(s):
Formal Verification of Real-Time Systems
Publisher's Version
PDF
Abstract
This paper gives a short overview of a model checking tool for real-time systems. The modeling language is timed automata extended with concepts for modular modeling. The tool provides reachability analysis and refinement checking, both implemented using the data structure BDD. Good variable orderings for the BDDs are computed from the modular structure of the model and an estimate of the BDD size. This leads to a significant performance improvement compared to the tool RED and the BDD-based version of Kronos.BibTeX Entry
@inproceedings{CAV03, author = {Dirk Beyer and Claus Lewerentz and Andreas Noack}, title = {Rabbit: A Tool for {BDD}-Based Verification of Real-Time Systems}, booktitle = {Proceedings of the 15th International Conference on Computer Aided Verification (CAV~2003, Boulder, CO, July 8-12)}, editor = {W.~A.~Hunt and F.~Somenzi}, pages = {122-125}, year = {2003}, series = {LNCS~2725}, publisher = {Springer-Verlag, Heidelberg}, isbn = {3-540-40524-0}, doi = {10.1007/978-3-540-45069-6_13}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2003-CAV.Rabbit_A_Tool_for_BDD-Based_Verification_of_Real-Time_Systems.pdf}, abstract = {This paper gives a short overview of a model checking tool for real-time systems. The modeling language are timed automata extended with concepts for modular modeling. The tool provides reachability analysis and refinement checking, both implemented using the data structure BDD. Good variable orderings for the BDDs are computed from the modular structure of the model and an estimate of the BDD size. This leads to a significant performance improvement compared to the tool RED and the BDD-based version of Kronos.}, keyword = {Formal Verification of Real-Time Systems}, annote = {Online: <a href="http://springerlink.metapress.com/openurl.asp?genre=article&issn=0302-9743&volume=2725&spage=122"> http://springerlink.metapress.com/openurl.asp?genre=article&issn=0302-9743&volume=2725&spage=122</a> <BR> A description of the BDD-based tool's main features.}, }Additional Infos
Online: http://springerlink.metapress.com/openurl.asp?genre=article&issn=0302-9743&volume=2725&spage=122
A description of the BDD-based tool's main features. -
CrocoPat: Efficient Pattern Analysis
in Object-Oriented Programs.
In Proceedings of the 11th IEEE International Workshop on
Program Comprehension (IWPC 2003, Portland, OR, May 10-11),
pages 294-295,
2003.
IEEE Computer Society Press, Los Alamitos (CA).
doi:10.1109/WPC.2003.1199220
Keyword(s):
Structural Analysis and Comprehension
Publisher's Version
PDF
Supplement
Abstract
Automatic pattern-based recognition of design weaknesses has been a research topic for almost 10 years. Reports about experiments with existing approaches reveal two major problems. First, a notation for easy and flexible specification of the patterns is missing, so only a restricted set of patterns is applicable because of the limitations of the specification language. Second, performance must be improved, because the computation time of existing tools is too high to be acceptable for large real-world systems.
The tool CrocoPat satisfies the following three requirements: (1) The analysis is done automatically by the tool, i.e., without user interaction. (2) The properties of a system are specified in an easy and flexible way, because the patterns are described by relational expressions. On demand, users can define new patterns they are interested in, or change existing patterns to solve specific problems. (3) The tool is able to analyze large object-oriented programs (1'000 to 10'000 classes) in acceptable time.BibTeX Entry
@inproceedings{IWPC03, author = {Dirk Beyer and Claus Lewerentz}, title = {{{\sc CrocoPat}}: Efficient Pattern Analysis in Object-Oriented Programs}, booktitle = {Proceedings of the 11th IEEE International Workshop on Program Comprehension (IWPC~2003, Portland, OR, May 10-11)}, pages = {294-295}, year = {2003}, publisher = {IEEE Computer Society Press, Los Alamitos~(CA)}, isbn = {0-7695-1883-4}, doi = {10.1109/WPC.2003.1199220}, url = {http://www.sosy-lab.org/~dbeyer/CrocoPat/}, pdf = {https://www.sosy-lab.org/research/pub/2003-IWPC.CrocoPat_Efficient_Pattern_Analysis_in_Object-Oriented_Programs.pdf}, abstract = {Automatic pattern-based recognition of design weakness is a research topic since almost 10 years. Reports about experiments with existing approaches reveal two major problems: A notation for easy and flexible specification of the pattern is missing; only a restricted set of patterns is applicable because of the limitations of the specification language. Performance improvement is needed, because the computation time of existing tools is to high to be acceptable for large real-world systems. <BR> The tool CrocoPat satisfies the following three requirements: (1) The analysis is done automatically by the tool, i.e. without user interaction. (2) The properties of a system are specified in an easy and flexible way because the patterns are described by relational expressions. On demand the user is able to define new patterns he is interested in, or to change existing patterns to solve specific problems. (3) The tool is able to analyze large object-oriented programs (1'000 to 10'000~classes) in acceptable time.}, keyword = {Structural Analysis and Comprehension}, annote = {Introduction of a BDD-based tool for pattern analysis and a short overview of the main features of CrocoPat.}, }Additional Infos
Introduction of a BDD-based tool for pattern analysis and a short overview of the main features of CrocoPat.
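In the same spirit as the sketch after the WCRE 2003 entry above, a design-weakness pattern can be phrased as a relational query; the hypothetical example below flags classes involved in a dependency cycle. It again only illustrates the idea, not CrocoPat's language or its BDD-based evaluation.

# Another hypothetical pattern in the same relational style as the sketch
# above: the classic design weakness "classes involved in a dependency
# cycle", phrased as a query over a USES relation.  Neither the relation
# contents nor the query language of the tool itself is shown here.

USES = {
    ("Order", "Customer"), ("Customer", "Address"),
    ("Invoice", "Order"), ("Order", "Invoice"),     # Order <-> Invoice cycle
}

def transitive_closure(r):
    closure = set(r)
    while True:
        step = {(a, d) for (a, b) in closure for (c, d) in r if b == c}
        if step <= closure:
            return closure
        closure |= step

on_a_cycle = {a for (a, b) in transitive_closure(USES) if a == b}
print(sorted(on_a_cycle))   # ['Invoice', 'Order']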
2001
-
Efficient Reachability Analysis and
Refinement Checking of Timed Automata using BDDs.
In T. Margaria and
T. F. Melham, editors,
Proceedings of the 11th IFIP
Advanced Research Working Conference on
Correct Hardware Design and Verification Methods
(CHARME 2001, Livingston, September 4-7),
LNCS 2144,
pages 86-91,
2001.
Springer-Verlag, Heidelberg.
doi:10.1007/3-540-44798-9_6
Keyword(s):
Formal Verification of Real-Time Systems
Publisher's Version
PDF
Abstract
For the formal specification and verification of real-time systems we use the modular formalism Cottbus Timed Automata (CTA), which is an extension of timed automata [AD94]. Matrix-based algorithms for the reachability analysis of timed automata are implemented in tools like Kronos, Uppaal, HyTech, and Rabbit. A new BDD-based version of Rabbit, which also supports refinement checking, is now available.BibTeX Entry
@inproceedings{CHARME01, author = {Dirk Beyer}, title = {Efficient Reachability Analysis and Refinement Checking of Timed Automata using {BDD}s}, booktitle = {Proceedings of the 11th IFIP Advanced Research Working Conference on Correct Hardware Design and Verification Methods (CHARME~2001, Livingston, September 4-7)}, editor = {T.~Margaria and T.~F.~Melham}, pages = {86-91}, year = {2001}, series = {LNCS~2144}, publisher = {Springer-Verlag, Heidelberg}, isbn = {3-540-42541-1}, doi = {10.1007/3-540-44798-9_6}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2001-CHARME.Efficient_Reachability_Analysis_and_Refinement_Checking_of_Timed_Automata_using_BDDs.pdf}, abstract = {For the formal specification and verification of real-time systems we use the modular formalism Cottbus Timed Automata (CTA), which is an extension of timed automata [AD94]. Matrix-based algorithms for the reachability analysis of timed automata are implemented in tools like Kronos, Uppaal, HyTech and Rabbit. A new BDD-based version of Rabbit, which supports also refinement checking, is now available.}, keyword = {Formal Verification of Real-Time Systems}, annote = {Online: <a href="http://link.springer.de/link/service/series/0558/bibs/2144/21440086.htm"> http://link.springer.de/link/service/series/0558/bibs/2144/21440086.htm</a> <BR> Decribes how the tool checks refinement via simulation relation.}, }Additional Infos
Online: http://link.springer.de/link/service/series/0558/bibs/2144/21440086.htm
Describes how the tool checks refinement via a simulation relation. -
Improvements in BDD-Based Reachability Analysis
of Timed Automata.
In J. N. Oliveira and
P. Zave, editors,
Proceedings of the Tenth International Symposium of
Formal Methods Europe (FME 2001, Berlin, March 12-16):
Formal Methods for Increasing Software Productivity,
LNCS 2021,
pages 318-343,
2001.
Springer-Verlag, Heidelberg.
doi:10.1007/3-540-45251-6_18
Keyword(s):
Formal Verification of Real-Time Systems
Publisher's Version
PDF
Abstract
To develop efficient algorithms for the reachability analysis of timed automata, a promising approach is to use binary decision diagrams (BDDs) as data structure for the representation of the explored state space. The size of a BDD is very sensitive to the ordering of the variables. We use the communication structure to deduce an estimate of the BDD size. In our experiments, this guides the choice of good variable orderings, which leads to an efficient reachability analysis. We develop a discrete semantics for closed timed automata to get a finite state space required by the BDD-based representation, and we prove the equivalence to the continuous semantics regarding the set of reachable locations. An upper bound for the size of the BDD representing the transition relation and an estimate of the set of reachable configurations based on the communication structure are given. We implemented these concepts in the verification tool Rabbit [BR00]. Different case studies justify our conjecture: Polynomial reachability analysis seems to be possible for some classes of real-time models, which have a good-natured communication structure.BibTeX Entry
@inproceedings{FME01, author = {Dirk Beyer}, title = {Improvements in {BDD}-Based Reachability Analysis of Timed Automata}, booktitle = {Proceedings of the Tenth International Symposium of Formal Methods Europe (FME~2001, Berlin, March 12-16): Formal Methods for Increasing Software Productivity}, editor = {J.~N.~Oliveira and P.~Zave}, pages = {318-343}, year = {2001}, series = {LNCS~2021}, publisher = {Springer-Verlag, Heidelberg}, isbn = {3-540-41791-5}, doi = {10.1007/3-540-45251-6_18}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2001-FME.Improvements_in_BDD-Based_Reachability_Analysis_of_Timed_Automata.pdf}, abstract = {To develop efficient algorithms for the reachability analysis of timed automata, a promising approach is to use binary decision diagrams (BDDs) as data structure for the representation of the explored state space. The size of a BDD is very sensitive to the ordering of the variables. We use the communication structure to deduce an estimation for the BDD size. In our experiments, this guides the choice of good variable orderings, which leads to an efficient reachability analysis. We develop a discrete semantics for closed timed automata to get a finite state space required by the BDD-based representation and we prove the equivalence to the continuous semantics regarding the set of reachable locations. An upper bound for the size of the BDD representing the transition relation and an estimation for the set of reachable configurations based on the communication structure is given. We implemented these concepts in the verification tool Rabbit [BR00]. Different case studies justify our conjecture: Polynomial reachability analysis seems to be possible for some classes of real-time models, which have a good-natured communication structure.}, keyword = {Formal Verification of Real-Time Systems}, annote = {Online: <a href="http://link.springer.de/link/service/series/0558/bibs/2021/20210318.htm"> http://link.springer.de/link/service/series/0558/bibs/2021/20210318.htm</a> <BR> Discretization of Timed Automata, BDD-based representation, proof of an upper bound for the BDD of the transition relation, BDD variable ordering, heuristics for efficient verification, contains the proof of the equivalence of our integer semantics to the continuous semantics regarding reachable locations.}, }Additional Infos
Online: http://link.springer.de/link/service/series/0558/bibs/2021/20210318.htm
Discretization of Timed Automata, BDD-based representation, proof of an upper bound for the BDD of the transition relation, BDD variable ordering, heuristics for efficient verification, contains the proof of the equivalence of our integer semantics to the continuous semantics regarding reachable locations. -
Impact of Inheritance on Metrics for
Size, Coupling, and Cohesion in Object Oriented Systems.
In R. Dumke and
A. Abran, editors,
Proceedings of the Tenth International Workshop on
Software Measurement (IWSM 2000, Berlin, October 4-6):
New Approaches in Software Measurement,
LNCS 2006,
pages 1-17,
2001.
Springer-Verlag, Heidelberg.
doi:10.1007/3-540-44704-0_1
Keyword(s):
Structural Analysis and Comprehension
Publisher's Version
PDF
Abstract
In today's engineering of object oriented systems many different metrics are used to get feedback about design quality and to automatically identify design weaknesses. While the concept of inheritance is covered by special inheritance metrics its impact on other classical metrics (like size, coupling or cohesion metrics) is not considered; this can yield misleading measurement values and false interpretations. In this paper we present an approach to work the concept of inheritance into classical metrics (and with it the related concepts of overriding, overloading and polymorphism). This is done by some language dependent flattening functions that modify the data on which the measurement will be done. These functions are implemented within our metrics tool Crocodile and are applied for a case study: the comparison of the measurement values of the original data with the measurement values of the flattened data yields interesting results and improves the power of classical measurements for interpretation.BibTeX Entry
@inproceedings{IWSM00, author = {Dirk Beyer and Claus Lewerentz and Frank Simon}, title = {Impact of Inheritance on Metrics for Size, Coupling, and Cohesion in Object Oriented Systems}, booktitle = {Proceedings of the Tenth International Workshop on Software Measurement (IWSM~2000, Berlin, October 4-6): New Approaches in Software Measurement}, editor = {R.~Dumke and A.~Abran}, pages = {1-17}, year = {2001}, series = {LNCS~2006}, publisher = {Springer-Verlag, Heidelberg}, isbn = {3-540-41727-3}, doi = {10.1007/3-540-44704-0_1}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2000-IWSM.Impact_of_Inheritance_on_Metrics.pdf}, abstract = {In today's engineering of object oriented systems many different metrics are used to get feedback about design quality and to automatically identify design weaknesses. While the concept of inheritance is covered by special inheritance metrics its impact on other classical metrics (like size, coupling or cohesion metrics) is not considered; this can yield misleading measurement values and false interpretations. In this paper we present an approach to work the concept of inheritance into classical metrics (and with it the related concepts of overriding, overloading and polymorphism). This is done by some language dependent <i>flattening</i> functions that modify the data on which the measurement will be done. These functions are implemented within our metrics tool <i>Crocodile</i> and are applied for a case study: the comparison of the measurement values of the original data with the measurement values of the flattened data yields interesting results and improves the power of classical measurements for interpretation.}, keyword = {Structural Analysis and Comprehension}, annote = {Online: <a href="http://link.springer.de/link/service/series/0558/bibs/2006/20060001.htm"> http://link.springer.de/link/service/series/0558/bibs/2006/20060001.htm</a> <BR>}, }Additional Infos
Online: http://link.springer.de/link/service/series/0558/bibs/2006/20060001.htm
-
Rabbit: Verification of Real-Time Systems.
In P. Pettersson and
S. Yovine, editors,
Proceedings of the Workshop on Real-Time Tools
(RT-TOOLS 2001, Aalborg, August 20),
pages 13-21,
2001.
Keyword(s):
Formal Verification of Real-Time Systems
PDF
Abstract
This paper gives a short overview of a model checking tool for Cottbus Timed Automata, which is a modular modeling language based on timed and hybrid automata. For timed automata, the current version of the tool provides BDD-based verification using an integer semantics. Reachability analysis as well as refinement checking is possible. To find good variable orderings it uses the component structure of the model and an upper bound for the BDD size. For hybrid automata, reachability analysis based on the double description method is implemented.BibTeX Entry
@inproceedings{RT-TOOLS01, author = {Dirk Beyer}, title = {Rabbit: Verification of Real-Time Systems}, booktitle = {Proceedings of the Workshop on Real-Time Tools (RT-TOOLS~2001, Aalborg, August 20)}, editor = {P.~Pettersson and S.~Yovine}, pages = {13-21}, year = {2001}, isbn = {}, pdf = {https://www.sosy-lab.org/research/pub/2001-RT-TOOLS.Rabbit_Verification_of_Real-Time_Systems.pdf}, abstract = {This paper gives a short overview of a model checking tool for Cottbus Timed Automata, which is a modular modeling language based on timed and hybrid automata. For timed automata, the current version of the tool provides BDD-based verification using an integer semantics. Reachability analysis as well as refinement checking is possible. To find good variable orderings it uses the component structure of the model and an upper bound for the BDD size. For hybrid automata, reachability analysis based on the double description method is implemented.}, keyword = {Formal Verification of Real-Time Systems}, address = {Uppsala}, annote = {}, doinone = {DOI not available}, } -
Efficient Verification of Timed Automata using BDDs.
In S. Gnesi and
U. Ultes-Nitsche, editors,
Proceedings of the Sixth International ERCIM Workshop on
Formal Methods for Industrial Critical Systems
(FMICS 2001, Paris, July 16-17),
pages 95-113,
2001.
INRIA, Paris.
Keyword(s):
Formal Verification of Real-Time Systems
PDF
Abstract
This paper investigates the efficient reachability analysis of timed automata. It describes a discretization of time which preserves the reachability properties. The discretization makes it possible to represent sets of configurations of timed automata as binary decision diagrams (BDDs). Further techniques, like computing good variable orderings, are applied to use the full potential of BDDs as a compact and canonical representation of large sets. We implemented these concepts within the tool Rabbit. The highly improved performance is shown for some example models. For additional speedup we used an on-the-fly algorithm and refinement checking for large models.BibTeX Entry
@inproceedings{FMICS01, author = {Dirk Beyer and Andreas Noack}, title = {Efficient Verification of Timed Automata using {BDD}s}, booktitle = {Proceedings of the Sixth International ERCIM Workshop on Formal Methods for Industrial Critical Systems (FMICS~2001, Paris, July 16-17)}, editor = {S.~Gnesi and U.~Ultes-Nitsche}, pages = {95-113}, year = {2001}, publisher = {INRIA, Paris}, isbn = {}, pdf = {https://www.sosy-lab.org/research/pub/2001-FMICS.Efficient_Verification_of_Timed_Automata_using_BDDs.pdf}, abstract = {This paper investigates the efficient reachability analysis of timed automata. It describes a discretization of time which preserves the reachability properties. The discretization allows to represent sets of configurations of timed automata as binary decision diagrams (BDDs). Further techniques, like computing good variable orderings, are applied to use the full potential of BDDs as compact and canonical representation of large sets. We implemented these concepts within the tool Rabbit. The highly improved performance is shown for some example models. For additional speedup we used an on-the-fly algorithm and refinement checking for large models.}, keyword = {Formal Verification of Real-Time Systems}, annote = {}, doinone = {DOI not available}, } -
Different Strategies for
BDD-Based Reachability Analysis of Timed Automata.
In C. Rattray,
M. Sveda, and
J. Rozenblit, editors,
Proceedings of the Second IEEE/IFIP Joint Workshop on
Formal Specifications of
Computer-Based Systems (FSCBS 2001, Washington, D.C., April 20),
pages 89-98,
2001.
Keyword(s):
Formal Verification of Real-Time Systems
BibTeX Entry
@inproceedings{FSCBS01b, author = {Dirk Beyer and Andy Heinig}, title = {Different Strategies for {BDD}-Based Reachability Analysis of Timed Automata}, booktitle = {Proceedings of the Second IEEE/IFIP Joint Workshop on Formal Specifications of Computer-Based Systems (FSCBS~2001, Washington, D.C., April 20)}, editor = {C.~Rattray and M.~Sveda and J.~Rozenblit}, pages = {89-98}, year = {2001}, isbn = {}, keyword = {Formal Verification of Real-Time Systems}, address = {Stirling}, annote = {}, doinone = {DOI not available}, } -
Cottbus Timed Automata: Formal Definition and Semantics.
In C. Rattray,
M. Sveda, and
J. Rozenblit, editors,
Proceedings of the Second IEEE/IFIP Joint Workshop on
Formal Specifications of
Computer-Based Systems (FSCBS 2001, Washington, D.C., April 20),
pages 75-87,
2001.
Keyword(s):
Formal Verification of Real-Time Systems
PDF
Abstract
We present a formalism for modular modelling of hybrid systems, the Cottbus Timed Automata. For the theoretical basis, we build on work about timed and hybrid automata. We use concepts from concurrency theory to model communication of separately defined modules, but we extend these concepts to be able to express explicitly read- and write-access to signals and variables.BibTeX Entry
@inproceedings{FSCBS01a, author = {Dirk Beyer and Heinrich Rust}, title = {{C}ottbus {T}imed {A}utomata: Formal Definition and Semantics}, booktitle = {Proceedings of the Second IEEE/IFIP Joint Workshop on Formal Specifications of Computer-Based Systems (FSCBS~2001, Washington, D.C., April 20)}, editor = {C.~Rattray and M.~Sveda and J.~Rozenblit}, pages = {75-87}, year = {2001}, isbn = {}, pdf = {https://www.sosy-lab.org/research/pub/2001-FSCBS.Cottbus_Timed_Automata_Formal_Definition_and_Compositional_Semantics.revised.pdf}, abstract = {We present a formalism for modular modelling of hybrid systems, the Cottbus Timed Automata. For the theoretical basis, we build on work about timed and hybrid automata. We use concepts from concurrency theory to model communication of separately defined modules, but we extend these concepts to be able to express explicitly read- and write-access to signals and variables.}, keyword = {Formal Verification of Real-Time Systems}, address = {Stirling}, annote = {The pdf is a revised version of the original paper. <BR> The full formal definition and semantics of CTA.}, doinone = {DOI not available}, }Additional Infos
The pdf is a revised version of the original paper.
The full formal definition and semantics of CTA.
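The following Python sketch is only a loose illustration of the modular style described above, not the CTA formalism or its semantics: two toy modules communicate through an explicit interface of signals with declared write and read access, and a breadth-first search explores the reachable states of their synchronized composition. All module, state, and signal names are invented.

# Illustrative sketch only, not the CTA formalism or its semantics: two toy
# modules communicate through an explicit interface of signals, each signal
# carrying an access mode (write vs. read), and the composition synchronizes
# a writer's transition with the transitions of the reader of that signal.

from collections import deque

class Module:
    def __init__(self, name, initial, transitions, writes, reads):
        self.name = name
        self.initial = initial
        self.transitions = transitions      # set of (state, signal, state')
        self.writes = writes                # signals this module may emit
        self.reads = reads                  # signals this module reacts to

    def successors(self, state, signal):
        return [t for (s, sig, t) in self.transitions
                if s == state and sig == signal]

# Hypothetical modules: a sensor that raises and clears an alarm signal,
# and a controller that reacts to it.
sensor = Module("sensor", "idle",
                {("idle", "alarm", "tripped"), ("tripped", "clear", "idle")},
                writes={"alarm", "clear"}, reads=set())
controller = Module("controller", "normal",
                    {("normal", "alarm", "shutdown"),
                     ("shutdown", "clear", "normal")},
                    writes=set(), reads={"alarm", "clear"})

def reachable(writer, reader):
    """Breadth-first search over the synchronized composition of one
    writing module and one reading module."""
    init = (writer.initial, reader.initial)
    seen, queue = {init}, deque([init])
    while queue:
        w, r = queue.popleft()
        for sig in writer.writes:
            for w2 in writer.successors(w, sig):
                # a reader of the signal must take a matching transition;
                # a module that does not read it simply stays put
                targets = (reader.successors(r, sig)
                           if sig in reader.reads else [r])
                for r2 in targets:
                    if (w2, r2) not in seen:
                        seen.add((w2, r2))
                        queue.append((w2, r2))
    return seen

print(sorted(reachable(sensor, controller)))
# [('idle', 'normal'), ('tripped', 'shutdown')]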
2000
-
A Tool for Modular Modelling and Verification
of Hybrid Systems.
In A. Crespo and
J. Vila, editors,
Proceedings of the 25th IFAC/IFIP Workshop on
Real-Time Programming (WRTP 2000, Palma, May 17-19),
pages 169-174,
2000.
Elsevier Science, Oxford.
doi:10.1016/s1474-6670(17)39950-0
Keyword(s):
Formal Verification of Real-Time Systems
Publisher's Version
BibTeX Entry
@inproceedings{WRTP00, author = {Dirk Beyer and Heinrich Rust}, title = {A Tool for Modular Modelling and Verification of Hybrid Systems}, booktitle = {Proceedings of the 25th IFAC/IFIP Workshop on Real-Time Programming (WRTP~2000, Palma, May 17-19)}, editor = {A.~Crespo and J.~Vila}, pages = {169-174}, year = {2000}, publisher = {Elsevier Science, Oxford}, isbn = {0-08-043686-2}, doi = {10.1016/s1474-6670(17)39950-0}, url = {}, keyword = {Formal Verification of Real-Time Systems}, annote = {Also as preprint: Proc. WRTP'00, pages 181-186, Valencia, 2000. <BR> The reference for the first version of the tool using the double decription method (DDM) for hybrid systems.}, }Additional Infos
Also as preprint: Proc. WRTP'00, pages 181-186, Valencia, 2000.
The reference for the first version of the tool using the double description method (DDM) for hybrid systems. -
BDD-basierte Verifikation von Realzeit-Systemen.
In J. Grabowski and
S. Heymer, editors,
Tagungsband Formale Beschreibungstechniken für
verteilte Systeme (FBT 2000, Lübeck, June 22-23),
pages 79-89,
2000.
Shaker Verlag, Aachen.
Keyword(s):
Formal Verification of Real-Time Systems
PDF
Abstract
This paper addresses the efficient reachability analysis of timed automata. We describe a discretization of time that preserves reachability properties. It makes it possible to represent sets of configurations of timed automata as binary decision diagrams (BDDs). The compact BDD representation of large sets requires suitable variable orderings. To determine them, we use structural information from the modeling notation Cottbus Timed Automata. We demonstrate the achieved efficiency improvements with measurements.BibTeX Entry
@inproceedings{FBT00, author = {Dirk Beyer and Andreas Noack}, title = {{BDD}-basierte {V}erifikation von {R}ealzeit-{S}ystemen}, booktitle = {Tagungsband Formale Beschreibungstechniken f{\"u}r verteilte Systeme (FBT~2000, L{\"u}beck, June 22-23)}, editor = {J.~Grabowski and S.~Heymer}, pages = {79-89}, year = {2000}, publisher = {Shaker Verlag, Aachen}, isbn = {}, pdf = {https://www.sosy-lab.org/research/pub/2000-FBT.BDD-basierte_Verifikation_von_Realzeit-Systemen.pdf}, abstract = {Diese Arbeit behandelt die effiziente Erreichbarkeitsanalyse von Timed Automata. Wir beschreiben eine Erreichbarkeitseigenschaften erhaltende Diskretisierung der Zeit. Diese ermöglicht es, Konfigurationsmengen von Timed Automata als Binary Decision Diagrams (BDDs) darzustellen. Die kompakte BDD-Repräsentation großer Mengen erfordert geeignete Variablenordnungen. Zur deren Bestimmung nutzen wir Strukturinformationen aus der Modellierungsnotation Cottbus Timed Automaton. Wir belegen die erzielten Effizienzverbesserungen durch Meßwerte.}, keyword = {Formal Verification of Real-Time Systems}, annote = {}, doinone = {DOI not available}, } -
Modular Modelling and Verification with
Cottbus Timed Automata.
In C. Rattray and
M. Sveda, editors,
Proceedings of the IEEE/IFIP Joint Workshop on
Formal Specifications of
Computer-Based Systems (FSCBS 2000, Edinburgh, April 6-7),
pages 17-24,
2000.
Keyword(s):
Formal Verification of Real-Time Systems
Abstract
A new modelling notation and a verification tool for hybrid systems are introduced: the Cottbus Timed Automaton (CTA). In contrast to existing modelling concepts, the new formalism has the advantage of being able to model hybrid systems as a modular structure of components which communicate through the elements of an explicitly defined interface. The interface consists of signals and variables declared with different access modes. This paper describes how to model a system and how to verify it. The current version of the tool using the double description method to represent the regions is presented.BibTeX Entry
@inproceedings{FSCBS00, author = {Dirk Beyer and Heinrich Rust}, title = {Modular Modelling and Verification with {C}ottbus {T}imed {A}utomata}, booktitle = {Proceedings of the IEEE/IFIP Joint Workshop on Formal Specifications of Computer-Based Systems (FSCBS~2000, Edinburgh, April 6-7)}, editor = {C.~Rattray and M.~Sveda}, pages = {17-24}, year = {2000}, isbn = {}, abstract = {A new modelling notation and a verification tool for hybrid systems is introduced: The Cottbus Timed Automaton (CTA). In contrast to existing modelling concepts, the new formalism has the advantage to be capable of modelling hybrid systems as a modular structure of components which communicate through the elements of an explicitly defined interface. The interface consists of signals and variables declared with different access modes. This paper describes how to model a system and how to verify it. The current version of the tool using the double description method to represent the regions is presented.}, keyword = {Formal Verification of Real-Time Systems}, address = {Stirling}, annote = {}, doinone = {DOI not available}, } -
Modelling and Analysing a Railroad Crossing in a Modular Way.
In S. Gnesi,
I. Schieferdecker, and
A. Rennoch, editors,
Proceedings of the Fifth International ERCIM Workshop on
Formal Methods for Industrial Critical Systems
(FMICS 2000, Berlin, April 3-4),
pages 287-303,
2000.
Keyword(s):
Formal Verification of Real-Time Systems
PDF
Abstract
One problem of modelling hybrid systems with existing notations of hybrid automata is that there is no modular structure in the model. We introduce an extended modelling notation which allows the modelling of a system as a hierarchical structure of modules. The modules are capable of communicating through the elements of an explicitly defined interface. The interface consists of signals and variables declared with different access modes. This paper describes a model of the railroad crossing example and how to verify it. The current version of a tool for reachability analysis using the double description method to represent symbolically the sets of reachable configurations is presented.BibTeX Entry
@inproceedings{FMICS00, author = {Dirk Beyer and Claus Lewerentz and Heinrich Rust}, title = {Modelling and Analysing a Railroad Crossing in a Modular Way}, booktitle = {Proceedings of the Fifth International ERCIM Workshop on Formal Methods for Industrial Critical Systems (FMICS~2000, Berlin, April 3-4)}, editor = {S.~Gnesi and I.~Schieferdecker and A.~Rennoch}, pages = {287-303}, year = {2000}, isbn = {}, pdf = {https://www.sosy-lab.org/research/pub/2000-FMICS.Modelling_and_Analysing_a_Railroad_Crossing_in_a_Modular_Way.pdf}, abstract = {One problem of modelling hybrid systems with existing notations of hybrid automata is that there is no modular structure in the model. We introduce an extended modelling notation which allows the modelling of a system as a hierarchical structure of modules. The modules are capable of communicating through the elements of an explicitly defined interface. The interface consists of signals and variables declared with different access modes. This paper describes a model of the railroad crossing example and how to verify it. The current version of a tool for reachability analysis using the double description method to represent symbolically the sets of reachable configurations is presented.}, keyword = {Formal Verification of Real-Time Systems}, address = {Berlin}, annote = {Describes a case study for modeling and analysis using the DDM-based representation.}, doinone = {DOI not available}, }Additional Infos
Describes a case study for modeling and analysis using the DDM-based representation.
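The analysis in the paper above represents sets of configurations of the hybrid model with the double description method; the following Python sketch is a drastically simplified stand-in that uses bounded integer clocks and explicit-state search over a made-up train/gate model of the crossing, just to show what a reachability-based safety check looks like. All guards, invariants, and constants are invented.

# A drastically simplified, discrete-time sketch of the railroad-crossing
# analysis: instead of the polyhedra-based (double description)
# reachability of the paper, clocks are bounded integers and the product
# of a toy train and gate model is explored explicitly.  Guards,
# invariants, and constants are made up for illustration.

from collections import deque

MAX = 5                              # clock ceiling keeps the space finite

def successors(state):
    train, gate, x, y = state
    succs = []
    # time elapses for both clocks, unless the gate's invariant (y <= 1
    # while lowering or raising) would be violated
    if gate not in ("lowering", "raising") or y < 1:
        succs.append((train, gate, min(x + 1, MAX), min(y + 1, MAX)))
    # approach: only possible while the gate is open; resets both clocks
    if train == "far" and gate == "open":
        succs.append(("near", "lowering", 0, 0))
    # the gate finishes lowering/raising after one time unit
    if gate == "lowering" and y >= 1:
        succs.append((train, "closed", x, y))
    if gate == "raising" and y >= 1:
        succs.append((train, "open", x, y))
    # the train needs at least 2 time units to reach the crossing ...
    if train == "near" and x >= 2:
        succs.append(("in", gate, x, y))
    # ... and leaves again at x >= 3, which starts raising the gate
    if train == "in" and gate == "closed" and x >= 3:
        succs.append(("far", "raising", x, 0))
    return succs

init = ("far", "open", 0, 0)
seen, queue = {init}, deque([init])
while queue:
    for s in successors(queue.popleft()):
        if s not in seen:
            seen.add(s)
            queue.append(s)

unsafe = [s for s in seen if s[0] == "in" and s[1] != "closed"]
print(len(seen), "reachable states;", "unsafe:", unsafe)   # unsafe: []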
1999
-
Concepts of Cottbus Timed Automata.
In K. Spies and
B. Schätz, editors,
Tagungsband Formale Beschreibungstechniken für
verteilte Systeme (FBT 1999, München, June 17-18),
pages 27-34,
1999.
Herbert Utz Verlag, München.
Keyword(s):
Formal Verification of Real-Time Systems
PDF
Abstract
Today, many industrial production cells are controlled by software. Many such systems have to deal with requirements which the developer has to guarantee. Because of the complexity of the implementation, one of the main problems in developing software for reactive systems is to be sure that such properties are fulfilled. One way to handle these problems is to use formal methods: this means developing a formal model which is used to prove the properties of the specification with tool support.
There are many different methods to model such reactive systems. Some of these abstract from real-time aspects of the system. We chose a problem area where we have real-time requirements, for example the throughput of the modelled production cell. So we have to use formal methods which support models of real-time systems.BibTeX Entry
@inproceedings{FBT99, author = {Dirk Beyer and Heinrich Rust}, title = {Concepts of {C}ottbus {T}imed {A}utomata}, booktitle = {Tagungsband Formale Beschreibungstechniken f{\"u}r verteilte Systeme (FBT~1999, M{\"u}nchen, June 17-18)}, editor = {K.~Spies and B.~Sch{\"a}tz}, pages = {27-34}, year = {1999}, publisher = {Herbert Utz Verlag, M{\"u}nchen}, isbn = {}, pdf = {https://www.sosy-lab.org/research/pub/1999-FBT.Concepts_of_Cottbus_Timed_Automata.pdf}, abstract = {Today, many industrial production cells are controlled by software. Many such systems have to deal with requirements which the developer has to guarantee. Because of the complexity of the implementation one of the main problems for developing the software for reactive systems is to be sure that such properties are fulfilled. One way to handle the problems is to use formal methods: This means to develop a formal model which is used to prove the properties of the specification with tool support. <BR> There are many different methods to model such reactive systems. Some of these abstract from real-time aspects of the system. We chose a problem area where we have real-time requirements, for example the throughput of the modelled production cell. So we have to use formal methods which support models of real-time systems.}, keyword = {Formal Verification of Real-Time Systems}, annote = {}, doinone = {DOI not available}, }
1998
-
Modeling a Production Cell as a Distributed
Real-Time System with Cottbus Timed Automata.
In H. König and
P. Langendörfer, editors,
Tagungsband Formale Beschreibungstechniken für
verteilte Systeme (FBT 1998, Cottbus, June 4-5),
pages 148-159,
1998.
Shaker Verlag, Aachen.
Keyword(s):
Formal Verification of Real-Time Systems
PDF
Abstract
We build on work in designing modeling languages for hybrid systems in the development of CTA, the Cottbus Timed Automata. Our design features a facility to specify a hybrid system modularly and hierarchically, communication through CSP-like synchronizations but with special support to specify explicitly the different roles which the interface signals and variables of a module play, and the ability to instantiate recurring elements several times from a template. Continuous system components are modeled with analogue variables having piecewise constant derivatives. Discrete system aspects like control modes are modeled with the discrete variables and the states of a finite automaton. Our approach to specifying distributed hybrid systems is illustrated with the specification of a component of a production cell, a transport belt.BibTeX Entry
@inproceedings{FBT98, author = {Dirk Beyer and Heinrich Rust}, title = {Modeling a Production Cell as a Distributed Real-Time System with {C}ottbus {T}imed {A}utomata}, booktitle = {Tagungsband Formale Beschreibungstechniken f{\"u}r verteilte Systeme (FBT~1998, Cottbus, June 4-5)}, editor = {H.~K{\"o}nig and P.~Langend{\"o}rfer}, pages = {148-159}, year = {1998}, publisher = {Shaker Verlag, Aachen}, isbn = {}, pdf = {https://www.sosy-lab.org/research/pub/1998-FBT.Modeling_a_Production_Cell_as_a_Distributed_Real-Time_System_with.Cottbus_Timed_Automata.pdf}, abstract = {We build on work in designing modeling languages for hybrid systems in the development of CTA, the Cottbus Timed Automata. Our design features a facility to specify a hybrid system modulary and hierarchically, communication through CSP-like synchronizations but with special support to specify explicitly different roles which the interface signals and variables of a module play, and to instantiate recurring elements serveral times from a template. Continuous system components are modeled with analogue variables having piecewise constant derivatives. Discrete system aspects like control modes are modeled with the discrete variables and the states of a finite automaton. Our approach to specifying distributed hybrid systems is illustrated with the specification of a component of a production cell, a transport belt.}, keyword = {Formal Verification of Real-Time Systems}, annote = {The first published paper where we introduce the concepts of Cottbus Timed Automata, i.e. modules, interfaces and a modeling example.}, doinone = {DOI not available}, }Additional Infos
The first published paper where we introduce the concepts of Cottbus Timed Automata, i.e. modules, interfaces and a modeling example.
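As a purely hypothetical illustration of the modeling style mentioned in the abstract (an analogue variable with a piecewise constant derivative plus a discrete mode), the following Python sketch simulates a transport-belt position that grows at a constant rate while the motor runs and stops at a made-up sensor threshold; it shows neither the CTA language nor the production-cell model from the paper.

# Hypothetical sketch of the kind of dynamics described above: an analogue
# variable (belt position) with a piecewise constant derivative, advanced
# in small time steps, and a discrete mode (motor running/stopped) switched
# at a made-up sensor threshold.  This only illustrates the modeling style,
# not the CTA language or the production-cell model from the paper.

RATE_ON = 2.0          # position change per second while the motor runs
SENSOR_AT = 10.0       # belt position of the light barrier at the belt's end
DT = 0.1               # simulation time step in seconds

def simulate(duration):
    t, position, mode = 0.0, 0.0, "running"
    trace = []
    while t < duration:
        derivative = RATE_ON if mode == "running" else 0.0
        position += derivative * DT          # piecewise constant slope
        t += DT
        if mode == "running" and position >= SENSOR_AT:
            mode = "stopped"                 # item reached the light barrier
        trace.append((round(t, 1), round(position, 1), mode))
    return trace

for step in simulate(6.0)[::10]:             # print roughly one sample per second
    print(step)
# position grows linearly at 2.0/s and the belt stops once it reaches 10.0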
Disclaimer:
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by the authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.