
Publications about Competition on Software Testing (Test-Comp)

Articles in journals or book chapters

  1. Dirk Beyer and Thomas Lemberger. Six Years Later: Testing vs. Model Checking. International Journal on Software Tools for Technology Transfer (STTT), 2024. Keyword(s): Benchmarking, Competition on Software Verification (SV-COMP), Competition on Software Testing (Test-Comp), Software Model Checking, Software Testing. Funding: DFG-COOP. PDF
    Artifact(s)
    Abstract
    Six years ago, we performed the first large-scale comparison of automated test generators and software model checkers with respect to bug-finding capabilities on a benchmark set with 5 693 C programs. Since then, the International Competition on Software Testing (Test-Comp) has established standardized formats and community-agreed rules for the experimental comparison of test generators. With this new context, it is time to revisit our initial question: Model checkers or test generators—which tools are more effective in finding bugs in software? To answer this, we perform a comparative analysis on the tools and existing data published by two competitions, the International Competition on Software Verification (SV-COMP) and Test-Comp. The results provide two insights: (1) Almost all test generators that participate in Test-Comp use hybrid approaches that include formal methods, and (2) while the considered model checkers are still highly competitive, they are now outperformed by the bug-finding capabilities of the considered test generators.
    BibTeX Entry
    @article{TestStudy-STTT,
      author   = {Dirk Beyer and Thomas Lemberger},
      title    = {Six Years Later: Testing vs. Model Checking},
      journal  = {International Journal on Software Tools for Technology Transfer (STTT)},
      year     = {2024},
      pdf      = {https://www.sosy-lab.org/research/pub/2024-STTT.Six_Years_Later_Testing_vs_Model_Checking.pdf},
      abstract = {Six years ago, we performed the first large-scale comparison of automated test generators and software model checkers with respect to bug-finding capabilities on a benchmark set with 5 693 C programs. Since then, the International Competition on Software Testing (Test-Comp) has established standardized formats and community-agreed rules for the experimental comparison of test generators. With this new context, it is time to revisit our initial question: Model checkers or test generators—which tools are more effective in finding bugs in software? To answer this, we perform a comparative analysis on the tools and existing data published by two competitions, the International Competition on Software Verification (SV-COMP) and Test-Comp. The results provide two insights: (1) Almost all test generators that participate in Test-Comp use hybrid approaches that include formal methods, and (2) while the considered model checkers are still highly competitive, they are now outperformed by the bug-finding capabilities of the considered test generators.},
      keyword  = {Benchmarking, Competition on Software Verification (SV-COMP), Competition on Software Testing (Test-Comp), Software Model Checking, Software Testing},
      artifact = {10.5281/zenodo.10232648},
      funding  = {DFG-COOP},
    }
  2. Dirk Beyer. First International Competition on Software Testing. International Journal on Software Tools for Technology Transfer (STTT), 23(6):833-846, 2021. doi:10.1007/s10009-021-00613-3. Keyword(s): Competition on Software Testing (Test-Comp), Competition on Software Testing (Test-Comp Report), Software Testing. Funding: DFG-COOP. Publisher's Version PDF Supplement
    BibTeX Entry
    @article{TestComp19-STTT,
      author  = {Dirk Beyer},
      title   = {First International Competition on Software Testing},
      journal = {International Journal on Software Tools for Technology Transfer (STTT)},
      volume  = {23},
      number  = {6},
      pages   = {833--846},
      year    = {2021},
      doi     = {10.1007/s10009-021-00613-3},
      sha256  = {cd82a853fbbf65de7f95a9e7de4f36118bb35fb516db87421a0aa38ccc863031},
      url     = {https://www.sosy-lab.org/research/pub/2021-STTT.First_International_Competition_on_Software_Testing.pdf},
      keyword = {Competition on Software Testing (Test-Comp), Competition on Software Testing (Test-Comp Report), Software Testing},
      funding = {DFG-COOP},
      issn    = {1433-2787},
    }

Articles in conference or workshop proceedings

  1. Dirk Beyer, Po-Chun Chien, and Marek Jankola. BenchCloud: A Platform for Scalable Performance Benchmarking. In Proc. ASE, pages 2386-2389, 2024. ACM. doi:10.1145/3691620.3695358. Keyword(s): Benchmarking, Competition on Software Verification (SV-COMP), Competition on Software Testing (Test-Comp). Funding: DFG-CONVEY, DFG-COOP. Publisher's Version PDF Presentation Video Supplement
    Artifact(s)
    Abstract
    Performance evaluation is a crucial method for assessing automated-reasoning tools. Evaluating automated tools requires rigorous benchmarking to accurately measure resource consumption, including time and memory, which are essential for understanding the tools' capabilities. BenchExec, a widely used benchmarking framework, reliably measures resource usage for tools executed locally on a single node. This paper describes BenchCloud, a solution for elastic and scalable job distribution across hundreds of nodes, enabling large-scale experiments on distributed and heterogeneous computing environments. BenchCloud seamlessly integrates with BenchExec, allowing BenchExec to delegate the actual execution to BenchCloud. The system has been employed in several prominent international competitions in automated reasoning, including SMT-COMP, SV-COMP, and Test-Comp, underscoring its importance in rigorous tool evaluation across various research domains. It helps to ensure both internal and external validity of the experimental results. This paper presents an overview of BenchCloud's architecture and highlights its primary use cases in facilitating scalable benchmarking.
    Demonstration video: https://youtu.be/aBfQytqPm0U
    Running system: https://benchcloud.sosy-lab.org/
    BibTeX Entry
    @inproceedings{ASE24a,
      author       = {Dirk Beyer and Po-Chun Chien and Marek Jankola},
      title        = {{BenchCloud}: {A} Platform for Scalable Performance Benchmarking},
      booktitle    = {Proc.\ ASE},
      pages        = {2386--2389},
      year         = {2024},
      publisher    = {ACM},
      doi          = {10.1145/3691620.3695358},
      url          = {https://benchcloud.sosy-lab.org/},
      pdf          = {https://www.sosy-lab.org/research/pub/2024-ASE24.BenchCloud_A_Platform_for_Scalable_Performance_Benchmarking.pdf},
      presentation = {https://www.sosy-lab.org/research/prs/2024-10-30_ASE_BenchCloud_A_Platform_for_Scalable_Performance_Benchmarking_Po-Chun.pdf},
      abstract     = {Performance evaluation is a crucial method for assessing automated-reasoning tools. Evaluating automated tools requires rigorous benchmarking to accurately measure resource consumption, including time and memory, which are essential for understanding the tools' capabilities. BenchExec, a widely used benchmarking framework, reliably measures resource usage for tools executed locally on a single node. This paper describes BenchCloud, a solution for elastic and scalable job distribution across hundreds of nodes, enabling large-scale experiments on distributed and heterogeneous computing environments. BenchCloud seamlessly integrates with BenchExec, allowing BenchExec to delegate the actual execution to BenchCloud. The system has been employed in several prominent international competitions in automated reasoning, including SMT-COMP, SV-COMP, and Test-Comp, underscoring its importance in rigorous tool evaluation across various research domains. It helps to ensure both internal and external validity of the experimental results. This paper presents an overview of BenchCloud's architecture and highlights its primary use cases in facilitating scalable benchmarking. Demonstration video: https://youtu.be/aBfQytqPm0U Running system: https://benchcloud.sosy-lab.org/},
      keyword      = {Benchmarking, Competition on Software Verification (SV-COMP), Competition on Software Testing (Test-Comp)},
      artifact     = {10.5281/zenodo.13742756},
      funding      = {DFG-CONVEY, DFG-COOP},
      video        = {https://youtu.be/aBfQytqPm0U},
    }
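    The per-run measurement that BenchCloud distributes across nodes is performed locally by BenchExec. As a rough illustration of that local building block, here is a minimal sketch using BenchExec's documented Python API to execute and measure a single run. The command, limits, and file name are placeholder values, the host needs the cgroups setup described in the BenchExec documentation, and the exact keyword arguments may differ between BenchExec versions.

        # Minimal sketch (assumptions as noted above): measure one command
        # with BenchExec's RunExecutor; requires Linux with cgroups access.
        from benchexec.runexecutor import RunExecutor

        executor = RunExecutor()
        result = executor.execute_run(
            args=["/bin/echo", "hello"],      # placeholder command to benchmark
            output_filename="run.log",        # tool output is redirected here
            walltimelimit=60,                 # wall-time limit in seconds
            memlimit=4 * 1000 * 1000 * 1000,  # memory limit in bytes (4 GB)
        )
        # The result is a dictionary of measurements, e.g., wall time and
        # CPU time in seconds and peak memory usage in bytes.
        print({key: result.get(key) for key in ("walltime", "cputime", "memory")})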
  2. Dirk Beyer. Software Testing: 5th Comparative Evaluation: Test-Comp 2023. In L. Lambers and S. Uchitel, editors, Proceedings of the 26th International Conference on Fundamental Approaches to Software Engineering (FASE 2023, Paris, France, April 22-27), LNCS 13991, pages 309-323, 2023. Springer. doi:10.1007/978-3-031-30826-0_17. Keyword(s): Competition on Software Testing (Test-Comp), Competition on Software Testing (Test-Comp Report), Software Testing. Funding: DFG-COOP. Publisher's Version PDF Supplement
    BibTeX Entry
    @inproceedings{FASE23,
      author    = {Dirk Beyer},
      title     = {Software Testing: 5th Comparative Evaluation: {Test-Comp 2023}},
      booktitle = {Proceedings of the 26th International Conference on Fundamental Approaches to Software Engineering (FASE~2023, Paris, France, April 22-27)},
      editor    = {L. Lambers and S. Uchitel},
      pages     = {309--323},
      year      = {2023},
      series    = {LNCS~13991},
      publisher = {Springer},
      doi       = {10.1007/978-3-031-30826-0_17},
      sha256    = {7110c26bf3c9311f84346a108a59318687bdadde4879f83d047f1a0fc546b630},
      url       = {https://test-comp.sosy-lab.org/2023/},
      keyword   = {Competition on Software Testing (Test-Comp), Competition on Software Testing (Test-Comp Report), Software Testing},
      _pdf      = {https://www.sosy-lab.org/research/pub/2023-FASE.Software_Testing_5th_Comparative_Evaluation_Test-Comp_2023.pdf},
      funding   = {DFG-COOP},
    }
  3. Dirk Beyer. Advances in Automatic Software Testing: Test-Comp 2022. In E. B. Johnsen and M. Wimmer, editors, Proceedings of the 25th International Conference on Fundamental Approaches to Software Engineering (FASE 2022, Munich, Germany, April 2-7), LNCS 13241, pages 321-335, 2022. Springer. doi:10.1007/978-3-030-99429-7_18. Keyword(s): Competition on Software Testing (Test-Comp), Competition on Software Testing (Test-Comp Report), Software Testing. Funding: DFG-COOP. Publisher's Version PDF Supplement
    BibTeX Entry
    @inproceedings{FASE22b,
      author    = {Dirk Beyer},
      title     = {Advances in Automatic Software Testing: {Test-Comp 2022}},
      booktitle = {Proceedings of the 25th International Conference on Fundamental Approaches to Software Engineering (FASE~2022, Munich, Germany, April 2-7)},
      editor    = {E.~B.~Johnsen and M.~Wimmer},
      pages     = {321--335},
      year      = {2022},
      series    = {LNCS~13241},
      publisher = {Springer},
      doi       = {10.1007/978-3-030-99429-7_18},
      sha256    = {3f921c8f232a5c970f678889de8c402313049522a5dfa69ca68cd01d9dd9fce3},
      url       = {https://test-comp.sosy-lab.org/2022/},
      keyword   = {Competition on Software Testing (Test-Comp), Competition on Software Testing (Test-Comp Report), Software Testing},
      _pdf      = {https://www.sosy-lab.org/research/pub/2022-FASE.Advances_in_Automatic_Software_Testing_Test-Comp_2022.pdf},
      funding   = {DFG-COOP},
    }
  4. Dirk Beyer. Status Report on Software Testing: Test-Comp 2021. In E. Guerra and M. Stoelinga, editors, Proceedings of the 24th International Conference on Fundamental Approaches to Software Engineering (FASE 2021, Luxembourg, Luxembourg, March 27 - April 1), LNCS 12649, pages 341-357, 2021. Springer. doi:10.1007/978-3-030-71500-7_17. Keyword(s): Competition on Software Testing (Test-Comp), Competition on Software Testing (Test-Comp Report), Software Testing. Funding: DFG-COOP. Publisher's Version PDF Supplement
    Abstract
    This report describes Test-Comp 2021, the 3rd edition of the Competition on Software Testing. The competition is a series of annual comparative evaluations of fully automatic software test generators for C programs. The competition has a strong focus on reproducibility of its results and its main goal is to provide an overview of the current state of the art in the area of automatic test-generation. The competition was based on 3 173 test-generation tasks for C programs. Each test-generation task consisted of a program and a test specification (error coverage, branch coverage). Test-Comp 2021 had 11 participating test generators from 6 countries.
    BibTeX Entry
    @inproceedings{FASE21,
      author    = {Dirk Beyer},
      title     = {Status Report on Software Testing: {Test-Comp 2021}},
      booktitle = {Proceedings of the 24th International Conference on Fundamental Approaches to Software Engineering (FASE~2021, Luxembourg, Luxembourg, March 27 - April 1)},
      editor    = {E.~Guerra and M.~Stoelinga},
      pages     = {341--357},
      year      = {2021},
      series    = {LNCS~12649},
      publisher = {Springer},
      isbn      = {978-3-030-71500-7},
      doi       = {10.1007/978-3-030-71500-7_17},
      sha256    = {113b44c5be9f6d773ebd1a5cad91e8dc66f06d7af0b8c648c9dcea8d6bbc7e3d},
      url       = {https://test-comp.sosy-lab.org/2021/},
      abstract  = {This report describes Test-Comp 2021, the 3rd edition of the Competition on Software Testing. The competition is a series of annual comparative evaluations of fully automatic software test generators for C programs. The competition has a strong focus on reproducibility of its results and its main goal is to provide an overview of the current state of the art in the area of automatic test-generation. The competition was based on 3 173 test-generation tasks for C programs. Each test-generation task consisted of a program and a test specification (error coverage, branch coverage). Test-Comp 2021 had 11 participating test generators from 6 countries.},
      keyword   = {Competition on Software Testing (Test-Comp), Competition on Software Testing (Test-Comp Report), Software Testing},
      funding   = {DFG-COOP},
    }
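    For illustration of the test specifications mentioned in the abstract above: Test-Comp expresses them as short coverage properties in an FQL-based syntax. The two properties below are reproduced from the Test-Comp rules; see the competition website for the authoritative definitions.

        Error coverage (cover a call to the function reach_error):
          COVER( init(main()), FQL(COVER EDGES(@CALL(reach_error))) )

        Branch coverage (cover all branches of the program):
          COVER( init(main()), FQL(COVER EDGES(@DECISIONEDGE)) )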
  5. Dirk Beyer. Second Competition on Software Testing: Test-Comp 2020. In Proceedings of the 23rd International Conference on Fundamental Approaches to Software Engineering (FASE 2020, Dublin, Ireland, April 25-30), LNCS 12076, pages 505-519, 2020. Springer. doi:10.1007/978-3-030-45234-6_25. Keyword(s): Competition on Software Testing (Test-Comp), Competition on Software Testing (Test-Comp Report), Software Testing. Publisher's Version PDF Supplement
    Artifact(s)
    BibTeX Entry
    @inproceedings{FASE20,
      author    = {Dirk Beyer},
      title     = {Second Competition on Software Testing: {Test-Comp 2020}},
      booktitle = {Proceedings of the 23rd International Conference on Fundamental Approaches to Software Engineering (FASE~2020, Dublin, Ireland, April 25-30)},
      pages     = {505--519},
      year      = {2020},
      series    = {LNCS~12076},
      publisher = {Springer},
      doi       = {10.1007/978-3-030-45234-6_25},
      sha256    = {296b4caf885ae029e388c2ef8fd032f1ab55c07d5e8ea1064f2e50c08f5d6919},
      url       = {https://test-comp.sosy-lab.org/2020/},
      keyword   = {Competition on Software Testing (Test-Comp), Competition on Software Testing (Test-Comp Report), Software Testing},
      artifact1 = {10.5281/zenodo.3678250},
      artifact2 = {10.5281/zenodo.3678264},
      artifact3 = {10.5281/zenodo.3678275},
      artifact4 = {10.5281/zenodo.3574420},
    }
  6. Dirk Beyer. International Competition on Software Testing (Test-Comp). In Proceedings of the 25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2019, Prague, Czech Republic, April 6-11), part 3, LNCS 11429, pages 167-175, 2019. Springer. doi:10.1007/978-3-030-17502-3_11. Keyword(s): Competition on Software Testing (Test-Comp), Competition on Software Testing (Test-Comp Report), Software Testing. Publisher's Version PDF Supplement
    BibTeX Entry
    @inproceedings{TACAS19c,
      author    = {Dirk Beyer},
      title     = {International Competition on Software Testing (Test-Comp)},
      booktitle = {Proceedings of the 25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2019, Prague, Czech Republic, April 6-11), part 3},
      pages     = {167--175},
      year      = {2019},
      series    = {LNCS~11429},
      publisher = {Springer},
      doi       = {10.1007/978-3-030-17502-3_11},
      sha256    = {80ba1d656e40b44c40e756010ccd32db5aad71820cd746b264f70244477fc737},
      url       = {https://test-comp.sosy-lab.org/2019/},
      keyword   = {Competition on Software Testing (Test-Comp), Competition on Software Testing (Test-Comp Report), Software Testing},
    }

Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.

Last modified: Wed Dec 04 15:42:47 2024 UTC