The paper "Domain-Independent Interprocedural Program Analysis using Block-Abstraction Memoization" has been accepted at ESEC/FSE 2020!
We provide additional material:
Whenever a new software-verification technique is developed, additional effort is necessary to extend the new program analysis to an interprocedural one that supports recursive procedures. We would like to reduce that additional effort. Our contribution is an approach to extend an existing analysis in a modular and domain-independent way to an interprocedural analysis without large changes: we present interprocedural block-abstraction memoization (BAM), a technique that uses procedure summaries to analyze (recursive) procedures. For recursive programs, a fixed-point algorithm terminates the recursion once every procedure is sufficiently unrolled and summarized to cover the abstract state space.
Interprocedural BAM works for data-flow analysis and for model checking, and is independent of the underlying abstract domain. To demonstrate that our interprocedural analysis is generic and configurable, we defined and evaluated the approach for three completely different abstract domains: predicate abstraction, explicit values, and intervals. The interprocedural BAM-based analysis is implemented in the open-source verification framework CPAchecker. The evaluation shows that the overhead for modularity and domain-independence is not prohibitively large, and that the analysis remains competitive with other state-of-the-art software-verification tools.
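To illustrate the summarization idea, the following Python sketch applies block-abstraction memoization to a single recursive procedure over a toy interval domain. This is only a hedged illustration, not CPAchecker's actual implementation; all names (`Interval`, `analyze_f`, `summaries`) and the widening strategy are assumptions made for this example. A summary maps a (procedure, entry state) pair to an exit state; when a recursive call reaches an entry state whose summary is still being computed, the current (possibly bottom) summary is reused, and a fix-point loop with widening re-iterates until the summary is stable.

```python
# Toy sketch of block-abstraction memoization (BAM) for the recursive
# procedure  f(n) = 0 if n <= 0 else f(n-1) + 1  over an interval domain.
# Illustrative only; not CPAchecker code.
from __future__ import annotations
from dataclasses import dataclass

INF = float("inf")

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def join(self, other: "Interval | None") -> "Interval":
        if other is None:                     # join with bottom
            return self
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

    def widen(self, newer: "Interval") -> "Interval":
        # Standard interval widening: bounds that grow jump to infinity,
        # which forces termination of the fix-point iteration.
        lo = self.lo if newer.lo >= self.lo else -INF
        hi = self.hi if newer.hi <= self.hi else INF
        return Interval(lo, hi)

# Summary cache: (procedure, entry state) -> exit state (the block abstraction).
# None is the bottom element, marking a summary as "in progress".
summaries: dict[tuple[str, Interval], Interval | None] = {}

def analyze_f(entry: Interval) -> Interval | None:
    """Abstract return value of f for an entry interval of n."""
    key = ("f", entry)
    if key in summaries:              # memoized (or in-progress) summary: reuse
        return summaries[key]
    summaries[key] = None             # recursion guard: start with bottom
    while True:                       # fix-point iteration for the recursion
        result: Interval | None = None
        if entry.lo <= 0:                          # branch n <= 0: return 0
            result = Interval(0, 0).join(result)
        if entry.hi > 0:                           # branch n > 0: return f(n-1)+1
            rec_entry = Interval(max(entry.lo, 1) - 1, entry.hi - 1)
            rec = analyze_f(rec_entry)             # recursive call hits the cache
            if rec is not None:
                result = Interval(rec.lo + 1, rec.hi + 1).join(result)
        old = summaries[key]
        new = result if old is None else old.widen(result)
        if new == old:                # summary is stable: recursion terminates
            return new
        summaries[key] = new

print(analyze_f(Interval(0, INF)))    # → Interval(lo=0, hi=inf)
```

A bounded entry such as `Interval(5, 5)` is unrolled down to the base case and yields the exact result `Interval(5, 5)`, while the unbounded entry above triggers the fix-point loop with widening, matching the "sufficiently unrolled and summarized" behavior described in the abstract.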
We provide a reproduction artifact that contains all data, tables, the benchmark programs used, and configurations, as well as a ready-to-run version of CPAchecker.
LICENSE.txt: specifies the license
README.md: provides more details on benchmark execution
benchmarks/: benchmark definition(s) for different configurations of CPAchecker
results/: results of the execution of CPAchecker with different domains on recursive tasks
  *.xml.bz2: XML results from BenchExec
  *.logfiles.zip: output of CPAchecker
results-verified/: results of verification runs for recursive tasks, extracted from the results archive of SV-COMP 2020 (https://doi.org/10.5281/zenodo.3630205)
scripts/: utility scripts to execute benchmarks and generate tables
sv-benchmarks/: verification tasks extracted from the sv-benchmarks repository used in SV-COMP 2020 (https://zenodo.org/record/3633334)
tables/: generated HTML tables and a LaTeX file for the user
tools/: the tools BenchExec and CPAchecker, used to evaluate the results and generate tables
All benchmarks were executed based on the following components:
The README in the reproduction artifact contains further information: it describes the single-task execution of CPAchecker, the steps towards executing a whole benchmark (multiple tasks) with CPAchecker, and the scripts that can be used for an automated re-evaluation of the benchmarks that were used for the paper.