12th International Conference on Tests & Proofs
Toulouse, June 27 - June 29, 2018
Part of the STAF Event 2018 (parallel: ECMFA, ICGT, ICMT, SEFM, TAP, TTC)

Invited Talk:
Dirk Beyer, Software and Computational Systems Lab, LMU Munich

Title: Cooperative Verification: The Art of Combining Verification Tools

Abstract: The first part of this talk presents a solution in which testing helps model checking by certifying a verification result. The research community has made enormous progress in recent years in developing algorithms for verifying software, as shown by verification competitions (SV-COMP). The ultimate goal, however, is to design Certifying Algorithms, which produce for a given input not only the output but also a witness. This makes it possible to validate that the output is a correct solution for the input problem. But existing witness validators are based on model-checking technology, which does not solve the problem of reducing the trust base. Testing can close this gap, and we elaborate on practical solutions based on program execution: one solution is to extend witness validation by generating a test vector from an error path that is reconstructed from the witness, and executing it on the original program (a sketch follows below).

The second part of the talk addresses the problem that both software testing and software model checking have weaknesses and sometimes fail to find bugs. An evaluation quickly reveals that neither technique is superior to the other, and that any practical solution needs to combine the two. The automatic assembly of Conditional Model Checkers by the recently proposed Reducer construction is a possible solution: a model checker starts the verification work and verifies all it can, and on termination it outputs a condition that describes the state space that has already been verified. A reducer then takes the condition and the program, and produces a residual program that represents the parts of the program that are not yet verified. Testing can be used to cover the state space that the model checker left untouched. Thanks to the reducer, we do not need "conditional" testers, but can use off-the-shelf testers (see the pipeline sketch below).
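As a rough illustration of the execution-based witness validation described in the first part of the abstract, the sketch below derives a test vector from a witness and replays it on the original program. The witness format, the helper parse_witness, and the failure oracle are simplifying assumptions for illustration, not the actual SV-COMP witness tooling:

```python
import subprocess

def parse_witness(witness_file: str) -> list[str]:
    # Simplifying assumption: the error path reconstructed from the
    # witness is given as one concrete input value per line. Real
    # validators reconstruct this path from a witness automaton.
    with open(witness_file) as f:
        return [line.strip() for line in f if line.strip()]

def validate_witness(program_binary: str, witness_file: str) -> bool:
    # Replay the test vector on the original, unmodified program; the
    # verification result is confirmed only if the execution actually
    # triggers the reported violation.
    test_vector = parse_witness(witness_file)
    run = subprocess.run([program_binary],
                         input="\n".join(test_vector),
                         text=True, capture_output=True)
    return "assertion failed" in run.stderr  # illustrative oracle
```

The point of executing the original program, rather than re-verifying it, is that the trust base shrinks to the machine that runs the test.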
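The reducer-based combination in the second part can be read as a three-stage pipeline. The sketch below wires the stages together; model-checker, reducer, and tester (and their flags) are hypothetical placeholder commands, not real executables:

```python
import subprocess

def cooperative_verify(program: str) -> None:
    # Stage 1: a conditional model checker verifies as much as it can
    # and, on termination, emits a condition describing the state
    # space it has already covered.
    subprocess.run(["model-checker", program,
                    "--emit-condition", "condition.txt"], check=True)

    # Stage 2: the reducer turns program + condition into a residual
    # program that contains only the not-yet-verified behaviors.
    subprocess.run(["reducer", program, "condition.txt",
                    "--output", "residual.c"], check=True)

    # Stage 3: the residual program is an ordinary program, so an
    # off-the-shelf tester can cover the remaining state space; no
    # "conditional" tester is needed.
    subprocess.run(["tester", "residual.c"], check=True)

cooperative_verify("example.c")
```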
Invited Tutorial:
Sébastien Bardin and Nikolai Kosmatov, CEA List

Title: Specify and Measure, Cover and Unmask: A Proof-Friendly View of Advanced Test Coverage Criteria

Abstract: Automatic test data generation (ATG) is a major topic in software engineering, and a large amount of research effort has been invested in automating white-box testing. While a wide range of different and sometimes heterogeneous code-coverage criteria have been proposed, testing techniques still lack a generic formalism to describe them all, and available test automation tools usually support only a small subset of them.

This tutorial takes participants on a journey into the world of white-box testing criteria and their automated support. We aim to give a convenient, proof-friendly view of coverage criteria and to bridge the gap between the criteria supported by state-of-the-art white-box ATG technologies, such as Dynamic Symbolic Execution, and the advanced coverage criteria found in the literature. The tutorial is articulated around labels, a recent specification mechanism for test objectives, and their effective support in automated testing tools. Labels are generic enough to specify many common testing criteria, and amenable to efficient automation, making it possible both to cover the corresponding test objectives and to measure the coverage level of a given test suite (an illustration follows the abstract). We propose several optimization techniques resulting in effective support for labels in ATG tools. We also show how a combination of static analysis techniques can be applied efficiently to detect (unmask) infeasible test objectives, which are responsible for wasted test generation effort and imprecise coverage measurement. We demonstrate the LTest toolset for test automation with efficient support for labels. Finally, we present a recent extension of labels, called HTOL (Hyperlabel Test Objectives Language), capable of encoding even the most advanced test coverage criteria (such as variants of MCDC, dataflow criteria, non-interference properties, etc.), including hyperproperties. This tutorial is based on a series of recent research and tool implementation efforts [ICST 2014, TAP 2014, ICST 2015, ICST 2017t, ICST 2017r, ICSE 2018].
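To make the label mechanism concrete: a label pairs a program location with a predicate, and a test covers the label when execution reaches that location with the predicate evaluating to true. The sketch below is an illustrative instrumentation in Python, not the LTest encoding (which targets C programs via annotations); the function and label names are invented for the example:

```python
covered: set[str] = set()

def label(name: str, predicate: bool) -> None:
    # A label (location, predicate) is covered when execution reaches
    # this point with the predicate evaluating to true.
    if predicate:
        covered.add(name)

def absolute(x: int) -> int:
    # Two labels at this location encode decision coverage of `x < 0`.
    label("l1", x < 0)
    label("l2", x >= 0)
    return -x if x < 0 else x

# Running a test suite both covers objectives and measures coverage:
# with input 5 alone, l1 stays uncovered; adding -3 covers both.
for test_input in (5, -3):
    absolute(test_input)
print(f"covered {len(covered)} of 2 labels: {sorted(covered)}")
```

The same instrumentation serves both uses the abstract mentions: a test generator can target uncovered labels as objectives, and a fixed test suite's coverage is simply the ratio of covered labels to all labels.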