CAV 2016 encourages authors of accepted papers whose research findings include software, mechanized proofs, data sets, test suites, models, or any other digital artifacts to submit these to an optional artifact evaluation. The evaluation is a service provided by the community to help authors supply more substantial supplements to their papers, so that future researchers can more effectively build on and compare with previous work. The Artifact Evaluation Committee (AEC) will read the paper and explore the artifact, giving the authors third-party feedback on how well the artifact supports the paper and how easy it will be for future researchers to use.
At least three members of the AEC will review each artifact with respect to the following criteria (where applicable): consistency with the paper, completeness, quality of documentation, and ease of reuse for further research.
The members of the AEC will return their feedback on these criteria, and a submitted artifact will accordingly be judged either to meet (or exceed) the expectations set by the corresponding paper accepted into CAV 2016 or to fall short of them. Artifacts that pass evaluation may display the artifact evaluation seal and will be highlighted at the conference.
The CAV 2016 Artifact Evaluation is in its second edition and follows a tradition established at many other conferences, including ESEC/FSE 2011, SAS 2013, PLDI 2014, ISSTA 2014, and ISSTA 2015.
Submission deadline: April 30th 2016 (Anywhere on Earth)
Author notification: May 20th 2016
Upon notification that their paper has been accepted into CAV 2016, authors will be invited to submit, via EasyChair, an abstract describing their artifact along with download instructions. The abstracts will be used only to facilitate the review process and will not themselves be evaluated. Authors should make an effort not to learn the identity of the reviewers, e.g., by not logging accesses to the artifact download location.
High-quality packaging of an artifact is as important as the quality of the artifact itself. Please keep in mind that committee members will have limited time to review each artifact. We therefore have some requirements for the artifact submission that will expedite the review process.
To ease the reproducibility of the experimental evaluation, we recommend using the provided virtual machine (VM). If for some reason you do not want to use the provided VM, we strongly encourage you to use VirtualBox. Please provide detailed instructions for using the artifact in the README file listed below, including platform requirements, installation instructions, external libraries and tools, and so on.
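As a rough illustration (the exact contents are up to the authors; this is only a suggested outline, not a requirement), a README might cover: the platform requirements (host operating system, memory, VirtualBox version), step-by-step instructions for importing and starting the VM or installing the tool, a small smoke-test example to confirm the setup works, and the exact commands needed to reproduce each experiment, table, and figure in the paper.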
A detailed HOWTO for packaging artifacts is available online.
Aws Albarghouthi (University of Wisconsin-Madison)
Alain Mebsout (University of Iowa)
Ankush Desai (University of California, Berkeley)
Christian Dehnert (RWTH Aachen University)
Heidy Khlaaf (University College London)
Julien Henry (University of Wisconsin-Madison)
Kuldeep Meel (Rice University)
Marcelo Sousa (Oxford University)
Maria Svorenova (Oxford University)
Markus Rabe (University of California, Berkeley)
Mukund Raghothaman (University of Pennsylvania)
Navid Yaghmazadeh (University of Texas at Austin)
Nicola Paoletti (Oxford University)
Nimit Singhania (University of Pennsylvania)
Swen Jacobs (Saarland University)
Tushar Sharma (University of Wisconsin-Madison)
Xin Chen (University of Colorado Boulder)
Xin Zhang (Georgia Tech)
Yi Li (University of Toronto)
Yu Feng (University of Texas at Austin)