CAV 2016 Artifact Evaluation

CAV Artifact Evaluation

CAV 2016 encourages authors of accepted papers whose research findings include software, mechanized proofs, data sets, test suites, models, or any other digital artifacts to submit these to an optional artifact evaluation. The evaluation is a service provided by the community to help authors produce more substantial supplements to their papers, so that future researchers can more effectively build on and compare with previous work. The Artifact Evaluation Committee (AEC) will read the paper and explore the artifact to give the authors third-party feedback on how well the artifact supports the paper and how easy it is for future researchers to use.

At least three members of the AEC will review each artifact, evaluating (where applicable) how well it supports the paper and how easily future researchers can build on and use it.

The members of the AEC will return their feedback, and on that basis each submitted artifact will be judged either to meet (or exceed) the expectations set by the corresponding paper accepted into CAV 2016, or to fall short of them. Successfully evaluated artifacts may use the artifact evaluation seal and will be highlighted at the conference.

The CAV 2016 Artifact Evaluation is in its second edition and follows the tradition established by many other conferences, including ESEC/FSE 2011, SAS 2013, PLDI 2014, ISSTA 2014, and ISSTA 2015.

Important Dates

Submission deadline: April 30th 2016 (Anywhere on Earth)

Author notification: May 20th 2016

Submission Instructions

Upon notification of acceptance of their papers into CAV 2016, authors will be invited to submit via EasyChair an abstract describing their artifact and download instructions. The abstracts will only be used to facilitate the review process and will not be evaluated themselves. The authors should make an effort not to learn the identity of the reviewers, e.g., through logging.

EasyChair submission page

Packaging Guidelines

High-quality packaging of an artifact is as important as the quality of the artifact itself. Please keep in mind that the committee members will have limited time to review each artifact. We therefore have a few requirements for artifact submissions that will expedite the review process.

In order to ease the reproducibility of the experimental evaluation, we recommend using the provided virtual machine (VM). If for some reason you do not want to use the provided VM, we strongly encourage you to package your artifact in your own VirtualBox VM. Please provide detailed instructions for using the artifact in the README file described below, including platform requirements, installation instructions, external libraries and tools, etc.

Resources

This is a great HOWTO for packaging artifacts.

Steps for packaging and submission:

  1. Download the virtual machine from CAV2016_AE_VM (mirror). Username: cav, password: ae.
  2. Include your tool in the VM: in the home directory, create a folder containing the following items (a minimal packaging sketch follows these steps):
     a) your accepted paper;
     b) a detailed README file describing how to run your tool;
     c) a directory containing the artifact (benchmarks + tool or proof scripts).
  3. If it is not necessary to modify the VM in order to run the artifact, place a zipped copy of the directory (from step 2) on a website. If it is necessary to modify the VM, place the modified VM, with the zipped directory (from step 2) in its home directory, on a website instead.
  4. Submit to EasyChair and add the link to your VM (or zipped directory) in the abstract field. Please use the same title as for the CAV submission.
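
As a rough illustration of steps 2 and 3, here is a minimal Python sketch that lays out such a folder and zips it. Every name in it (my-cav16-artifact, paper.pdf, README.md) is a hypothetical placeholder, not a required convention; any equivalent layout that follows the steps above is fine.

```python
import shutil
from pathlib import Path

# All names below are hypothetical placeholders for illustration;
# substitute your own paper and tool names.
artifact_root = Path.home() / "my-cav16-artifact"

# Step 2: a folder in the home directory holding the paper, a README,
# and the artifact itself (benchmarks + tool or proof scripts).
(artifact_root / "artifact").mkdir(parents=True, exist_ok=True)
(artifact_root / "paper.pdf").touch()  # 2.a) placeholder for your accepted paper
(artifact_root / "README.md").write_text(
    "Platform requirements, installation steps, and usage instructions go here.\n"
)  # 2.b) detailed README
# 2.c) copy benchmarks, binaries, or proof scripts into artifact_root / "artifact"

# Step 3: zip the directory so it can be placed on a website
# (or left in the VM's home directory if the VM had to be modified).
shutil.make_archive(
    str(artifact_root), "zip",
    root_dir=artifact_root.parent, base_dir=artifact_root.name,
)
```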

Organizers

Aws Albarghouthi (University of Wisconsin-Madison)

Artifact Evaluation Committee

Alain Mebsout (University of Iowa)

Ankush Desai (University of California, Berkeley)

Christian Dehnert (RWTH Aachen University)

Heidy Khlaaf (University College London)

Julien Henry (University of Wisconsin-Madison)

Kuldeep Meel (Rice University)

Marcelo Sousa (Oxford University)

Maria Svorenova (Oxford University)

Markus Rabe (University of California, Berkeley)

Mukund Raghothaman (University of Pennsylvania)

Navid Yaghmazadeh (University of Texas at Austin)

Nicola Paoletti (Oxford University)

Nimit Singhania (University of Pennsylvania)

Swen Jacobs (Saarland University)

Tushar Sharma (University of Wisconsin-Madison)

Xin Chen (University of Colorado Boulder)

Xin Zhang (Georgia Tech)

Yi Li (University of Toronto)

Yu Feng (University of Texas at Austin)