Artifact Evaluation

For the second year, MODELS will run a separate evaluation process to assess the quality of the artifacts supporting the work presented in accepted papers. With the artifact evaluation process, we want to acknowledge the considerable effort required to produce high-quality artifacts, to foster a culture of experimental reproducibility, and to provide a peer review and archiving process for artifacts similar to that used for research papers.

The goal of artifact archiving is to ensure that the artifacts stay available for a long time, that they can be located easily, and that they can be reused by other researchers. Additionally, archiving makes it possible to designate exactly which version of the artifact was used to produce the research results.

We aim to assess the artifacts themselves and to help improve them, not to assess the quality of the research linked to the artifact. The process assumes that the quality of the research has already been assessed and approved for MODELS’18 by the program committees. Thus, the main goal of the review process is constructive: to improve the submitted artifacts, not to reject or filter them. A rejection may happen if the reviewers assess that the artifact cannot be improved to sufficient quality within the given time frame, that the artifact is not consistent with the paper’s results, or that the artifact is not of sufficient relevance to the scope of the main research paper or to the MODELS community at large.

In a nutshell, a good artifact is:

  1. Consistent with the paper
  2. As complete as possible
  3. Well-documented
  4. Easy to (re)use

Submission to the artifact evaluation committee is optional and the result of the artifact evaluation process will not influence the final decision on accepted papers.

Note: If you think your artifact would be a good candidate for a tool demonstration at MODELS, please also consider submitting it to the Tools and Demonstrations Track!

Submission

Submission and review of artifacts will happen on GitHub. Authors of accepted papers will receive detailed instructions by e-mail.

Benefits

Authors of papers with accepted artifacts will be invited to include an official ACM Artifact Evaluation badge on the first page of the camera-ready version of their paper. This badge explicitly communicates to the paper’s readers that the artifact has undergone a dedicated evaluation process.

Important Dates

All deadlines are Anywhere on Earth (AoE, UTC-12).

  • Thu 5 July 2018: Paper Notification - Foundation/Practice and Innovation Tracks; call for artifacts
  • Tue 11 July 2018: Artifact submission (via GitHub, see Submission)
  • 11–17 July 2018: Reviewing period
  • Thu 19 July 2018: Artifact notification
  • Sat 21 July 2018: Camera-ready version due, including AEC badges

Artifact Evaluation Chairs

  • Vadim Zaytsev, Raincode Labs, Belgium
  • Thomas Degueule, Centrum Wiskunde & Informatica, the Netherlands

Artifact Evaluation Committee

  • Mojtaba Bagherzadeh, Queen’s University, Canada
  • Francesco Basciani, University of L’Aquila, Italy
  • Juri Di Rocco, University of L’Aquila, Italy
  • Michael Herzberg, University of Sheffield, United Kingdom
  • Théo Le Calvar, University of Angers, France
  • Manuel Leduc, Inria, France
  • Hernán Ponce de León, fortiss GmbH, Germany
  • Nicolas Sannier, University of Luxembourg, Luxembourg
  • Stefan Sauer, Paderborn University, Germany
  • Nils Weidmann, Paderborn University, Germany
  • Thanos Zolotas, University of York, United Kingdom

Further Information

Questions should be addressed to the chairs of the Artifact Evaluation Committee at the following e-mail address: models18aec@gmail.com
