May 23-25, 2018

Rome, Italy

SIGSIM PADS 2018 Computational Results Replication and Artifact Evaluation

SIGSIM PADS joins the ACM Reproducibility Initiative. Authors submitting contributions to the conference can decide to opt in to the initiative.

ACM SIGSIM Conference on Principles of Advanced Discrete Simulation (PADS)


A paper consists of a constellation of artifacts that extend beyond the document itself: software, mechanized proofs, models, test suites, benchmarks, and so on. In some cases, the quality of these artifacts is as important as that of the document itself, yet our conferences offer no formal means to submit and evaluate anything but the paper. To address this, PADS will be joining the group of ACM conferences that have established an optional reviewing process to evaluate artifacts and replicate computational results.


Goals

The goal of the replicated computational results and artifact evaluation process is to increase the reusability of the research presented and to reward authors who invest the effort to create useful documentation and artifacts beyond the paper. The software tools that accompany a paper sometimes take years to build; authors who make this investment should be rewarded for setting high standards and creating systems that others in the community can build on.

The review process during which computational results are replicated and artifacts evaluated is optional. Authors opt in when submitting a contribution, but the evaluation itself takes place only after the paper has been accepted. This process therefore has no effect on the standard paper evaluation process.


Criteria

The evaluation criteria are ultimately simple. A paper sets up certain expectations of its artifacts based on its content. The Reproducibility Committee (RC) will read the paper and judge how well the artifacts match these expectations. Thus the RC’s decision will be that the artifacts do (or do not) “conform to the expectations set by the paper”. Ultimately, we expect artifacts to be:

      • permanently available for retrieval;
      • functional;
      • reusable;
      • associated with results that can be replicated.

Process

At the time of paper submission, the authors can check a specific checkbox on the submission page to opt in to the initiative. To maintain a wall of separation between paper review and the artifacts, authors will be contacted by a nominated member of the RC only after their papers have been accepted. Of course, they can (and should!) prepare their artifacts well in advance. The Reproducibility Initiative has no impact on the acceptance of papers: authors who opt in will not have higher chances of having their work accepted, nor will a failure to replicate the results affect a paper's acceptance. The initiative is intended to work with the community to raise the quality of published research results.

After artifact submission, the RC will download and install the artifact (where relevant) and evaluate it. Since small glitches with installation and use are to be expected, RC members may communicate with authors to help resolve them. The RC will complete its evaluation and notify authors of the outcome.

The RC’s report will include a discussion of the artifact evaluation process. Papers with artifacts that “meet expectations” will be stamped with one or more of the ACM badges described below, depending on which expectations the artifacts meet. Badges are applied in accordance with the ACM Artifact Review and Badging policy and with the replication and artifact evaluation strategies established by ACM Transactions on Modeling and Computer Simulation (TOMACS). The badge descriptions given below are reproduced from the ACM website.


ACM Badge Descriptions

Artifacts Evaluated – Functional

The artifacts associated with the research are found to be documented, consistent, complete, exercisable, and include appropriate evidence of verification and validation.

  • Documented: At minimum, an inventory of artifacts is included, and sufficient description provided to enable the artifacts to be exercised.
  • Consistent: The artifacts are relevant to the associated paper, and contribute in some inherent way to the generation of its main results.
  • Complete: To the extent possible, all components relevant to the paper in question are included. (Proprietary artifacts need not be included. If they are required to exercise the package then this should be documented, along with instructions on how to obtain them. Proxies for proprietary data should be included so as to demonstrate the analysis.)
  • Exercisable: Included scripts and/or software used to generate the results in the associated paper can be successfully executed, and included data can be accessed and appropriately manipulated.

Artifacts Evaluated – Reusable

The artifacts associated with the paper are of a quality that significantly exceeds minimal functionality. That is, they have all the qualities of the Artifacts Evaluated – Functional level, but, in addition, they are very carefully documented and well-structured to the extent that reuse and repurposing is facilitated.

Artifacts Available

Author-created artifacts relevant to this paper have been placed on a publicly accessible archival repository. A DOI or link to this repository along with a unique identifier for the object is provided.
Notes

We do not mandate the use of specific repositories. Publisher repositories (such as the ACM Digital Library), institutional repositories, or open commercial repositories (e.g., figshare or Dryad) are acceptable. In all cases, repositories used to archive data should have a declared plan to enable permanent accessibility. Personal web pages are not acceptable for this purpose.


Results Replicated

This badge is applied to papers in which the main results of the paper have been successfully obtained by a person or team other than the authors.

In particular, the main results of the paper have been obtained in a subsequent study by a person or team other than the authors, using, in part, artifacts provided by the authors.


Results Reproduced

The main results of the paper have been independently obtained in a subsequent study by a person or team other than the authors, without the use of author-supplied artifacts.

NOTE: This badge will not be used by SIGSIM PADS 2018. It is included for the sake of completeness.

 

Artifact Details

To avoid excluding some papers, the RC will try to accept any artifact that authors wish to submit. These can be models, software, mechanized proofs, test suites, data sets, and so on. Obviously, the better an artifact is packaged, the more likely it is that the RC can actually work with it.

Submission of an artifact does not grant tacit permission to make its content public. RC members will be instructed that they may not publicize any part of your artifact during or after evaluation, nor retain any part of it after evaluation. You are thus free to include models, data files, proprietary binaries, etc. in your artifact. We do, however, strongly encourage you to anonymize any data files that you submit (a sketch of one way to do this follows).
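As an illustration of the kind of anonymization we have in mind, here is a minimal Python sketch that replaces identifying columns of a CSV file with salted, irreversible digests. The file name and column names are hypothetical placeholders; adapt them to your own data.

    # Pseudonymize identifying columns of a CSV data file before packaging it.
    import csv
    import hashlib
    import secrets

    SALT = secrets.token_hex(16)          # generate once, then discard; do not ship it
    ID_COLUMNS = {"user_id", "hostname"}  # hypothetical identifying columns

    def pseudonymize(value: str) -> str:
        # Replace an identifying value with a salted, irreversible digest.
        return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]

    with open("measurements.csv", newline="") as src, \
         open("measurements_anon.csv", "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            for col in ID_COLUMNS & set(row):
                row[col] = pseudonymize(row[col])
            writer.writerow(row)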

 

Submission Guidelines for the Artifacts

Please note that the SIGSIM PADS Replicated Computational Results (RCR) initiative uses an open reviewing process, in which reviewers and authors are known to each other. RCR reviewing starts after the research paper has been accepted. We expect the artifacts to be submitted within one week of the acceptance letter; the letter will contain further information on how to submit the material. To support an efficient process, authors should carefully follow the guidelines below when preparing the supplementary material:

  • Indicate in the README file which artifacts are associated with the paper, e.g., software, data, and simulation models (and indicate their licenses). Provide detailed information about the environment required to execute and evaluate the artifact(s).
  • For software, also give instructions on how to install compilers/interpreters and how to resolve dependencies, if required.
  • Provide a script for every figure or table of the paper that contains results to be replicated (see the sketch after this list).
  • For performance experiments, provide additional scripts that record the execution time in a text file, and describe your execution platform in detail.
  • Submit the above information, together with the scripts, the artifacts, and any other information you find helpful for replicating the computational results and evaluating the artifacts, as supplementary material.
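For concreteness, a per-figure replication script might look like the following Python sketch. The toy M/M/1 queue model, the file names, and all parameters are hypothetical stand-ins for your own model and experiments, not a required interface.

    # Sketch of a per-figure replication script, e.g. replicate_figure2.py.
    import random
    import time

    import matplotlib.pyplot as plt

    def mm1_mean_wait(arrival_rate, service_rate, n_customers, seed):
        # Simulate an M/M/1 queue; return the mean waiting time per customer.
        rng = random.Random(seed)
        clock = server_free_at = total_wait = 0.0
        for _ in range(n_customers):
            clock += rng.expovariate(arrival_rate)  # next arrival
            start = max(clock, server_free_at)      # wait if the server is busy
            total_wait += start - clock
            server_free_at = start + rng.expovariate(service_rate)
        return total_wait / n_customers

    start = time.perf_counter()
    loads = [0.1, 0.3, 0.5, 0.7, 0.9]
    waits = [mm1_mean_wait(rho, 1.0, 50_000, seed=42) for rho in loads]
    elapsed = time.perf_counter() - start

    # Record the wall-clock time in a text file, as requested for performance runs.
    with open("figure2_timing.txt", "w") as f:
        f.write(f"wall-clock seconds: {elapsed:.2f}\n")

    # Regenerate the figure exactly as it appears in the paper.
    plt.plot(loads, waits, marker="o")
    plt.xlabel("offered load")
    plt.ylabel("mean waiting time")
    plt.savefig("figure2.pdf")

A fixed random seed, as above, makes stochastic results replicable run-for-run; where exact replication is impossible, document the expected statistical variation instead.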

Reproducibility Committee

  • Adelinde M. Uhrmacher (Chair)
  • Francesco Quaglia (Co-Chair)
  • Alessandro Pellegrini
  • Nura Abubakar
  • Nael Abu-Ghazaleh
  • Anastasia Anagnostou
  • Philipp Andelfinger
  • Lachlan Birdsey
  • Sounak Gupta
  • Amine Hamri
  • Mauro Ianni
  • Yun Kim
  • Till Köster
  • Elvis Liu
  • Romolo Marotta
  • Chukwudi Nwogu
  • Jinsoo Park
  • Andreas Ruscheinski
  • Claudia Szabo
  • Anthony Ventresque
  • Barry Williams
  • Aznam Yacoub

The Reproducibility Committee (RC) consists of mainly graduate students, postdocs, and researchers, identified with the help of the PADS Steering Committee and Program Committee.

Qualified graduate students are often in a much better position than many researchers to handle the diversity of systems expectations we will encounter. In addition, these graduate students represent the future of the community, so involving them in this process early will help push it forward. Participation in the RC also provides useful insight into the value of artifacts and the process of artifact evaluation, and helps establish community norms for artifacts. We therefore seek to include a broad cross-section of the PADS community on the RC.

Naturally, the RC chairs will devote considerable attention to both mentoring and monitoring the junior members of the RC, helping to educate the students on their responsibilities and privileges.

 

For inquiries, please contact the Reproducibility Committee Chair Prof. Dr. Adelinde M. Uhrmacher.

 

 
