The computational artifacts of a paper include all the elements that support the reproducibility of its experiments: software, datasets, environment configurations, mechanized proofs, benchmarks, test suites with scripts, and so on. Authors can share their artifacts through any archival or version-controlled software and data repository, such as Zenodo, FigShare, Dryad, Software Heritage, GitHub, or GitLab.
The artifact description, in addition to documenting the computational artifacts, should also include links to the relevant repositories. If needed, README files can be attached to the computational artifacts, either repeating the information in the artifact description or complementing it, for instance with more detailed instructions and documentation. As a general rule, authors should do their best to simplify the reproducibility process and spare committee members the burden of reverse-engineering the authors' intentions. For example, a tool without a quick tutorial is generally very difficult to use; similarly, a dataset is useless without an explanation of how to browse the data. For software artifacts, the artifact description and the README should, at a minimum, provide instructions for installing the software and running it on relevant inputs. For other types of artifacts, describe the artifact and explain how to "use" it in a meaningful way.
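One low-effort way to meet this bar is a single entry-point script that encodes the install-and-run instructions so evaluators do not have to piece commands together from the documentation. The sketch below is illustrative only: the build command, the tool name `mytool`, and the file paths are hypothetical placeholders for whatever your artifact actually requires.

```python
#!/usr/bin/env python3
"""reproduce.py -- one-command entry point for the evaluation.

A minimal sketch: 'mytool' and the input/output paths below are
hypothetical placeholders for your artifact's actual commands.
"""
import subprocess
import sys

# Each step mirrors an instruction that would otherwise live in the README.
STEPS = [
    # 1. Build the tool from source (placeholder build command).
    ["make", "-C", "src"],
    # 2. Run it on a small, representative input (placeholder invocation).
    ["./src/mytool", "--input", "examples/small.json",
     "--output", "results/small.out"],
    # 3. Compare the output against the expected result from the paper.
    ["diff", "results/small.out", "expected/small.out"],
]

for cmd in STEPS:
    print("$", " ".join(cmd))
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"step failed: {' '.join(cmd)}")

print("All steps completed; output matches the expected result.")
```

Even when the full experiments take hours, such a script run on a reduced input gives evaluators a quick end-to-end sanity check before they commit to the complete runs.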
Importantly, make your claims about your artifacts concrete. This is especially important if you think that these claims differ from the expectations set up by your paper. The Reproducibility Committee will still evaluate your artifacts relative to your paper, but your explanation can help set expectations up front, especially in cases that might otherwise frustrate the evaluators.
Packaging Methods
Configuring the software dependencies of an artifact can consume a significant amount of evaluators' time. To alleviate this burden, authors should consider one of the following methods for packaging the software components of their artifacts:
- Source Code: If your artifact has few dependencies and can be installed easily on several operating systems, you may submit source code and build scripts (see the dependency-check sketch after this list). However, if your artifact has a long list of dependencies, please use one of the other formats below.
- Virtual Machine/Container: A virtual machine or Docker image containing the software application already set up with the right toolchain and intended runtime environment. For example:
- For raw data, the VM would contain the data and the scripts used to analyze it.
- For a mobile phone application, the VM would have a phone emulator installed.
- For mechanized proofs, the VM would contain the right version of the relevant theorem prover.
We recommend using a format that is easy for Committee members to work with, such as OVF or Docker images. An AWS EC2 instance is also possible.
- Binary Installer: Indicate exactly which platform and other run-time dependencies your artifact requires.
- Live Instance on the Web: Ensure that it is available for the duration of the artifact evaluation process.
- Internet-accessible Hardware: If your artifact requires special hardware (e.g., GPUs or clusters), or if your artifact is actually a piece of hardware, please make sure that Committee members can somehow access the device. VPN-based access to the device might be an option.
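Whichever packaging method you choose, a small self-check script can save evaluators time by diagnosing missing dependencies before they attempt the full experiments. Below is a minimal sketch in Python; the required Python version, package list, and data directory are hypothetical placeholders, not requirements of the evaluation process.

```python
#!/usr/bin/env python3
"""check_env.py -- verify that the artifact's dependencies are available.

A minimal sketch: the version, package names, and data path below are
hypothetical placeholders; replace them with your artifact's actual
requirements.
"""
import importlib.util
import pathlib
import sys

REQUIRED_PYTHON = (3, 9)                 # placeholder minimum version
REQUIRED_PACKAGES = ["numpy", "pandas"]  # placeholder package list
DATA_DIR = pathlib.Path("data")          # placeholder dataset location

def main() -> int:
    ok = True
    if sys.version_info < REQUIRED_PYTHON:
        print(f"ERROR: Python {REQUIRED_PYTHON[0]}.{REQUIRED_PYTHON[1]}+ "
              f"required, found {sys.version.split()[0]}")
        ok = False
    for pkg in REQUIRED_PACKAGES:
        if importlib.util.find_spec(pkg) is None:
            print(f"ERROR: missing package '{pkg}' (try: pip install {pkg})")
            ok = False
    if not DATA_DIR.is_dir():
        print(f"ERROR: expected dataset directory '{DATA_DIR}' not found")
        ok = False
    print("Environment OK." if ok else "Environment check failed.")
    return 0 if ok else 1

if __name__ == "__main__":
    sys.exit(main())
```

Running it first (e.g., `python3 check_env.py`) turns a cryptic mid-experiment crash into an actionable message, and it works the same whether the artifact ships as source code, a VM, or a container.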
Preparation Sources
There are several sources of good advice about preparing artifacts for evaluation:
- HOWTO for AEC Submitters, by Dan Barowy, Charlie Curtsinger, Emma Tosch, John Vilk, and Emery Berger
- Artifact Evaluation: Tips for Authors, by Rohan Padhye
- SIGOPS articles on award-winning artifacts [1] and [2]
- GitHub CSArtifacts Resources