Call for artifacts
For the first time in its history, the Euro-Par conference series encourages authors to participate in the Artifact Evaluation Process (AEP). The authors of papers accepted at Euro-Par 2018 will be formally invited to submit their supporting material (e.g., source code, tools, benchmarks, datasets, models) to the AEP to assess the reproducibility of the experimental results presented in the paper. The artifact will undergo a completely independent review process, run by a separate committee of experts who will assess the quality of the artifact, the reproducibility of the experimental results shown in the paper, and the usefulness of the material and guidelines provided with the artifact.
Papers whose artifacts are accepted will receive a seal of approval printed on their first page in the final proceedings published by Springer. The artifact material will be made publicly available.
Although strongly encouraged, the artifact evaluation process is entirely optional and will not, in any case, affect the acceptance decisions already made on Euro-Par papers.
Important dates
- Artifact submission deadline: May 10, 2018
- Technical clarification window: May 16-18, 2018
- Notification of the decision: May 25, 2018
- Final version of the artifact to be uploaded: May 30, 2018
Submission guidelines
If your paper is accepted at Euro-Par 2018, you can submit your artifact before the deadline using this EasyChair link.
The title and authors of your artifact submission must match those of the accepted paper. Your artifact submission will take one of two forms:
- A document containing a URL pointing to a single ZIP file with the artifact, plus an MD5 hash of that file (use the md5 or md5sum command-line tool to generate the hash).
- Direct upload: the artifact uploaded directly to EasyChair (if it’s less than 50MB).
In the first case, the URL must be a Google Drive or Dropbox URL, to help protect the anonymity of the reviewers.
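The checksum step above can be sketched in Python, producing the same hex digest that md5 or md5sum would print (the file name artifact.zip is a placeholder for your actual archive):

```python
import hashlib

def md5_of_file(path, chunk_size=8192):
    """Compute the MD5 hex digest of a file, reading it in chunks
    so large ZIP archives do not need to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example (hypothetical file name):
# print(md5_of_file("artifact.zip"))
```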
Your artifact must contain an Overview Document as described below in PDF format. You will upload the document separately as part of your submission, but you must also include the document within the artifact itself.
A valid artifact is a working copy of the software (and its dependencies) that supports the paper's conclusions. The ZIP file includes, along with the Overview Document, the README files, datasets, examples, benchmarks, and case studies needed to reproduce the results in the accepted paper. All packages, dependencies, and any additional software required to run the artifact must be explicitly listed in the Overview Document and, where possible, included in the ZIP file. Artifacts that need proprietary software released under non-open-source licenses, or that cannot be freely (and anonymously) downloaded, will not be evaluated by the committee.
All artifacts will receive one review. The review will consist of a few comments stating whether the evaluation was successful and possibly providing hints for improving the Overview Document. During the technical clarification window, the reviewers can anonymously ask the corresponding authors of the artifact to resolve technical issues encountered. The issues must be clarified within a few days; otherwise the artifact will not be accepted.
Overview Document for the artifact
The Overview Document (that should be just a few pages) must contain all the exact steps to install, compile and execute the artifact. Notably, the document must include comprehensive guidelines to assess the quality of the execution’s outcome and how to interpret the results with respect to the Euro-Par accepted paper. Your overview document should consist of two parts or sections:
- a Getting Started Guide, and
- Step-by-Step Instructions on how to reproduce the results (with appropriate connections to the relevant sections of your paper).
The Getting Started Guide should contain setup instructions, including any additional software to install with its exact version, and basic testing of your artifact. This phase is expected to take no more than 30 minutes. Write your Getting Started Guide to be as simple and straightforward as possible, while still exercising the key elements of your artifact. If well written, anyone who has successfully completed the Getting Started Guide should have no technical difficulties with the rest of your artifact.
The Step-by-Step Instructions should explain, in full detail, how to reproduce any experiments or other activities that support the conclusions in your paper. Write this part so that it is useful to future researchers who take a deep interest in your work and want to compare against or improve on your results. In this section, you must indicate the exact platform used for your tests and, for each input dataset needed to reproduce your experiments, the execution time it took on your system.
If reproducing your experiments with the artifact takes several hours, clearly state this at the beginning of the Step-by-Step section and point out ways to run them on smaller inputs to reduce execution time (while still obtaining qualitatively acceptable results). Artifacts requiring only long-running executions to produce meaningful results will not be evaluated.
Where appropriate, include descriptions of each test and link to files (included in the ZIP) that represent expected outputs, e.g., the log files expected to be generated by your tool on the given inputs, or expected results for each input file.
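Where the expected output is plain text, the comparison against the files shipped in the ZIP can be sketched as follows (the file names are hypothetical, and a real check may need to tolerate timestamps or other machine-specific noise):

```python
def outputs_match(actual_path, expected_path):
    """Compare a tool's output file line by line against the expected
    output shipped with the artifact, ignoring trailing whitespace."""
    with open(actual_path) as actual, open(expected_path) as expected:
        return [line.rstrip() for line in actual] == [line.rstrip() for line in expected]

# Example (hypothetical file names):
# if outputs_match("run.log", "expected/run.log"):
#     print("output matches the expected log")
```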
For performance experiments, it is understood that results may not match perfectly those in the paper, due to differences in the reviewers’ hardware. However, the artifact evaluators should be able to reproduce the same qualitative outcomes contained in the paper.
Where possible, please automate data extraction and the production of plots, so that the experiments run with the artifact produce figures matching those in the paper.
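As a minimal illustration, such automation might start from a small parser that pulls timings out of a log file; the log format assumed here ("name: seconds s") is purely hypothetical, and the resulting dictionary could then be fed to a plotting library of your choice:

```python
import re

def parse_timings(log_text):
    """Extract {benchmark: seconds} from log lines of the
    (hypothetical) form 'benchmark_name: 12.34 s'."""
    pattern = re.compile(r"^(\S+):\s+([\d.]+)\s*s$", re.MULTILINE)
    return {name: float(seconds) for name, seconds in pattern.findall(log_text)}
```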
Selection Criteria
The criteria used for the evaluation are as follows:
- Artifacts should be consistent with the paper
- Artifacts should be as self-contained as possible
- The documentation provided must give clear guidelines on how to validate and verify the results
- Artifacts should be easy to reuse and should facilitate further research
- Artifacts requiring only long-running executions will not be evaluated
- Artifacts requiring specialized hardware and/or complex network topologies/infrastructures and/or large cluster configurations will not be evaluated.
The ideal target platform for evaluating the artifact is a small cluster (1-3 nodes) of standard multicore servers, each equipped with one GPU and interconnected via a standard switched Ethernet network. The reference OS is Linux. Artifacts needing specific non-commodity hardware not available to the Evaluation Committee will not be evaluated.