Reproducibility of Results

Participants were asked to submit a detailed description of how their forecasts were made and a source program, or an execution file, for reproducing the forecasts for 100 randomly selected series.

Given the critical importance of objectivity and reproducibility, such a description and file were strongly recommended for participation in the Competition.
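
For illustration, the sketch below shows how such a random selection of 100 series can itself be made reproducible by fixing the random seed; the series identifiers and the seed are hypothetical and are not those used in the Competition.

```python
import random

# Hypothetical pool of series identifiers; the actual M4 dataset
# comprised 100,000 series across six sampling frequencies.
series_ids = [f"M{i}" for i in range(1, 48001)]

# Fixing the seed makes the "random" selection itself reproducible,
# so anyone re-running the check evaluates the same 100 series.
rng = random.Random(42)
selected = rng.sample(series_ids, k=100)
print(selected[:5])
```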

The prerequisite for the Full Reproducibility Prize was that the code used for generating the forecasts, with the exception of that of companies providing forecasting services and those claiming proprietary software, be put on GitHub no later than 10 days after the end of the Competition (i.e., by the 10th of June, 2018). In addition, instructions had to be provided on how to reproduce the submitted M4 forecasts exactly. In this way, individuals and companies can use the code and the instructions, crediting the person or group that developed them, to improve their own organizational forecasts.

Companies providing forecasting services and those claiming proprietary software had instead to provide the organisers with a detailed description of how their forecasts were made and a source program, or an execution file, for reproducing their forecasts for 100 randomly selected series. An execution file could be submitted if the source program needed to be kept confidential; alternatively, a source program with a termination date for running it could be provided.

The code for reproducing the results of the 4Theta method, submitted by the Forecasting & Strategy Unit, was put on GitHub before the 27th of December, 2017. This method was not considered for any of the Prizes.

The GitHub repository, which includes the code for reproducing the results of the benchmarks and of the participating methods, as well as for evaluating their accuracy, is publicly available.
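
As a rough sketch of what such an evaluation involves, the example below computes sMAPE, one of the accuracy measures used in the Competition, for a single series; the function and the data are illustrative assumptions and do not reflect the repository's actual interface.

```python
import numpy as np

def smape(actual: np.ndarray, forecast: np.ndarray) -> float:
    """Symmetric mean absolute percentage error, in percent."""
    return float(np.mean(2.0 * np.abs(forecast - actual)
                         / (np.abs(actual) + np.abs(forecast))) * 100.0)

# Illustrative holdout values and forecasts for one series.
actual = np.array([112.0, 118.0, 132.0, 129.0, 121.0, 135.0])
forecast = np.array([110.0, 120.0, 130.0, 131.0, 119.0, 137.0])
print(f"sMAPE: {smape(actual, forecast):.2f}%")
```

In the actual Competition, sMAPE was combined with MASE into the overall weighted average (OWA) used for ranking the submissions.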