Accurate predictions and a correct assessment of uncertainty are indispensable for all types of future-oriented decisions: from determining appropriate inventory levels and predicting company sales to forecasting revenues and costs and formulating long-term strategy. The purpose of the Makridakis, or M, Competitions has been to provide empirical evidence to guide organizations on how to improve the accuracy of their forecasts and how to assess future uncertainty as realistically as possible.
The M Competitions started more than 40 years ago and have attained global coverage and a respected reputation both in the academic world and among practitioners. It all started in 1979, when Michel Hibon and I published a paper in the Journal of the Royal Statistical Society. It was followed by the first M Competition, whose results, based on 1,001 series, were published in 1982; the M2, published in 1993, which aimed at incorporating judgmental inputs into statistical forecasts; and the M3, published in 2000, which drew its findings and recommendations from 3,003 series. Each competition added to our knowledge about forecasting and provided concrete evidence of how firms can benefit from a more scientific use of forecasting. The latest M Competition, which ended five months ago, expanded this coverage and scope to 100,000 series, with 49 individual participants and teams from around the world predicting these series and providing estimates of uncertainty.
The findings of the M4 Competition provide a wealth of theoretical and practical information for improving the accuracy of forecasts and for assessing uncertainty more precisely, while also proving that new forecasting methodologies can be applied successfully, including a new, hybrid approach that significantly improved both accuracy and the assessment of uncertainty. The 100,000 series used in the competition cover six application domains (macro, micro, demographic, industry, financial and other) and six time frequencies (yearly, quarterly, monthly, weekly, daily and hourly), allowing comparisons that suggest how to select the most accurate methods for each subcategory of application and frequency. Organizations can benefit by following the conclusions of the competition, which empirically identify the most appropriate methods for their specific forecasting requirements.
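For readers who want to reproduce such accuracy comparisons on their own series, the headline measures reported in M4 were the symmetric MAPE (sMAPE) and the Mean Absolute Scaled Error (MASE), whose averages relative to a Naive 2 benchmark form the Overall Weighted Average (OWA). The sketch below implements the two measures in plain Python; the example series and forecasts are purely illustrative, not M4 data.

```python
# Minimal sketch of the two headline accuracy measures used in M4:
# sMAPE and MASE. (Their averages relative to the Naive 2 benchmark
# give M4's Overall Weighted Average, OWA.) Example data is made up.

def smape(actual, forecast):
    """Symmetric MAPE in percent: 200/h * sum |F-A| / (|A| + |F|)."""
    h = len(actual)
    return (200.0 / h) * sum(
        abs(f - a) / (abs(a) + abs(f)) for a, f in zip(actual, forecast)
    )

def mase(insample, actual, forecast, m=1):
    """Mean Absolute Scaled Error; m is the seasonal period.

    The out-of-sample mean absolute error is scaled by the in-sample
    mean absolute error of the seasonal naive method.
    """
    n = len(insample)
    scale = sum(
        abs(insample[t] - insample[t - m]) for t in range(m, n)
    ) / (n - m)
    h = len(actual)
    return sum(abs(f - a) for a, f in zip(actual, forecast)) / (h * scale)

# Illustrative history, actual future values and a forecast.
history = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]
future = [104, 118]
prediction = [110, 115]

print(round(smape(future, prediction), 2))
print(round(mase(history, future, prediction, m=1), 2))
```

Lower is better for both measures; a MASE below 1 means the forecast beats the in-sample naive benchmark on average.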
A paper describing the initial findings of the M4 Competition can be downloaded free of charge from ScienceDirect: https://www.sciencedirect.com/science/article/pii/S0169207018300785
A special issue of the International Journal of Forecasting covering all aspects of the M4 Competition is under preparation and will be ready for the next International Symposium on Forecasting, to be held in Thessaloniki, Greece, in June next year. Below is a table from the detailed paper describing the M4 and its findings (currently under review for inclusion in the special issue), showing the accuracy achieved across the four M Competitions along with some interesting statistics demonstrating the consistency of the results over a period spanning nearly four decades.
Table: The accuracy of four standard forecasting methods across the four M Competitions, plus the best-performing method in each competition
In order to disseminate the findings of the M4 Competition to as wide an audience as possible, I am organizing a conference to elaborate on those findings, discuss their practical implications and explain how they can be applied by business and other organizations in their efforts to improve forecasting accuracy and correctly assess future uncertainty.
The M4 Conference program includes distinguished speakers from major software/technology companies (Google, Microsoft, Amazon, Uber and SAS) as well as well-known academics from top-level universities. It features presentations of the three most accurate methods of the M4 Competition by their developers in person, including the hybrid approach developed by Slawek Smyl of Uber that achieved the top spot; the developers will also discuss how their methods can be implemented, as their code is available on GitHub. The conference will cover all critical areas of forecasting, including combining methods and introducing judgmental adjustments, with special emphasis on comparisons between Machine Learning and statistical forecasting methods and on how to assess uncertainty as precisely as possible.
The rich conference program includes a keynote address by Nassim Nicholas Taleb (the author of The Black Swan and Skin in the Game), who will discuss uncertainty in forecasting, and a talk by myself presenting the major findings of the M4 Competition and explaining how organizations can benefit from them. There are also two invited speakers: Professor Scott Armstrong of Wharton, who will talk about “Data Models Versus Knowledge Models in Forecasting”, and Andrea Pasqua of Uber, who will talk about forecasting at Uber. In addition to the distinguished speakers, there will also be panel discussions covering major forecasting issues.
The Conference will be held at the elegant venue of Tribeca Rooftop with an excellent view of Manhattan.