The first Makridakis Competition, held in 1982, and known in the forecasting literature as the M-Competition, used 1001 time series and 15 forecasting methods (with another nine variations of those methods included). According to a later paper by the authors, the following were the main conclusions of the M-Competition (Makridakis et al., 1982):

  1. Statistically sophisticated or complex methods do not necessarily provide more accurate forecasts than simpler ones.
  2. The relative ranking of the performance of the various methods varies according to the accuracy measure being used.
  3. Combining the forecasts of various methods outperforms, on average, the individual methods being combined and does very well in comparison with other methods.
  4. The accuracy of the various methods depends on the length of the forecasting horizon involved.

The findings of the study have since been verified and replicated by other researchers through subsequent competitions and new methods.

Newbold (1983) was critical of the M-Competition, and argued against the general idea of using a single competition to attempt to settle such a complex issue.

Before the first competition, the Makridakis–Hibon Study
Before the first M-Competition, Makridakis and Hibon (Makridakis and Hibon, 1979) published in the Journal of the Royal Statistical Society (JRSS) an article showing that simple methods perform well in comparison with more complex and statistically sophisticated ones. Statisticians at the time criticized the results, claiming that they could not be correct. Their criticism motivated the subsequent M, M2 and M3 Competitions, which confirmed beyond the slightest doubt the findings of the Makridakis and Hibon study.