The second competition, called the M-2 Competition or M2-Competition (Makridakis et al., 1993), was conducted on a grander scale. A call for participation was published in the International Journal of Forecasting, announcements were made at the International Symposium on Forecasting, and a written invitation was sent to all known experts on the various time series methods. The M2-Competition was organized in collaboration with four companies, included six macroeconomic series, and was conducted on a real-time basis; all data came from the United States. The results, published in 1993, were claimed to be statistically identical to those of the original M-Competition.
The M2-Competition used far fewer time series than the original M-Competition. Whereas the original M-Competition had used 1001 time series, the M2-Competition used only 29, comprising 23 from the four collaborating companies and 6 macroeconomic series. Data from the companies was obfuscated through the use of a constant multiplier in order to preserve proprietary privacy. The purpose of the M2-Competition was to simulate real-world forecasting better in the following respects:
- Allow forecasters to combine their statistically based forecasting method with personal judgment.
- Allow forecasters to ask additional questions requesting data from the companies involved in order to make better forecasts.
- Allow forecasters to learn from one forecasting exercise and revise their forecasts for the next forecasting exercise based on the feedback.
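The constant-multiplier obfuscation mentioned above works because the percentage-based accuracy measures used in the M-competitions are invariant under scaling. The following sketch illustrates this with made-up values; the series, forecasts, and multiplier are hypothetical, not data from the competition.

```python
# Sketch: multiplying a series by an undisclosed constant hides its true
# scale but leaves percentage-based accuracy measures (such as MAPE)
# unchanged, so forecasts can still be evaluated fairly.
# All numbers below are invented for illustration.

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - f) / abs(a)
                     for a, f in zip(actual, forecast)) / len(actual)

actual   = [120.0, 135.0, 150.0, 160.0]   # hypothetical company series
forecast = [118.0, 140.0, 148.0, 155.0]   # hypothetical forecasts

k = 7.3  # undisclosed constant multiplier applied before release
scaled_actual   = [k * a for a in actual]
scaled_forecast = [k * f for f in forecast]

# The constant cancels in the ratio |a - f| / |a|, so MAPE is identical.
assert abs(mape(actual, forecast)
           - mape(scaled_actual, scaled_forecast)) < 1e-9
```

The same cancellation argument applies to any relative-error measure, which is why releasing scaled series preserved proprietary privacy without distorting the evaluation.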
The competition was organized as follows:
- The first batch of data was sent to participating forecasters in summer 1987.
- Forecasters had the option of contacting the companies involved via an intermediary in order to gather additional information they considered relevant to making forecasts.
- In October 1987, forecasters were sent updated data.
- Forecasters were required to send in their forecasts by the end of November 1987.
- A year later, forecasters were sent an analysis of their forecasts and asked to submit their next forecast in November 1988.
- The final analysis and evaluation of the forecasts began in April 1991, when the actual, final values of the data through December 1990 were known to the collaborating companies.
In addition to the published results, many of the participants wrote short articles describing their experience of participating in the competition and their reflections on what it demonstrated. Chris Chatfield praised the design of the competition but said that, despite the organizers’ best efforts, forecasters still lacked the inside access to the companies that practitioners would have in real-world forecasting. Fildes and Makridakis (1995) argued that, despite the evidence produced by these competitions, the implications continued to be largely ignored by theoretical statisticians.