Ran into an interesting piece just past peer review and scheduled for publication, the first three pages of which I've included here. In a nutshell, the paper seeks to define the utility of forecasting multivariate phenomena, and at what point a prediction hinders rather than helps public policy planning. The entire piece can be read here:
http://www.forecastingprinciples.com/Public_Policy/WarmAudit31.pdf
Turns out there is an entire website and budding science devoted to the propagation of effective forecasting tools. "Global warming" forecasts don't fare too well under the empirical gaze used here:
http://www.forecastingprinciples.com/
The piece follows below:
Global Warming: Forecasts by Scientists versus Scientific Forecasts*
Kesten C. Green, Business and Economic Forecasting Unit, Monash University
c/o PO Box 10800, Wellington 6143, New Zealand.
kesten@kestencgreen.com; T +64 4 976 3245; F +64 4 976 3250
J. Scott Armstrong†, The Wharton School, University of Pennsylvania
747 Huntsman, Philadelphia, PA 19104, USA.
armstrong@wharton.upenn.edu
(This paper is a draft of an article that is forthcoming in Energy and Environment.)
Version 43 – July 10, 2007
Abstract
In 2007, the Intergovernmental Panel on Climate Change’s Working Group One, a panel of
experts established by the World Meteorological Organization and the United Nations
Environment Programme, issued forecasts in its updated Fourth Assessment Report. The Report
was commissioned at great cost in order to provide policy recommendations to governments. It
included predictions of dramatic and harmful increases in average world temperatures over the
next 92 years. Using forecasting principles as our guide we asked, are these forecasts a good basis
for developing public policy? Our answer is “no.”
To provide forecasts of climate change that are useful for policy-making, one would need
to forecast (1) global temperature, (2) the effects of any temperature changes, (3) the effects of
alternative policies, and (4) whether the best policy would be successfully implemented. Proper
forecasts of all four are necessary for rational policy making.
The IPCC Report was regarded as providing the most credible long-term forecasts of
global average temperatures by 31 of the 51 scientists and others involved in forecasting climate
change who responded to our survey. We found no references to the primary sources of
information on forecasting methods, despite the fact that these are easily available in books, articles,
and websites. In our audit of Chapter 8 of the IPCC’s WG1 Report, we found enough information
to make judgments on 89 out of a total of 140 forecasting principles. The forecasting procedures
that were described violated 72 principles. Many of the violations were, by themselves, critical.
We concluded that the forecasts in the Report were not the outcome of scientific
procedures. In effect, they were the opinions of scientists transformed by mathematics and
obscured by complex writing. Research on forecasting has shown that experts’ predictions are not
useful. Instead, policies should be based on forecasts from scientific forecasting methods. We
have been unable to identify any scientific forecasts of global warming. Claims that the Earth will
get warmer have no more credence than saying that it will get colder.
Keywords: accuracy, audit, climate change, evaluation, expert judgment, mathematical models,
public policy.
*Neither of the authors received funding for this paper.
† Information about J. Scott Armstrong can be found on Wikipedia.
“A trend is a trend,
But the question is, will it bend?
Will it alter its course
Through some unforeseen force
And come to a premature end?”
Alec Cairncross, 1969
Research on forecasting has been conducted since the 1930s. Of particular value are comparative
empirical studies to determine which methods are most accurate in given situations. The findings,
along with the evidence, were first summarized in Armstrong (1978, 1985). The forecasting
principles project, begun in the mid-1990s, summarized knowledge as evidence-based principles
(condition-action statements) to provide guidance on which methods to use in a given situation.
The project led to the Principles of Forecasting handbook (Armstrong 2001), which involved 40
authors (all internationally known experts on forecasting methods) along with 123 reviewers (also
leading experts on forecasting methods). The summarizing process alone required a four-year
effort.
Efforts have been made to ensure that these principles are easy to find. They have been freely
available on forecastingprinciples.com, a site that has been first on Google searches for
“forecasting” for many years. The directors’ objective for the site is to summarize all useful
knowledge on forecasting methods. There is no other source that provides evidence-based
forecasting principles. The site is often updated, and a recent update of evidence on some of the
key principles was published in Armstrong (2006).
Many of the principles go beyond common sense, and some are counter-intuitive. As a result,
those who forecast in ignorance of the research literature are unlikely to produce useful
predictions. For example, here are some of the well-established generalizations for situations
involving long-term forecasts of complex issues where the causal factors are subject to
uncertainty (as with climate):
• Unaided judgmental forecasts by experts have no value. This applies whether the
opinions are expressed in words, spreadsheets, or mathematical models. It also
applies regardless of how much scientific evidence is possessed by the experts.
Among the reasons for this are:
a) Complexity: People cannot assess complex relationships through
unaided observations.
b) Coincidence: People confuse correlation with causation.
c) Feedback: People making judgmental predictions typically do not
receive unambiguous feedback they can use to improve
their forecasting.
d) Bias: People have difficulty in obtaining or using evidence that
contradicts their initial beliefs. This problem is especially
serious for people who view themselves as experts.
• Agreement among experts is weakly related to accuracy. This is especially true
when the experts communicate with one another and when they work together to
solve problems, as is the case with the IPCC process.
• Complex models (those involving nonlinearities and interactions) harm accuracy
because their errors multiply. Ascher (1978) refers to the Club of Rome’s 1972
forecasts where, unaware of the research on forecasting, the developers proudly
proclaimed, “in our model about 100,000 relationships are stored in the computer.”
Complex models also tend to fit random variations in historical data well, with the
consequence that they forecast poorly and provide misleading conclusions about the
uncertainty of the outcome. Finally, when complex models are developed there are
many opportunities for errors and the complexity means the errors are difficult to
find. Craig, Gadgil, and Koomey (2002) came to similar conclusions in their review
of long-term energy forecasts for the US made between 1950 and 1980.
• Given even modest uncertainty, prediction intervals are enormous. For example,
prediction intervals (ranges outside which outcomes are unlikely to fall) expand
rapidly as time horizons increase, so that one is faced with enormous intervals even
when trying to forecast a straightforward thing such as automobile sales for General
Motors over the next five years.
• When there is uncertainty in forecasting, forecasts should be conservative.
Uncertainty arises when data contain measurement errors, when the series are
unstable, when knowledge about the direction of relationships is uncertain, and
when a forecast depends upon forecasts of related (causal) variables. For example,
forecasts of no change were found to be more accurate than trend forecasts for
annual sales when there was substantial uncertainty in the trend lines (e.g., Schnaars
and Bavuso 1986). This principle also implies that forecasts should revert to long-
term trends when such trends have been firmly established, do not waver, and there
are no firm reasons to suggest that they will change. Finally, trends should be
damped toward no change as the forecast horizon increases.
These conclusions were drawn from the forecasting principles in the edited handbook on
forecasting (Armstrong 2001) and they are described at forecastingprinciples.com. A summary of
the principles, now numbering 140, is provided in the Forecasting Audit on the site, where they
are presented as a checklist.
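The damping principle in the last bullet can be made concrete with a toy sketch. Everything here (the starting level and trend, the damping factor `phi`, the function name) is our own illustration rather than a value or method taken from the handbook; the point is only the qualitative behavior: as the horizon grows, the damped forecast levels off toward "no change," while a naive linear extrapolation grows without bound.

```python
# Toy illustration of trend damping: at each step ahead, the projected
# per-period trend is shrunk by another factor of phi, so long-horizon
# forecasts revert toward the last observed level.
# phi and the series values below are illustrative assumptions only.

def damped_trend_forecast(level, trend, horizon, phi=0.9):
    """Forecast `horizon` periods ahead, damping the trend by phi each step."""
    # Cumulative damped trend: trend * (phi + phi**2 + ... + phi**horizon)
    damped_sum = sum(phi ** h for h in range(1, horizon + 1))
    return level + trend * damped_sum

last_level, last_trend = 100.0, 2.0

# Undamped linear extrapolation 50 periods ahead keeps climbing...
undamped_50 = last_level + last_trend * 50   # 200.0
# ...while the damped forecast flattens near level + trend*phi/(1-phi) = 118.
damped_50 = damped_trend_forecast(last_level, last_trend, 50)
```

The geometric shrinkage is one common way to damp a trend; the general principle is simply that the further out the forecast, the less of the fitted trend should be projected.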
The Forecasting Problem
In determining the best policies to deal with the climate of the future, a policy maker first has to
select an appropriate statistic to use to represent the changing climate. By convention, the statistic
is the averaged global temperature as measured with thermometers at ground stations throughout
the world, though in practice this is a far from satisfactory metric (e.g., Essex et al., 2007).
It is then necessary to obtain forecasts and prediction intervals for each of the following:
1. What will happen to the mean global temperature in the long-term (say 20 years or
longer)?
2. If accurate forecasts of mean global temperature changes can be obtained and these
changes are substantial, then it would be necessary to forecast the effects of the
changes on the health of living things and on the health and wealth of humans. The
concerns about changes in global mean temperature are based on the assumption that
the earth is currently at the optimal temperature and that variations over years (unlike
variations within years) are undesirable. For a proper assessment, costs and benefits
must be comprehensive. (For example, policy responses to Rachel Carson’s Silent
Spring should have been based in part on forecasts of the number of people who
might die from malaria if DDT use were reduced).
3. If reliable forecasts of the effects of the temperature changes on the health of living
things and on the health and wealth of humans can be obtained and the forecasts are
for substantial harmful effects, then it would be necessary to forecast the costs and
benefits of alternative policy proposals.
4. If reliable forecasts of the costs and benefits of alternative policy proposals can be
obtained and at least one proposal is predicted to lead to net benefits, then it would be
necessary to forecast whether the policy changes can be implemented successfully.
If reliable forecasts of policy implementation can be obtained and the forecasts clearly support net
benefits for the policy, and the policy can be successfully implemented, then the policy proposal
should be implemented. A failure to obtain scientifically validated forecasts at any stage would
render subsequent stages irrelevant. Thus, we focus on the first of the four forecasting problems.
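The sequential gating described above can be sketched as a short pipeline. The function name and the convention that a stage returns `None` when no scientifically validated forecast can be obtained are our own illustrative choices, not anything specified by the paper:

```python
# Illustrative sketch of the four-stage gating argument: each stage must
# yield a scientifically validated forecast; if any stage fails, the later
# stages are irrelevant and the policy cannot be justified on forecasting
# grounds. A stage function returning None models "no validated forecast".

def justify_policy(forecast_temperature, forecast_effects,
                   forecast_policy_costs, forecast_implementation):
    stages = [forecast_temperature, forecast_effects,
              forecast_policy_costs, forecast_implementation]
    results = []
    for stage in stages:
        outcome = stage(results)      # each stage may use earlier results
        if outcome is None:           # no validated forecast at this stage
            return False              # subsequent stages are irrelevant
        results.append(outcome)
    return True                       # all four stages supported the policy

# Example: if stage 1 yields no validated long-term temperature forecast,
# the remaining three stages are never evaluated.
assert justify_policy(lambda r: None,
                      lambda r: "effects",
                      lambda r: "net benefit",
                      lambda r: "implementable") is False
```

This is why the authors confine their audit to the first stage: a failure there short-circuits the entire chain.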
Is it necessary to use scientific forecasting methods? That is, is it necessary to use methods
that have been shown by empirical validation to be relevant to the types of problems involved in
climate forecasting? Or is it sufficient to have leading scientists examine the evidence and make
forecasts? We address this issue before moving on to our audits.