Posted at 11:15 AM ET, 05/08/2008

In Defense of Meteorologists: Part 2

By Steve Tracton

In my post last week, I commented on the misleading (and a few flat-out wrong) statements about weather forecasting that appeared in a post on the New York Times Freakonomics blog. The statements were made in connection with a study, by a father and his fifth-grade daughter, of temperature and precipitation forecasts from Kansas City broadcast meteorologists and the National Weather Service.

The study was done with good intentions and apparent rigor, yet there were significant flaws that made the results and conclusions highly questionable. Furthermore, the post included comments from some of the broadcasters and their station managers that were exaggerated, easily misconstrued or simply invalid, and that were generalized, erroneously, to all weather forecasters and to the science of meteorology.

Let's take a closer look at some of this misinformation, the most glaring of which was this comment credited to one of the TV weathercasters: "We have no idea what's going to happen (in the weather) beyond three days out."

Keep reading for further defense of meteorologists. See our full forecast through the weekend, and UnitedCast for the forecast for tonight's game at RFK.

Any true assessment of forecast skill must take into account what is being predicted (e.g., temperature, clouds, precipitation), location and scale.

You may have heard the mantra that today's five-day forecasts are as good as three-day forecasts were about 20 years ago. This is generally true, for example, for temperatures and winter storms. And in fact, predictability of larger-scale features of the atmosphere -- such as jet stream position and location of low-pressure and high-pressure areas in the middle to upper atmosphere -- might eventually extend to two weeks or so (from five to seven days currently).

But for smaller-scale features such as summertime showers, thunderstorms and associated rainfall amounts, predictability likely will never exceed 12-24 hours (from around a few to 12 hours now).

Probably the only thing certain about weather forecasting (and other fields, too, such as economics) is that there will always be some uncertainty. The degree of uncertainty may range from being too small to make a difference for most purposes to so large that it renders a forecast no more valuable than a random number generator.

Too often, however, forecasts are provided as "deterministic," single-valued quantities, such as a predicted high temperature of 75 degrees for seven days from now, without any measure of forecast confidence (CWG being a notable exception, of course!) or range of possible error (e.g., +/- 3°).

The uncertainty at longer ranges is usually larger (a wider envelope of possible scenarios, akin to the "cone of uncertainty" used in hurricane track forecasts) than at shorter ranges (a narrower envelope). Generally speaking, the larger (smaller) the uncertainty, the less (more) likely any particular scenario within the envelope will be correct. Thus, it should not be surprising that a single-value forecast made several (five to seven) days ahead will usually differ from, and be less accurate than, one made closer (two to three days) to the time in question, as was noted in the study posted on Freakonomics.
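For readers who like to put a rough number on that idea, here is a small, purely illustrative Python sketch. It assumes forecast error is roughly normally distributed and that its spread (the envelope) grows with lead time; the spread values are made up for illustration and are not taken from any real model.

from math import erf, sqrt

def chance_within(tolerance_deg, spread_deg):
    # Probability that a normally distributed error falls within +/- tolerance.
    return erf(tolerance_deg / (spread_deg * sqrt(2)))

# Assumed (illustrative) error spreads by lead time, in degrees.
for lead_days, spread in [(1, 2.0), (3, 3.5), (5, 5.0), (7, 7.0)]:
    p = chance_within(3.0, spread)
    print(f"Day {lead_days}: assumed spread {spread} deg -> "
          f"{p:.0%} chance the single-value forecast verifies within 3 deg")

Under these assumptions the chance of a single-value temperature forecast landing within 3 degrees of what actually happens drops from roughly 87 percent at day 1 to about a third at day 7, which is the widening-envelope effect described above in miniature.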

What's most important, though, is not whether a deterministic, longer-range forecast changes with time (which it almost invariably will), but whether the forecaster communicates the confidence associated with the prediction.

Forecast confidence would be easier to explain if it depended only on lead time. However, chaos theory (or, shall we say, the butterflies, or perhaps an allergy-induced sneeze) can shake things up in unexpected ways, such that sometimes a seven-day forecast can be as good as a three-day forecast, or, conversely, a three-day forecast may be as poor as a seven-day forecast. And the relative confidence at any given lead time can vary significantly from one weather scenario to another (for additional insight, refer to my presentation summarized in the latest D.C. Chapter of the American Meteorological Society newsletter).

The scenario-dependent level of confidence can be estimated by comparing the output of different forecast models, drawing on forecaster experience, and using ensemble predictions (multiple runs of the same forecast model, each starting from slightly different initial conditions). Naturally, one hopes the confidence levels applied to forecasts are reliable; that is, higher-confidence forecasts are more likely to verify well than lower-confidence forecasts.
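For the curious, here is a toy Python sketch of the ensemble idea. The "model" is an invented stand-in with made-up numbers, not anything operational: perturb the starting conditions slightly, run the model many times, and use the spread of the results as a rough measure of confidence.

import random

def toy_model(initial_temp, days):
    # Stand-in for a real forecast model: each day adds some random drift.
    temp = initial_temp
    for _ in range(days):
        temp += random.gauss(0.0, 1.5)   # assumed day-to-day variability
    return temp

random.seed(42)
# 50 ensemble members, each starting from a slightly perturbed initial temperature.
members = [toy_model(75.0 + random.gauss(0.0, 0.5), days=5) for _ in range(50)]
mean = sum(members) / len(members)
spread = (sum((m - mean) ** 2 for m in members) / len(members)) ** 0.5

print(f"Ensemble mean day-5 temperature: {mean:.1f} F")
print(f"Ensemble spread (std dev): {spread:.1f} F  (smaller spread = higher confidence)")

When the members cluster tightly, the forecaster can speak with more confidence; when they scatter widely, that is the signal to hedge.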

So, what about these two conclusions from the Freakonomics post?

"No forecaster is ever better than just assuming it won't rain."

"For all days beyond the next day out, viewers would be better off flipping a coin to predict rainfall."

Sure, the former may be true when forecasting for San Diego during the dry season, when one can be close to 100-percent confident in predicting no rain day after day, regardless of whether the forecast is for tomorrow or for seven days from now. However, one could certainly not get away with such a forecast for D.C. (or Kansas City). Predicting precipitation in these locations, especially in summer, is one of the most difficult challenges encountered by forecasters.

In acknowledgment of this difficulty, precipitation forecasts have long been expressed in probabilistic terms. Used properly, probabilities retain more skill than a deterministic yes/no forecast. As for predicting rainfall by flipping a coin, it turns out that forecasting the climatological probability of precipitation (about 30 percent in the D.C. area), even when confidence is low, is a much better approach than the yes/no forecast given by a coin toss. Beware, though: a 30 percent chance of rain is intended to mean there is a 30 percent chance that a given location will receive 0.01" or more of precipitation. This is a pretty low threshold, which could be met by anything from a period of light drizzle to a storm that drops several inches of rain in a couple of hours.
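Here is a quick back-of-the-envelope check of that claim in Python, scoring three simple strategies with the Brier score (the mean squared error of a probability forecast, where lower is better). The 30 percent climatological chance of rain comes from the discussion above; the rest of the setup is purely illustrative.

import random

random.seed(0)
CLIMO = 0.30                    # climatological chance of measurable rain (from the post)
days = [1 if random.random() < CLIMO else 0 for _ in range(100_000)]   # 1 = rain

def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

coin_flip   = [random.choice([0.0, 1.0]) for _ in days]   # yes/no by coin toss
never_rains = [0.0] * len(days)                            # "just assume it won't rain"
climatology = [CLIMO] * len(days)                          # always forecast 30 percent

for name, fcst in [("coin flip", coin_flip), ("never rains", never_rains),
                   ("climatology", climatology)]:
    print(f"{name:12s} Brier score: {brier(fcst, days):.3f}")

With these assumptions the coin flip scores about 0.5, always predicting no rain about 0.3, and the climatological 30 percent about 0.21, consistent with the point that a modest, honest probability beats a yes/no guess.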

Bottom line: From the perspective of both research and application, meteorology is one of the most challenging of all sciences. We've come a long way, but we are still far from the understanding and modeling capabilities needed for practical forecast skill to come even close to its theoretical limits. Forecasts are not now, nor will they ever be, perfect in all respects. But even an imperfect forecast can be of significant value to users if accompanied by appropriate confidence and uncertainty information.

The fact of the matter is, current capabilities of weather forecasting can't be accurately represented by the kind of blanket statements and conclusions put forth by the Freakonomics weather study. While researchers and forecasters alike readily acknowledge there is much still to learn, we should not underestimate our increasing knowledge of the atmosphere, nor undermine the credibility of the vast majority of professional forecasters.

Forecasters can do their part by characterizing and communicating forecast confidence and uncertainty using all of the information and technology at their disposal. Meanwhile, one hopes the average consumer of weather forecasts can distinguish erroneous and exaggerated pronouncements from realistic expectations about what forecasts can and cannot deliver.

That ability is essential for maximizing the value of forecasts in making weather-dependent decisions, whether that means wrapping the kids in rain gear, planning a weekend outing, or preparing for a potential snowstorm or hurricane.

The author is chair of the D.C. Chapter of the American Meteorological Society, and is working on a book about the source and nature of weather forecast uncertainty, and how users can optimize the value of forecasts by factoring uncertainty information into risk analysis and decision-making.

By Steve Tracton  | May 8, 2008; 11:15 AM ET
Categories:  Science, Tracton  
Previous: UnitedCast: Cloudy With Thundershowers Possible
Next: CommuteCast: Flooding Rains Possible Overnight
