The National Weather Service (NWS) office in Tallahassee is responsible for forecasts and warnings for 48 counties spread across north Florida, southwest Georgia, and southeast Alabama. We start with where precipitation is measured. The official rainfall record for the Tallahassee area is taken at the Tallahassee Regional Airport, and an official measurement of rain occurs when at least 0.01 inches is recorded there. The NWS office produces a map of the Probability of Precipitation, or POP, on a 2.5 km grid covering its 48 counties of responsibility, but the forecasts are verified only at six specific locations: the Tallahassee airport, Panama City, Cross City, Valdosta, Dothan, and Albany. So for this area, it is what happens at the airport that counts in determining forecast skill. It could be raining cats and dogs all over town, but if not at the airport, then no official rain is recorded.

During the passage of Fay, the official record was about 11.89 inches spread over 5 days, although many places east of town received nearly 20 inches or more, because many of the heaviest cells passed east of the airport, in eastern Leon County and Jefferson County. The average amount for the city was probably somewhere between these values.

The NWS office uses several pieces of information to formulate its rain forecast. The forecasters know the climatologically expected amount of rain for the month, they know from years of personal experience the local character of rain in their forecast area, and they also receive computer model and MOS (Model Output Statistics) products, one of which is the POP (Probability Of Precipitation) itself. Using all the information above, the forecasters at the Tallahassee office draw a map of POP on a 2.5 km grid, representative of areas that might cover, say, several counties, but the validation is only at the airports mentioned above.
But they have the flexibility to make the areas as big or as small as they think is correct for that probability and weather situation, and they do this for the entire 48-county area they are responsible for.
But why say 70% chance of rain instead of 100%, and when should you carry an umbrella? Experience shows that most people take a 70% chance of rain to mean rain is almost certain, while a POP of 30% or less is taken as a “no rain” day. But what 70% really means is that, at each point where the weather service says the chance of rain is 70%, it will rain on 7 out of 10 such days. Generally, the probability of rain is the same over a wide area, so a 70% chance of rain at the airport is also a 70% chance at the courthouse in Monticello and at my house or yours. And while not officially measured, it is probable that at your house it will rain on 7 out of 10 days when the forecast is 70%, and on the other three days it will not. Those 7 days do not have to be the same 7 days that it rains at the airport, although I suspect they usually are. Remember, this has NOTHING to do with how hard it will rain. And it does not mean that it will rain over 70% of the area.
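The “7 out of 10 such days” idea can be checked with a tiny simulation. This is only an illustration, not actual forecast data: we invent 1,000 days on which the forecast POP is 70% and let it rain with that probability each day; a well-calibrated forecast should then verify rain on roughly 700 of them.

```python
import random

random.seed(42)

# Hypothetical illustration (not NWS data): 1,000 days, each with a
# forecast POP of 70%. On a well-calibrated forecast, rain occurs on
# about 70% of those days.
pop = 0.70
days = 1000
rainy_days = sum(random.random() < pop for _ in range(days))
print(f"Rain fell on {rainy_days} of {days} days ({rainy_days / days:.0%})")
```

Run it and the fraction comes out close to 70%, even though on any single day it either rains or it does not.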
So, the next question is, how well does the NWS do? To examine this, I looked at the forecasts for the month of June 2008, a month where it rained some days and not others. The verification is binary, either 0% or 100%; that is, either it rained or it did not. A standard way of assessing forecast skill is the Brier score, which is the square of the difference between the forecast probability and the verification, both expressed as decimals, averaged over all forecasts. Thus a perfect forecast of rain or no rain (100% or 0%, respectively) would yield a Brier score of zero. Conversely, if you were wrong every day, the average would be 1; so the range is between zero (perfect) and one (worse than poor). The Tallahassee office got a score of 0.166. To put this in context, we can look at several other scenarios. If they had used climatology, the score would have been 0.267 (not as good). If they had gone out on a limb and said it was either going to rain (a POP greater than 50% becomes a POP of 100%) or not rain (a POP less than 50% becomes a POP of 0%), the score would have been an intermediate 0.233, which is not as good as a more refined estimate – so hedging your bets works. If they had used persistence, that is, forecast rain if it rained the day before and no rain if it did not, then the score would (coincidentally) also have been 0.233. If they had just guessed a 50% chance every day, the score would have been 0.25. The last three numbers require no skill, so the value of 0.166 is an indication of skill. The forecast scores will vary from month to month and from forecast office to forecast office; just think how “difficult” it would be in Phoenix, where it rarely rains. Our office does well in one of the more challenging forecast areas of the country.
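The Brier score comparison above is easy to reproduce. Here is a minimal sketch using a made-up 10-day record (the actual June 2008 forecasts and verifications are not shown in this article), scoring the hedged POP forecast against the rounded “yes/no” version and the no-skill 50% guess:

```python
# Hypothetical 10-day record (NOT the actual June 2008 data):
# forecast POPs as decimals, and 1 if rain verified, 0 if not.
forecasts = [0.7, 0.3, 0.9, 0.2, 0.5, 0.8, 0.1, 0.6, 0.4, 0.7]
observed  = [1,   0,   1,   0,   1,   1,   0,   0,   0,   1]

def brier(fcst, obs):
    """Mean squared difference between forecast probability and outcome."""
    return sum((f - o) ** 2 for f, o in zip(fcst, obs)) / len(obs)

# The hedged probabilistic forecast.
print(f"POP forecast:    {brier(forecasts, observed):.3f}")   # 0.114

# "Going out on a limb": round every POP to 0% or 100%.
yes_no = [1.0 if f > 0.5 else 0.0 for f in forecasts]
print(f"Yes/no forecast: {brier(yes_no, observed):.3f}")      # 0.200

# Guessing 50% every day always scores exactly 0.25.
print(f"Always 50%:      {brier([0.5] * len(observed), observed):.3f}")
```

Even with invented numbers, the ranking matches the article’s point: because errors are squared, the hedged probabilities beat the all-or-nothing version, and both beat the constant 50% guess.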
However, I really don’t like the squaring of the difference, and prefer just taking the absolute value of the difference between what happened and the forecast, and computing the average error. By that measure, the average forecast error (properly stated) was 35%, and if you use the “yes/no” forecast it would have been 23% – which is better, but involves no hedging of your bets. The error using climatology would have been 50% (not good). Further analysis showed that for this month, on average, they were 10% too optimistic about the occurrence of rain.
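The same sketch shows why the choice of score matters. Using the identical hypothetical 10-day record as in the Brier example above (again, not the real June 2008 data), the mean absolute error ranks the two strategies the other way around:

```python
# Same hypothetical record as before (NOT actual NWS verification data).
forecasts = [0.7, 0.3, 0.9, 0.2, 0.5, 0.8, 0.1, 0.6, 0.4, 0.7]
observed  = [1,   0,   1,   0,   1,   1,   0,   0,   0,   1]

def mean_abs_error(fcst, obs):
    """Average absolute difference between forecast and outcome."""
    return sum(abs(f - o) for f, o in zip(fcst, obs)) / len(obs)

yes_no = [1.0 if f > 0.5 else 0.0 for f in forecasts]

# Under absolute error, the rounded yes/no forecast beats the hedged
# probabilities -- the opposite of the Brier-score ranking above.
print(f"POP forecast:    {mean_abs_error(forecasts, observed):.2f}")  # 0.30
print(f"Yes/no forecast: {mean_abs_error(yes_no, observed):.2f}")     # 0.20
```

In other words, squaring the error rewards hedging, while the absolute error rewards committing to a yes/no call – exactly the trade-off described above.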
If you want to check the POP or any other forecast variable, you can go to the office’s website, http://www.srh.noaa.gov/TLH/ , click anywhere on the map on the front page, and find the POP (along with temperature, winds, and clouds) for that specific location.