(Continued from parts 1 and 2. These posts discuss some of what humans do when creating a river forecast; the previous posts covered how forecasters use models and then figure out what other information is useful.)
By this stage, I, the forecaster, would have a mental image of what I “really think” the future might be. The next question is then, “Is there some other information I could include to make this forecast more useful?” People love to hear analogues, such as “This flood is going to be like the one in 2009”. It helps put the forecast in context… In 2009, the flood came up to the top of the doorway on the first floor and stayed that way for about six hours before dropping. Therefore, I should move my valuables to the second floor and pack a day’s worth of food. As the saying often attributed to Mark Twain goes, though, “History does not repeat itself, but it does rhyme”. So while analogues are useful, they have limitations.
This kind of supporting information helps people put the forecast in context, but it can also build trust in the forecast. Nearly all weather forecasts can be linked somehow to a “briefing discussion”. This can be a few non-technical words about the weather situation, or it can be a full-blown, jargon- and acronym-filled account of the thought process that went into the forecasts. Strangely, such written discussions are commonly in all capitals (a holdover from teletype days) and might read like: “FOR SON THROUGH FMA WE USED CCA - OCN - TRENDS AND COMPOSITES. THE WEIGHT OF THE LA NINA COMPOSITES WAS GREATLY REDUCED FOR FMA AND MAM - ESPECIALLY FOR PRECIPITATION - AND WAS NOT USED THEREAFTER.”
Armed with this kind of information, the user can decide whether he accepts the forecaster’s rationale. Perhaps more importantly, he can decide whether the forecast could be improved by adding extra information. For example, if I were a user in Manila, I would be thinking that the previous typhoon saturated the soils, so the next typhoon is going to produce a lot of runoff. Do the forecasts already have that built in? Would nudging up the official forecasts be double-dipping? That kind of thing happened to me as a forecaster, but in the opposite direction. My first year of forecasting was the year after an epic drought, and the soils were parched. My models didn’t have that built in, so I lowered the forecasts from what they would otherwise have been. Then the people who were delivering the forecasts were recommending to users something to the effect of, “The official forecast is this, but we don’t know just how big the soil moisture effect is going to be, so you might consider preparing for an even lower number.”
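To see why double-dipping matters, here is a toy sketch (all of the numbers and the adjustment factor are mine, purely hypothetical) of the same soil-moisture judgment being applied twice before the forecast reaches the user:

```python
# Toy illustration of "double-dipping": the same soil-moisture correction
# gets applied twice, once by the forecaster and once again downstream.
# Every number here is hypothetical.

model_peak_m = 4.0      # raw model peak stage in metres
dry_soil_factor = 0.8   # forecaster's judgment: parched soils soak up ~20%

# The forecaster has already folded the drought effect into the official number...
official_forecast_m = model_peak_m * dry_soil_factor       # 3.2 m

# ...so if users then also act on advice to "prepare for an even lower
# number" for the same reason, the correction is counted twice:
double_counted_m = official_forecast_m * dry_soil_factor   # about 2.6 m
print(official_forecast_m, double_counted_m)
```

A transparent briefing discussion is what lets the user tell the difference between an effect the forecaster has already accounted for and one that still needs to be layered on.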
There is much less of a tradition of such written “thinking out loud” briefing discussions among hydrologists than there is among weather forecasters. Personally, I feel that hydrologists should be more transparent about their human adjustments to the final products. In my experience, however, this does not happen; it is not that the forecasters have anything to hide, it is simply something they don’t have time for (or at least the forecast documentation process is not currently streamlined enough to make it practical).
The final question is, “Could a tweak to the forecast result in people making a better decision?” This is the most controversial question of all, and opinions in the forecasting community vary widely. The ultimate goal of nearly all forecasting is to have a positive effect on people’s decisions. However, distorting or exaggerating a forecast to catch people’s attention is a major risk.
In his seminal paper on what makes a good forecast, Allan Murphy said that a good forecast is one that is identical to the forecaster’s internal belief. In other words, if anyone were to ask, “Yeah, that’s the official forecast, but what do you really think is going to happen?”, the best answer is “There’s no difference”. Murphy’s context was devising scores of how good forecasts are, and his point was that there shouldn’t be any way for the hydrologist to “game” the forecasts to get a better score. For example, if you were scored on how often you “cried wolf”, you could get a perfect score by always keeping your mouth shut, even if a pack of wolves were in the process of viciously devouring you.
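This is the idea behind what statisticians call a “proper” scoring rule. As a minimal sketch (my example, not from Murphy’s paper), the expected Brier score for a yes/no event is best exactly when the issued probability equals the forecaster’s true belief, while a score that only counts false alarms is gamed by staying silent:

```python
# Minimal sketch of why a proper score can't be gamed. The Brier score for a
# binary event with outcome o in {0, 1} and issued probability p is (p - o)^2,
# lower being better. If the forecaster truly believes the event has
# probability q, the *expected* score for issuing p is:

def expected_brier(p, q):
    """Expected Brier score when issuing probability p under true belief q."""
    return q * (p - 1.0) ** 2 + (1.0 - q) * p ** 2

q = 0.7  # what the forecaster "really thinks"
for p in (0.0, 0.5, 0.7, 1.0):
    print(f"issue p={p:.1f} -> expected Brier {expected_brier(p, q):.3f}")
# Prints 0.700, 0.250, 0.210, 0.300: the minimum sits exactly at p == q,
# so the best strategy is to report your honest belief.

# By contrast, a score that only counts false alarms is perfected by never
# issuing a warning at all, which is exactly the cried-wolf problem above.
```

Scores with this property remove any incentive to hedge the official number away from what the forecaster really thinks.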
Consider the case of Manila, though. What would I do if I were in a situation where a direct hit on a major metropolitan city would be catastrophic, but the rainfall forecast called for a near-miss?… And what if, last year, a similar situation had led to a forecast bust and someone like me was demoted? What would I be thinking? Would I be tempted to nudge the model output up a bit in forecasts for major cities, just because the potential for loss was so great? What if I thought people paid attention only when the forecasts were very high? What if, in my heart of hearts, I didn’t think the river would get to the critical level, but I would feel consumed with guilt and regret if it did get that high?
A seasoned hydrologist once told me that there is a “big, big difference between flood forecasting and flood warning”. In essence, forecasting is trying to get the numbers right; warning is trying to make a difference in people’s lives.
Saint Thomas Aquinas is also credited with a (paraphrased) observation that throughout our lives we shout our message out to the hillside; we cannot know how it echoes around in each valley or how it sounds to whatever listeners are out there. Similarly, if I were to nudge the forecast towards Manila, I would be nudging it away from those in the most likely path. What about them? Shouldn’t they also be warned? There are costs to over-preparing, and we’re talking about real money here. Worse yet, what if I nudged the forecast so high that the citizens decided there was no hope against such a big flood and abandoned neighborhoods that could have been defended?