
A pre-hurricane climate change analysis gets major revision after the storm, Ars Technica


      Just a bit outside –


Effort had predicted half of Hurricane Florence’s rainfall was due to warming.





It was the first scientific attempt of its kind: assessing the impact of climate change on a hurricane before the storm had even made landfall. And the results, which we covered at the time, were striking: the analysis attributed roughly half of Hurricane Florence's forecast rainfall to human-caused warming.

Increased rainfall would hardly be a surprise. Results from many previous tropical cyclones have found that a warmer atmosphere, which holds more moisture, is expected to boost storm precipitation totals. But 50 percent would be exceptional, as previous studies had fallen somewhere between 6 and 82 percent, depending on the storm.

The scientists weren’t able to explain why they got that high number at the time, considering they had only a few days to get the model forecast simulations run and out the door. With the benefit of time, the scientists have now published an evaluation of their groundbreaking effort. Unfortunately, it shows that mistakes were made.

The initial work was based on 15 simulations each of two versions of the world: the actual conditions at the time and a counterfactual world with the warming trend removed (in this case, taking 0.9°C off of ocean surface temperatures in the area). The difference between these “actual” and “counterfactual” runs was the influence attributed to climate change, with the spread between the runs providing some error bars.

To revisit this, the researchers repeated the experiment but with 728 simulations for each scenario. Collections of repeat simulations, called “ensembles,” are done by varying some of the uncertain parameters in the model. The more combinations of parameters you have, the more you fill out the range of possible outcomes. This firms up the error bars and ensures you aren’t missing part of what the model is predicting.
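The actual-versus-counterfactual comparison above can be sketched in a few lines of Python. This is a toy illustration only: the real study ran full hurricane forecast models, while here random numbers stand in for simulation output, and all the means and spreads are made-up values chosen just to show the mechanics of an ensemble attribution.

```python
# Toy sketch of ensemble-based attribution. Random draws stand in for
# full model runs; all numbers are illustrative, not the study's.
import random

random.seed(42)

def run_ensemble(mean_rain_cm, spread_cm, n_members):
    """Stand-in for n_members model runs, each returning a rainfall total."""
    return [random.gauss(mean_rain_cm, spread_cm) for _ in range(n_members)]

# "Actual" world vs. a cooler "counterfactual" world with warming removed.
actual = run_ensemble(mean_rain_cm=95.0, spread_cm=8.0, n_members=728)
counterfactual = run_ensemble(mean_rain_cm=90.0, spread_cm=8.0, n_members=728)

mean_actual = sum(actual) / len(actual)
mean_counter = sum(counterfactual) / len(counterfactual)

# The influence attributed to climate change is the difference between the
# two ensembles, expressed as a fraction of the counterfactual rainfall.
attributed_pct = 100 * (mean_actual - mean_counter) / mean_counter
print(f"Rainfall attributed to warming: {attributed_pct:.1f}%")
```

The key design point is that the attribution is always a difference between two ensemble averages, so its reliability depends on how well each ensemble's average is pinned down, which is exactly why ensemble size matters.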

With this done, the researchers could compare the model forecast scenarios to what actually happened when Hurricane Florence dumped torrential rains on the Carolinas in September 2018. The “actual” forecast simulations did their job, matching the timing and location of landfall. The precipitation forecast was also good, with maximum precipitation totals averaging 95.3 centimeters in the simulations, compared to the 82.3 centimeters measured in the real world.

However, the researchers discovered a problem with the way their “counterfactual” simulations had originally been set up. An error caused their intended 0.9°C cooling of ocean-surface temperatures to grow by an additional 1-3°C off the Carolinas. That set up a much larger contrast with the current world, and it turns out to be the reason the numbers they released back in 2018 seemed so extreme.

After fixing that error, their “counterfactual” simulations show a much smaller influence of climate change. Rather than something like 56 percent of the rainfall being the result of a warmer world, the models actually show about five percent (and that's ±5). And rather than a storm that is 95 kilometers wider because of climate change, it was about nine kilometers (±6) wider.

Obvious “oops” aside, there is one more thing the researchers learned from this analysis. To test out the impact of only running 16 simulations instead of 728, they ran the numbers on many random sets of 16. While the averages obviously tended to be similar, the error bars on a set of 16 are much wider.


The 95 percent confidence range on storm size due to climate change using all 728 simulations is 3.1 to 15.3 kilometers, for example. That range when using 16 simulations grows to -8.6 kilometers to 28.5 kilometers (that is, some sets would predict the storm would actually be smaller). So at least in this case, not having enough time to run more simulations means you'll be stuck with obnoxiously large error bars.
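The subsampling check described above is easy to reproduce in miniature: draw many random 16-member subsets from a large ensemble and watch how widely their averages scatter compared to the full-ensemble estimate. The numbers below are synthetic stand-ins (a spread chosen so a 16-member average lands roughly ±6 km around nine), not the study's actual model output.

```python
# Minimal sketch of the subsampling experiment: many random sets of 16
# drawn from a 728-member ensemble. All values are synthetic stand-ins.
import random
import statistics

random.seed(0)

# Pretend each value is the climate-change-attributed storm-size change
# (km) from one simulation pair, centered near the revised ~9 km result.
full_ensemble = [random.gauss(9.0, 25.0) for _ in range(728)]
full_mean = statistics.mean(full_ensemble)

small_means = []
for _ in range(1000):
    subset = random.sample(full_ensemble, 16)
    small_means.append(statistics.mean(subset))

# With only 16 members, individual estimates scatter widely -- some even
# come out negative (a "smaller" storm), just as the researchers found.
lo, hi = min(small_means), max(small_means)
print(f"Full ensemble mean: {full_mean:.1f} km")
print(f"16-member estimates ranged from {lo:.1f} to {hi:.1f} km")
```

The averaging error of an ensemble mean shrinks like one over the square root of the ensemble size, so going from 16 to 728 members tightens the estimate by a factor of about 6.7, which is the statistical reason the rushed 15-run analysis carried such wide error bars.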

The researchers do point out that each situation is a little different, and it's not as simple as saying that some fixed number of simulations is required. It may take more examples to work out a recommended approach to these ultra-quick assessments.

They also put a surprisingly happy face on their results. The researchers write:

We demonstrated that a forecasted attribution analysis using a conditional attribution framework allows for credible communication to be made on the basis of sound scientific reasoning. Post-event expansion of the ensemble size and analysis demonstrated it to be reasonable, albeit with some quantitative modification to the best estimates and the opportunity to more rigorously evaluate the significance of the analysis.

After all, the big mistake here was avoidable, even if such mistakes are more likely in a rush. And while the error bars would be large, the method can at least say something interesting. Whether there's sufficient value in getting a less reliable answer faster is another question.

Science Advances, 2020. DOI: 10.1126/sciadv.aaw9253
