To assess a community's risk of extreme weather events, policymakers first rely on global climate models, which can be run decades or even centuries into the future, but only at a coarse resolution. For example, these models might be used to estimate future climate conditions across the northeastern United States, but not specifically for Boston.

To estimate Boston's future risk of extreme weather events such as flooding, policymakers can combine the large-scale predictions of a coarse climate model with a finer-resolution model tuned to estimate how often Boston is likely to experience damaging floods as the climate warms. But this risk analysis is only as accurate as the predictions from that first, coarser climate model.

“If you get the wrong answer for the large-scale environment, then you miss everything in terms of what extreme events will look like at smaller scales, such as over individual cities,” says Themistoklis Sapsis, the William I. Koch Professor and director of the Center for Ocean Engineering in the Department of Mechanical Engineering at MIT.

Sapsis and his colleagues have now developed a method to “correct” the predictions of coarse climate models. By combining machine learning with dynamical systems theory, the team’s approach “transforms” a climate model’s simulations into more realistic patterns over large scales. When paired with smaller-scale models of specific weather events, such as tropical cyclones or floods, the team’s approach produced more accurate predictions of how often particular locations will be affected by those events over the next few decades, compared with forecasts made without the correction scheme.

According to Sapsis, the new correction scheme is general and can be applied to any global climate model. Once corrected, the models can help determine where and how often extreme weather events will occur as global temperatures rise over the coming years.

“Climate change will impact every aspect of human life and every form of life on the planet, from biodiversity to food security to the economy,” says Sapsis. “If we can understand exactly how extreme weather will change, particularly over specific locations, it can make a big difference in terms of preparation and developing the right technology for solutions. This is the method that can pave the way there.”

The team’s results are published today. MIT co-authors of the study include postdoc Benedikt Barthel Sorensen and Alexis-Tzianni Charalampoulos SM ’19, PhD ’23, along with Shixuan Zhang, Bryce Harrop, and Ruby Leung of the Pacific Northwest National Laboratory in Washington state.

Under the hood

Today’s large-scale climate models simulate weather features such as average temperature, humidity, and precipitation around the world on a grid-by-grid basis. Running simulations of these models takes enormous computing power, and in order to simulate how weather features interact and evolve over periods of decades or longer, the models average features together over grid cells roughly 100 kilometers wide.
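
To make that scale concrete, the snippet below is a minimal sketch (our illustration, not code from the study) of the kind of spatial averaging involved: a high-resolution field is block-averaged onto a much coarser grid, so any structure smaller than a grid cell is smoothed away. All array sizes are illustrative.

```python
import numpy as np

def coarse_grain(field: np.ndarray, block: int) -> np.ndarray:
    """Average a 2D field over non-overlapping block x block cells."""
    ny, nx = field.shape
    ny, nx = ny - ny % block, nx - nx % block   # trim to a multiple of block
    trimmed = field[:ny, :nx]
    return trimmed.reshape(ny // block, block, nx // block, block).mean(axis=(1, 3))

# Example: a 1000 x 1000 field at ~1 km spacing averaged onto ~100 km cells.
fine = np.random.default_rng(0).normal(size=(1000, 1000))
coarse = coarse_grain(fine, block=100)
print(coarse.shape)   # (10, 10)
```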

“It is a very demanding calculation that requires supercomputers,” notes Sapsis. “But these models still don’t resolve very important processes, such as clouds or storms, that occur at smaller scales of a kilometer or less.”

To improve the resolution of those coarse climate models, scientists have typically attempted to correct a model’s underlying dynamical equations, which describe how phenomena within the atmosphere and oceans should physically interact.

“People have tried to correct climate model codes that have been developed over the past 20 to 30 years, which is a nightmare, because you can lose a lot of stability in your simulation,” explains Sapsis. “What we do is a completely different approach, in that we are not trying to correct the equations but instead correct the model’s output.”

The team’s new approach takes a model’s output, or simulation, and overlays it with an algorithm that nudges the simulation toward something that more closely reflects real-world conditions. The algorithm is based on a machine learning scheme that takes in data, such as historical records of temperature and humidity around the world, and learns relationships within the data that represent fundamental dynamics among weather features. The algorithm then uses these learned associations to correct a model’s predictions.
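
As a rough illustration of the idea, and not the authors' actual scheme, the sketch below fits a simple learned map from simulated fields to observed fields on historical data, then applies it to new simulation output. A linear ridge regression stands in for the machine learning model; the real method learns far richer, nonlinear dynamics. All data here are synthetic.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-ins: each row is one time snapshot, each column one grid
# point of a weather feature (e.g., temperature) flattened into a vector.
n_snapshots, n_gridpoints = 500, 64
model_output = rng.normal(size=(n_snapshots, n_gridpoints))   # biased simulation
observations = 0.8 * model_output + 0.5 + 0.1 * rng.normal(size=(n_snapshots, n_gridpoints))

# Learn a map from simulated fields to observed fields on historical data.
corrector = Ridge(alpha=1.0).fit(model_output, observations)

# Apply the learned correction to fresh, unseen simulation output.
new_simulation = rng.normal(size=(10, n_gridpoints))
corrected = corrector.predict(new_simulation)
print(corrected.shape)   # (10, 64): one corrected field per snapshot
```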

“We’re trying to correct dynamics, such as what an extreme weather feature, like the wind speeds during a Hurricane Sandy event, will look like in the coarse model versus in reality,” says Sapsis. “The method learns dynamics, and dynamics are universal. Having the correct dynamics eventually leads to correct statistics, for example, the frequency of rare extreme events.”

Climate correction

As a first test of their new approach, the team used the machine learning scheme to correct simulations produced by the Energy Exascale Earth System Model (E3SM), a U.S. Department of Energy climate model that simulates climate patterns around the world at a resolution of 110 kilometers. The researchers used eight years of historical data on temperature, humidity, and wind speed to train their new algorithm, which learned dynamical relationships between the measured weather features and the E3SM model. They then ran the climate model forward for about 36 years and applied the trained algorithm to the model’s simulations. They found that the corrected version produced climate patterns that were more consistent with real-world observations from the last 36 years, which were not used for training.
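
The evaluation logic can be sketched in a few lines, again with synthetic data and a deliberately simple linear correction standing in for the trained algorithm: fit the correction on one historical window, then check whether the corrected simulation's long-run statistics, such as a high temperature percentile, land closer to held-out observations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily temperatures: the "model" runs warm-biased and too smooth.
obs = rng.normal(loc=15.0, scale=8.0, size=36 * 365)            # 36 years of observations
sim = 0.7 * obs + 10.0 + rng.normal(scale=2.0, size=obs.size)   # biased simulation

train = slice(0, 8 * 365)     # eight years used to fit the correction
test = slice(8 * 365, None)   # held-out years, used only for evaluation

# Fit a simple linear correction sim -> obs on the training window.
slope, intercept = np.polyfit(sim[train], obs[train], deg=1)
corrected = slope * sim[test] + intercept

# Compare a tail statistic (99th percentile, a stand-in for "extremes").
for name, series in [("observed", obs[test]), ("raw model", sim[test]), ("corrected", corrected)]:
    print(f"{name:>10} 99th percentile: {np.percentile(series, 99):.1f}")
```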

“We’re not talking about big differences in absolute numbers,” says Sapsis. “An extreme event in the uncorrected simulation might be 105 degrees Fahrenheit, versus 115 degrees with our corrections. But for people who experience this, it makes a big difference.”

When the team then combined the corrected coarse model with a specific, finer-resolution model of tropical cyclones, they found the approach accurately reproduced the frequency of extreme storms in specific locations around the world.
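
In spirit, that combined pipeline looks something like the toy example below, in which a hypothetical threshold-based event model (our stand-in, not the study's cyclone model) is fed corrected large-scale conditions at one location, and event frequency is simply counted. All thresholds and distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def event_model(wind_shear, sea_surface_temp):
    """Toy proxy: conditions are cyclone-favorable when vertical wind shear
    is low and the ocean surface is warm. Thresholds are illustrative."""
    return (wind_shear < 8.0) & (sea_surface_temp > 26.5)

years = 30
days = years * 365

# Stand-ins for corrected coarse-model output at a single location.
shear = rng.gamma(shape=4.0, scale=2.5, size=days)   # m/s
sst = rng.normal(loc=26.0, scale=1.5, size=days)     # degrees C

favorable_days_per_year = event_model(shear, sst).sum() / years
print(f"estimated cyclone-favorable days per year: {favorable_days_per_year:.1f}")
```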

“We now have a coarse model that can get you the right frequency of events for the present climate. It’s become a lot better,” says Sapsis. “Once we correct the dynamics, this is a relevant correction even when the global average temperature is different, and it can be used to understand what wildfires, floods, and heat waves will look like in a future climate. Our ongoing work is focused on analyzing future climate scenarios.”

“The results are particularly impressive as the method shows promising results on E3SM, a state-of-the-art climate model,” says Pedram Hassanzadeh, an associate professor who leads the Climate Extremes Theory and Data group at the University of Chicago and was not involved in the study. “It would be interesting to see what climate change projections this framework produces once future greenhouse-gas emission scenarios are incorporated.”

This work was supported, in part, by the U.S. Defense Advanced Research Projects Agency.

This article was originally published at news.mit.edu