Friday, November 27, 2020

How Google researchers used neural networks to make weather forecasts, Ars Technica


The rain in Maine, explained

Google says its forecasts are better than existing methods — but only for 6 hours.

      

Jan 2020 (UTC)

  


A research team at Google has developed a deep neural network that can make fast, detailed rainfall forecasts.

The researchers say their results are a dramatic improvement over previous techniques in two key ways. One is speed. Google says that leading weather forecasting models today take one to three hours to run, making them useless if you want a weather forecast an hour in the future. By contrast, Google says its system can produce results in less than 10 minutes — including the time to collect data from sensors around the United States.

This fast turnaround time reflects one of the key advantages of neural networks. While such networks take a long time to train, it takes much less time and computing power to apply a neural network to new data.

A second advantage: higher spatial resolution. Google’s system breaks the United States down into squares 1km on a side. Google notes that in conventional systems, by contrast, “computational demands limit the spatial resolution to about 5 kilometers.”

Put these together and you could have a forecasting system that’s much more useful for short-term decision-making. If you’re thinking about going for a bike ride, for example, you’d be able to look up a minute-by-minute rainfall forecast for your specific route. Today’s conventional weather forecast, by contrast, might just tell you that there’s some chance of precipitation in your town over the next couple of hours.

This animation compares a real-world weather pattern (center) to a conventional weather forecast (left) and Google’s own forecast (right). Google’s forecast has significantly more detail in both time and space.

Google

Google says that its forecasts are more accurate than conventional weather forecasts, at least for time periods under six hours.

“At these short timescales, the evolution is dominated by two physical processes: advection for the cloud motion, and convection for cloud formation, both of which are significantly affected by local terrain and geography,” Google writes.

Beyond that, however, things start to break down. For longer time periods, conventional physics-based modeling still produces more accurate forecasts, Google admits.

How Google’s neural network works

Interestingly, Google’s model is “physics-free”: it isn’t based on any a priori knowledge of atmospheric physics. The software doesn’t try to simulate atmospheric variables like pressure, temperature, or humidity. Instead, it treats precipitation maps as images and tries to predict the next few images in the series based on previous snapshots.
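This "physics-free" framing can be made concrete with a small sketch. The code below is an illustration, not Google's actual pipeline: all array sizes are made up, and the "model" is the trivial persistence baseline of repeating the latest map, which is exactly the kind of mapping a trained network would improve on.

```python
import numpy as np

# Hypothetical illustration: nowcasting framed as image-to-image prediction.
# A precipitation map is just a 2D grid of rainfall values; the model's job
# is to map a stack of recent maps to a stack of future maps, with no
# atmospheric physics encoded anywhere.

H, W = 64, 64            # grid cells (made-up size; Google uses 1 km squares)
past_frames = 6          # snapshots covering the previous hour
future_frames = 6        # snapshots to predict for the next hour

# Input: past precipitation maps stacked like channels of an image.
x = np.random.rand(past_frames, H, W)

# "Persistence" baseline: predict that the most recent map simply repeats.
y_pred = np.repeat(x[-1:], future_frames, axis=0)

print(x.shape, y_pred.shape)   # (6, 64, 64) (6, 64, 64)
```

A learned model replaces the `np.repeat` line with a function whose parameters were fitted to historical radar data.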

It does this using convolutional neural networks, the same technology that allows computers to correctly label images. You can read our deep dive on CNNs here.
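For intuition about the convolution operation at the heart of a CNN, here is a minimal sketch (illustrative code, not from Google's system): a small filter slides across an image, and the output records how strongly each neighborhood matches the filter's pattern.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2D convolution (really cross-correlation, as in most CNNs)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((5, 5))
image[:, 2] = 1.0                      # a vertical stripe of "rain"
edge_filter = np.array([[-1.0, 1.0]])  # responds to left-to-right increases

response = conv2d(image, edge_filter)
print(response.shape)                  # (5, 4)
```

The response is large and positive where rain intensity jumps up and negative where it drops off; a real CNN learns thousands of such filters from data rather than hand-coding them.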

Specifically, it uses a popular neural network architecture called a U-Net that was first developed for segmenting medical images. The U-Net has several layers that downsample an image from its initial high-resolution shape, producing a lower-resolution image where each “pixel” represents a larger region of the original image. Google doesn’t explain the exact parameters, but a typical U-Net might convert a 256-by-256 grid to a 128-by-128 grid, then convert that to a 64-by-64 grid, and finally a 32-by-32 grid. While the number of pixels is declining, the number of “channels” — variables that capture data about each pixel — is growing.

Experience has shown that this downsampling process helps a neural network identify high-level features of an image. Values inside a neural network are never easy to interpret explicitly, but the resulting low-resolution grid might implicitly capture important variables like temperature or wind speed in each region of the image.

The second half of the U-Net then upsamples this compact representation, step by step, back to the resolution of the original image. At each step, the network copies over the data from the corresponding downsampling step. The practical effect is that the final layer of the network has both the original full-resolution image and summary data reflecting high-level features inferred by the neural network.
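The upsample-and-copy idea can be sketched as follows. The shapes and the nearest-neighbour upsampler are illustrative assumptions, not Google's configuration; the key point is the concatenation, which is the U-Net's "skip connection."

```python
import numpy as np

def upsample_2x(img):
    """Nearest-neighbour upsample: (C, H, W) -> (C, 2H, 2W)."""
    return img.repeat(2, axis=1).repeat(2, axis=2)

coarse = np.random.rand(32, 32, 32)   # bottleneck: many channels, few pixels
skip = np.random.rand(16, 64, 64)     # saved from the matching encoder stage

up = upsample_2x(coarse)                     # (32, 64, 64)
merged = np.concatenate([up, skip], axis=0)  # (48, 64, 64)
print(merged.shape)                          # (48, 64, 64)
```

After the merge, later layers see both the coarse high-level summary and the fine detail preserved from the downsampling path.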

To produce a weather forecast, the network takes an hour’s worth of previous precipitation maps as inputs. Each map is a “channel” in the input image, just as a conventional image has red, blue, and green channels. The network then tries to output a series of precipitation maps reflecting the precipitation over the next hour.

Like any neural network, this one is trained with past real-world examples. Thousands of past real-world weather patterns are fed into the network, and the training software tweaks the network’s many parameters to more closely approximate the correct results for each training example. After repeating this process millions of times, the network gets pretty good at approximating future precipitation patterns for data it hasn’t seen before.
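The training process described above can be sketched with a toy model. The example below fits a single weight per input frame by gradient descent on synthetic data; it is a stand-in for the real training loop, which tweaks millions of parameters the same basic way.

```python
import numpy as np

# Toy sketch of training by example (not Google's code): repeatedly nudge
# parameters to shrink the error between predicted and actual targets.

rng = np.random.default_rng(0)
X = rng.random((1000, 6))             # 1000 examples, 6 past-frame features
true_w = np.array([0.05, 0.05, 0.1, 0.1, 0.2, 0.5])
y = X @ true_w                        # synthetic "future rainfall" targets

w = np.zeros(6)                       # model parameters, initially untrained
lr = 0.1                              # learning rate
for _ in range(2000):                 # many passes over the training data
    grad = X.T @ (X @ w - y) / len(X) # gradient of mean squared error
    w -= lr * grad                    # tweak parameters toward the targets

print(np.round(w, 2))                 # close to true_w after training
```

A deep network swaps the linear prediction `X @ w` for layered convolutions and computes the gradient by backpropagation, but the loop structure is the same.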

