
M

Minimum Error Variance

by Brian Etherton - Thursday, July 20, 2006, 2:56 PM
 
One of the central equations of data assimilation is the Kalman filter equation:

x(a) - x(f) = P H^T [ H P H^T + R ]^(-1) [ d - H x(f) ]

Here x(f) is the forecast (first guess), x(a) is the analysis, P is the background error covariance matrix, R is the observation error covariance matrix, H is the observation operator that maps the model state into observation space, and d is the vector of observations. x(a) and x(f) are vectors containing every model variable at every model gridpoint.

P gives information about the errors of the model first-guess field.
R gives information about the errors of the observations (instrument and representativeness errors).

This equation, and the notion of 'covariance matrices', is often a little overwhelming. It can be simpler to consider the case of trying to best estimate the value of a single variable.

For just one variable...
T(a) - T(f) = [ s(f)^2 / (s(o)^2 + s(f)^2) ] * [ T(o) - T(f) ]

Each term in the one-variable equation has a counterpart in the full-blown Kalman filter equation: T(a) is the analysis value (best estimate), matching x(a); T(f) is the model first guess, matching x(f); and so on.

Consider the temperature of the air at 500 mb over Denver, Colorado.

To estimate this value, T(a) ('a' for analysis), we have two estimates:

#1 is the value as reported from the DNR sounding.
An observation

I'll call that T(o) ('o' for observed). From known history, the average error of these observations has an error variance s(o)^2.

#2 is the value from the model first guess field
Model First Guess
I'll call that "T(f)" (f for 'first guess') Using known history, the average error of these observations has an error variance s(f)**2

Here is where the graphic comes in!

The graphic I am envisioning is rather similar to this one:

Two become one

This graph would be re-worked so that instead of FCST A and FCST B, we have the first guess (T(f)) and the observation (T(o)).

The applet I am envisioning is something that can be changed from the image on the left to the image on the right.

Sliders (as well as text entries) will allow the user to adjust the values of T(f) and T(o) as well as the error variances s(f)^2 and s(o)^2. There would also be a box showing the value of T(a).

What if our forecast model were always perfect? In that case, the error variance of the first guess, s(f)^2, would be zero, and the equation would collapse to T(a) - T(f) = [T(o) - T(f)] * 0, or T(a) - T(f) = 0. The image would then show that T(a) and T(f) are the same value, and that the spread (the tails of the Gaussian curves in the figure above) would be zero.

Consider now the other extreme, where the observations are perfect (but the forecast is not). In that case s(o)^2 = 0, the weight becomes s(f)^2 / (0 + s(f)^2) = 1, and the equation becomes T(a) - T(f) = T(o) - T(f); thus T(a) = T(o).

Users could adjust the values of the error variances. The analysis value T(a) would move toward whichever estimate (T(o) or T(f)) had the smaller error variance associated with it. As the error variances were increased, the picture would look more like the right side of the above image; if they were decreased, it would look more like the left side.

Included with the estimate of T(a), there would be a Gaussian curve around it with variance equal to [ 1/s(o)^2 + 1/s(f)^2 ]^(-1), showing that when the error variances of the two estimates are small, the error variance (uncertainty) of the analysis is also small.
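
A similarly hedged sketch, again with assumed numbers, shows how this analysis error variance behaves, including the limiting cases described above:

```python
def analysis_variance(var_f, var_o):
    """Error variance of the analysis: [ 1/s(f)^2 + 1/s(o)^2 ]^(-1)."""
    return 1.0 / (1.0 / var_f + 1.0 / var_o)

# Two estimates of equal quality: the uncertainty is cut in half.
print(analysis_variance(1.0, 1.0))    # 0.5

# A nearly perfect first guess (tiny s(f)^2): the analysis variance goes
# to zero, matching the limiting case where T(a) collapses onto T(f).
print(analysis_variance(1e-6, 1.0))   # ~1e-6
```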

This module will show the relationship between the error statistics of the two estimates (first guess and observation) and the analysis value.

N

NWP Misconceptions

by Dave Dempsey - Thursday, July 20, 2006, 4:59 PM
 
COMET Module: "Ten Common NWP Misconceptions" (2002-2003)

Ten common misconceptions about the way NWP models work and hence how they can be interpreted, plus explanatory corrections of those misconceptions, plus quite a bit of more or less related material.

Appropriate level: Advanced undergraduates and above.

Summary comments: This module is heavily illustrated, has audio narration, and is very light on text. (A print version substitutes text for the audio narration; animation is lost but static color graphics remain.) A "Test Your Knowledge" section, with feedback, ends the presentation of each misconception.

Each misconception is a kind of Trojan horse: the misconception is addressed directly, but it is also used as a springboard for presenting a good deal of broadly related material.

The misconceptions vary in their degree of obscurity, but at least some of them look potentially useful in a basic NWP course that doesn't focus exclusively on theory. However, to appreciate the misconceptions and the corrections to them, students already need to know the basics of how numerical models are formulated and used, and in some cases more than that--some of the misconceptions are quite specific, as the titles below probably suggest.

At least one of the misconceptions (I didn't examine all of them closely), "A 20 km Grid Accurately Depicts 40 km Features" (#3 below), makes important points about model resolution, but its examples cite values relevant to 2002-2003 era models and so are not as directly relevant today as they were then. Several "misconceptions" refer to the eta model, which fewer and fewer students will recognize. The general concepts remain highly relevant, however.

Misconceptions addressed include:
  1. The Analysis Should Match Observations
    (Presents a summary of observational platforms in a nice conceptual diagram. Raises the concept of assimilation cycling.)

  2. High Resolution Fixes Everything
    (Makes the point that model components work synergistically; improving resolution alone won't guarantee a better forecast.)

  3. A 20 km Grid Accurately Depicts 40 km Features
    (In addition to a discussion of spatial resolution, this section includes an animated graphic showing the effects of finite differencing on the phase speeds of sine waves of various wavelengths and speeds; see the short sketch after this list.)

  4. Surface Conditions are Accurately Depicted
    (Contains a long table summarizing the eta and Canadian GEM model surface fields; it would be nice if this could be updated to the WRF-NMM (the current NAM). Another section describes the effects of vegetation in a single-column model.)

  5. CP Schemes 1: Convective Precipitation is Directly Parameterized
    (Explanation of a convective sequence in nature and one in a non-convection-resolving model without a convective parameterization, using schematic soundings superimposed on cloud drawings; a comparison of adjustment and mass-flux convective parameterization schemes (Betts-Miller-Janjic and Kain-Fritsch in particular).)

  6. CP Schemes 2: A Good Synoptic Forecast Implies a Good Convective Forecast
    (Brief discussion of resolution, illustrated with a couple of diagrams; fine-tuning convective parameterization [CP] schemes; over- and under-active CP schemes; different schemes in the same model)

  7. Radiation Effects are Well-Handled in the Absence of Clouds
    (Discussion of the complexity of representing radiative processes in a model, including the effects of clouds and clear-sky biases in the eta model--hope this gets updated! Brief summary of how models address radiation processes in general. Includes an illustration of radiation processes in the atmosphere and at the earth's surface.)

  8. NWP Models Directly Forecast Near-Surface Variables
    (Adjustment of temperature from the lowest model level to 2 meters in the GFS model, illustrated; how this is done in other models, mentioned in very general terms; problems that can arise with this adjustment process; effect of vertical coordinate on the adjustment, including as examples the eta model, which is now out of date, and the Canadian GEM model, which uses a terrain-following sigma coordinate and is therefore still relevant to the WRF-NMM [the current NAM] below about 400 mb; effect of terrain representation--envelope, silhouette, mean--on the adjustment process.)

  9. MOS Forecasts Improve with Model Improvements
    (An introduction to MOS, including its development and implementation. Issues with rarer types of events; smoothing; advantages and disadvantages of MOS schemes; situations when MOS might produce a poor forecast.)
    MOS Development and Implementation Schematic Diagram

  10. Full-Resolution Model Data are Always Required
    (Comparisons of fields produced by AWIPS at 22, 40, and 80 km resolutions from eta model output; resolution vs. scale of atmospheric features, with animated graphic of sine waves; the issue of smoothing, illustrated with plots of specific humidity overlaid by temperature contours at 40 km resolution [unsmoothed] and 80 km resolution [smoothed]; vertical resolution, illustrated with tephigrams, which take some work to understand.)
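
To make the phase-speed point in misconception #3 concrete, here is a short Python sketch. It uses the standard result that second-order centered differencing of the 1-D advection equation slows a wave by the factor sin(k*dx)/(k*dx); the grid spacing, true wave speed, and wavelengths below are assumed values for illustration, not numbers taken from the module:

```python
import numpy as np

dx = 20e3          # grid spacing (m), e.g. a 20 km grid
c_true = 10.0      # true phase speed (m/s), an assumed value

for n_dx in [2, 4, 8, 16, 64]:            # wavelength in multiples of dx
    wavelength = n_dx * dx
    k = 2.0 * np.pi / wavelength          # wavenumber
    c_num = c_true * np.sin(k * dx) / (k * dx)   # numerical phase speed
    print(f"{n_dx:3d}*dx wave: numerical phase speed = {c_num:5.2f} m/s")

# The 2*dx wave (a 40 km feature on a 20 km grid) does not move at all,
# and short waves in general are slowed, which is the point of #3.
```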


O


Operational Model Matrix

by John Horel - Thursday, July 20, 2006, 2:57 PM
 
COMET Module: Operational Models Matrix

Appropriate Level: Upper division and graduate level

Overview

An excellent resource to contrast the basic features of U.S. (and one Canadian) operational models. Links to relevant COMET modules and other on-line resources are provided.

Information on many characteristics of the NMM-NAM remains to be added.

R

Resolution Applet

by Brian Etherton - Thursday, July 20, 2006, 11:00 AM
 

A significant concept in numerical weather prediction is resolution. The notion is that at higher resolution, more features can be represented by a model. However, this increased resolution comes at additional computational expense.

Take, for example, a tropical cyclone. Below is an image of Hurricane Fran, at 1km resolution.

Fran 1km

Note the structure that is visible: the details of the eye, for example.

I am envisioning an applet that allows the user to choose the resolution via some sort of slider. The image will be a radar-like image. This image, at 1 km resolution and covering a 512 km by 512 km domain, will clearly show the structure of the eyewall and of the outer rain bands.

Using the slider, one can choose to degrade the resolution. Choices for resolution will be 1 km, 2 km, 4 km, 8 km, 16 km, 32 km, 64 km, and 128 km. Thus, at the coarsest resolution there are only 16 pixels, whereas at 1 km resolution there are 262,144 pixels (512^2).

When the slider is moved from one resolution to another, the "radar" image changes to that resolution. I believe such images can be made using IDV by simply changing how many pixels are shown: one can choose to use them all, every 2nd, every 4th, etc. (Kelvin demo for LEAD).
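
Here is a minimal sketch of that every-Nth-pixel subsampling, using a synthetic random field in place of the actual Fran reflectivity data:

```python
import numpy as np

# Synthetic 512 x 512 field standing in for the 1 km "radar" image.
field_1km = np.random.rand(512, 512)

for step in [1, 2, 4, 8, 16, 32, 64, 128]:      # 1 km ... 128 km
    coarse = field_1km[::step, ::step]          # keep every step-th pixel
    print(f"{step:3d} km resolution: {coarse.shape[0]} x {coarse.shape[1]}"
          f" = {coarse.size} pixels")
```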

The changing image will show the consequences of resolution reduction: the eyewall structure will decay, and at some point the eye will no longer be discernible.

In addition to the image changing, there will be a text readout of the number of pixels. For example, at a resolution of 4 km, nx = 128, ny = 128, and the total number of pixels is 16,384.

Beside this number of pixels, there will be a 'model computing time'. For example, we could assume that at 128 km resolution the model takes 1 minute to run. Halving the grid spacing gives 4x as many gridpoints and requires time steps half as long, so each halving multiplies the run time by roughly 8: a 64 km run would then take about 8 minutes, and in the limit a 1 km run would take about 8^7, or roughly 2 million minutes (about four years). This model computing time would be expressed as a common clock face, with the elapsed time shaded in; for instance, if the model took 1 hour to run, the clock would read 1 pm, with the area of the clock from noon to the hour hand shaded.
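
A small sketch of that run-time readout, under the assumptions above (1 minute at 128 km, roughly 8x the work per halving of the grid spacing); the baseline time is an assumption, not a benchmark of any real model:

```python
base_dx_km, base_minutes = 128, 1.0   # assumed baseline: 1 minute at 128 km

dx, minutes = base_dx_km, base_minutes
while dx >= 1:
    # 4x as many gridpoints and 2x as many time steps per halving -> 8x cost
    print(f"{dx:4d} km grid: ~{minutes:,.0f} minutes of compute time")
    dx //= 2
    minutes *= 8
```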

The two dramatic visuals would be the image being sharper or more degraded, and the clock.

The idea is to incorporate, but improve upon, images such as this:

Resolution

An important point is to make sure that the false idea (misconception!) that a 1 km model will resolve a 1 km feature is not communicated. Not sure how to do that...


S


Spectral Wave Addition

by Pat Parrish - Thursday, July 20, 2006, 11:01 AM
 

A concept that is difficult to get across is how a seemingly random wave can be decomposed into a series of sine waves.

The image below is a little old, but gives a great visual of the process:

Wave Addition Example

For the 'mark 2' version, I envision an applet that looks like this one but has, perhaps, four lines. The top two lines would show sin(x) and cos(x) on the first line (much like the sin(2x) and sin(5x) on the top line of the above image) and sin(2x) and cos(2x) on the second line. The third line would be the sum of the four waves above, similar to the bottom line of the above image.

Each of the top four waves would have an 'amplitude slider', where one could alter the amplitude of each of the waves. For instance, if one set the amplitude of sin(x) to 1 and all the other waves (cos(x), sin(2x), cos(2x)) to 0, one would get that same sine wave back on the bottom panel. By changing the amplitudes of the four waves above, one can create increasingly complex patterns in their sum. I believe the sliders will need to be discretized, perhaps in increments of 0.25, to keep the number of possibilities low.
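
A minimal sketch of the arithmetic behind the sum panel, with four hypothetical slider settings hard-coded (a plotting library such as matplotlib could draw each line, but the wave addition itself is the point here):

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 200)

# Hypothetical slider settings, in increments of 0.25:
a_sin1, a_cos1, a_sin2, a_cos2 = 1.0, 0.5, 0.0, 0.25

# The bottom "sum" panel: a weighted combination of the four basic waves.
total = (a_sin1 * np.sin(x) + a_cos1 * np.cos(x)
         + a_sin2 * np.sin(2 * x) + a_cos2 * np.cos(2 * x))

# With a_sin1 = 1 and the other amplitudes 0, 'total' reduces to the
# original sine wave, as described above.
print(total[:5])
```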

To make this even more interactive, on the bottom (4th) line there would be some sort of pre-determined target structure. The goal would then be for the user to manipulate the amplitudes of the top four waves (sin(x), sin(2x), cos(x), cos(2x)) until the sum of those four waves matched the 4th line. We could have 'easy', 'moderate', and 'hard' options for this 4th line, with 'easy' being something like sin(x) + 0.5*cos(x) and 'hard' being a combination of all four of the above waves.

The 'hard' target would try to match the observed long-wave structure shown in the "Model Structure and Dynamics" module.

Spectral decomposition

Thus, the 'hard' option of the three target waves would be something resembling the wave in the above image. That might be hard to pull off with only four waves, but the motivation would be to show that a combination of something students can visualize (sine and cosine waves) can represent the real, complicated atmospheric flow field.


T


Ten Misconceptions about NWP

by Sen Chiao - Thursday, July 20, 2006, 3:07 PM
 
Modules: Ten Common NWP Misconceptions
Top Ten Misconceptions about NWP Models

Appropriate Level: Advanced undergraduate and above.

General Comments:

These two modules are similar. I suggest merging them and keeping the combined module on the MetEd website.



Turbulent Processes

by Sen Chiao - Thursday, July 20, 2006, 11:14 AM
 

COMET Module: Influence of Model Physics on NWP Forecasts

This simple figure describes the planetary boundary layer processes in NWP models; it could serve as supplemental material for this module.

PBL

(Source: WRF User's Workshop)

Other useful material about PBL parameterization can be found at:

http://www.met.tamu.edu/class/metr452/models/2001/PBLproject.html


