
C


Cloud Formation Processes

by Lawrence Pologne - Friday, July 21, 2006, 8:11 AM
 

Source: COMET Module, "Influence of Model Physics on NWP Forecasts"

Covers the importance of cloud thickness and cloud parameterization processes for model precipitation output.

cloud processes

Additional Resources:


D


Data Assimilation Process

by John Horel - Thursday, July 20, 2006, 2:55 PM
 
Source: COMET Module: "Understanding Data Assimilation"

Appropriate Level: Some graphics and concepts could be used in introductory classes, but the core content is appropriate for upper-division majors.

Outline of Module

1. Data Assimilation Process. A yes/no question that can be skipped.

2. Data Assimilation Wizard. Motivation for data assimilation that may appeal to some

3. Data/Observation Increment/Analysis. The core of the module is contained in these three sets of pages. Graphics and simple examples help to highlight critical concepts (there are a handful of broken links to NCEP operational resources in the Data section). The graphics shown below summarize the components of the data assimilation process.

Step 1. Observations are received and processed.
Observations

Step 2. Deviations between the observations and background values are computed.
Constructing Observation Increments

Step 3. Observation increments are used to adjust the background.
Objective Analysis
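
To make the three steps concrete, here is a minimal numerical sketch in Python; the gridpoint values, observation operator, and weights are all hypothetical, and a real objective analysis derives its weights from error statistics rather than prescribing them:

    import numpy as np

    # All numbers are hypothetical, chosen only to illustrate the three steps.
    background = np.array([272.0, 274.0, 276.0])  # model first guess at three gridpoints
    obs = 276.0                                   # Step 1: one received, processed observation
    H = np.array([0.0, 0.5, 0.5])                 # interpolates the background to the obs location

    # Step 2: observation increment (observation minus background value)
    increment = obs - H @ background

    # Step 3: spread the increment back onto the grid to adjust the background
    weights = np.array([0.1, 0.4, 0.5])           # hypothetical weights, largest near the obs
    analysis = background + weights * increment
    print(analysis)                               # [272.1 274.4 276.5]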

4. Operational Tips/Judging Analysis Quality. Brief review of some issues associated with assessing analysis quality.

5. Exercise/Assessment. Ordering the steps involved in data assimilation leads to the resulting graphic of the end-to-end process (below).
Data Assimilation Process


Other relevant COMET resources:

Operational Models Matrix: Brief overviews of operational data assimilation systems are listed at the bottom.

Good Data/Poor Analysis Case Study: Very nice illustration of some of the issues associated with objective analysis/data assimilation.

WRF Model WebCast: Provides very useful information on the Gridded Statistical Interpolation (GSI) assimilation approach used in the NCEP NAM.



E


ECMWF Training on Data Assimilation

by John Horel - Thursday, July 20, 2006, 3:23 PM
 
Source: ECMWF Training Material

Level: Instructor

Technical information on data assimilation as applied at the ECMWF.


Ensemble Filters

by John Horel - Thursday, July 20, 2006, 2:55 PM
 
COMET Module: None on this subject yet

Appropriate Level: Graduate level

Ensemble Filters for Geophysical Data Assimilation: A Tutorial by Jeff Anderson. Data Assimilation Research Testbed

This is an excellent, but technical, introduction to ensemble modeling.

Ensemble Forecasting

by Dave Dempsey - Thursday, July 20, 2006, 2:51 PM
 
COMET Module: "Ensemble Forecasting Explained" (2004)

"The assumptions made in constructing ensemble forecast systems (EPS) and the intelligent use of their output will be the primary focus of this module. In order for you to better understand EPS output, we will also look at some necessary background information on statistics, probability, and probabilistic forecasting."

Appropriate Level: Advanced undergraduate and above.

Summary comments: Lots of text, some of which can be dense (when the concepts described are relatively challenging). Simple, mostly static diagrams provide clear illustrations. Assessment exercises (multiple choice and multiple answer questions, with feedback) help. Quite a few ensemble forecast products and verification tools are described. Basic concepts of probability and statistics underlie most of these (as befits the topic), and an attempt is made to provide some background about these, but a basic statistics course would be a valuable prerequisite to this material. A COMET webcast by NCEP's Dr. Bill Bua offers a simpler, one-hour introduction to a subset of the material in this module, "Introduction to Ensemble Prediction"; for most users unfamiliar with ensemble forecasting, I recommend viewing the webcast before completing the module.

Preassessment. This module starts with a randomized, 15-item pre-assessment quiz, which the user repeats at the end of the module as a measure of what the user has learned. The pre-assessment quiz therefore should provide a preview of at least some of the module's content.

The mostly (entirely?) multiple-answer quiz questions include graph and weather map interpretations and verbal questions about aspects of ensemble forecasting and its applications. The questions, graphs and maps--and hence necessarily the module itself--use language and invoke concepts technical enough to be well out of the reach of beginning students. Here's an example of a graph with very limited explanation offered to help the user interpret it (which of course is the point--the user presumably will learn more about it in the module):

Hypothetical distributions of ensemble forecast, climatology, and observations

(The user is asked about the resolution and reliability of the two forecasts depicted in the graph.)

Depending on the degree of support provided by the instructor and the module itself, the preassessment suggests that upper-division majors should be able to learn something from the module. Taking the preassessment quiz yourself is one way of judging whether the module as it stands is appropriate in content and level for your students. Warning: although the preassessment strikes me as a useful pedagogical tool (for measuring learning progress and offering a preview of the module content), completing it can be discouraging unless users understand that they aren't expected to do well on it and that they should look forward to learning enough from the module to improve dramatically.

Introduction.
  1. Why Ensembles? A table summarizes the advantages of ensemble forecasts over single forecasts.
  2. Chaos and NWP Models. A Flash animation of an 84-hour "control" 500 mb height forecast at 12-hour intervals and a second forecast with perturbed initial conditions are superimposed, together with color-filled contours of the difference between them. The accompanying text describes the animation and points out how the difference between the two forecasts tends to grow with forecast time. The idea of chaos and its manifestation in the atmosphere and numerical weather forecasting is introduced, using the same Flash animation. The overarching content of the module is summarized.
Generation. Brief, bulleted summary of possible methods used to create ensemble members, based on uncertainty in data or in the NWP model itself.
  1. Perturbation Sources. Uncertainty in initial conditions, and the range of that uncertainty (one static graphic similar to the animation in Section 2 of the Introduction); uncertainty in model formulation (including dynamical formulation and grid- and subgrid-scale physical parameterizations; illustrated with contour plots of precipitation plus a sounding); and boundary value uncertainty (text only).
  2. Implementations. Singular vectors (used by ECMWF); "breeding" cycle (one static graphic); reference to ensemble Kalman filter (EnKF).
Statistical Concepts. An overview of basic concepts and their relevance to ensemble forecasting, including the idea of probability density functions. Very little math; a few static graphs.
  1. Probability Distributions.
  2. Middleness.
  3. Variability.
  4. Shape.
  5. Using PDFs.
  6. Data Application. A spaghetti plot is introduced, with some basic guidance about how to read it. Measures of central tendency (mean, median, mode) and their limitations applied to interpreting ensemble forecasts.
  7. Exercises. Two multiple-choice questions with feedback and discussion. These strike me as pedagogically helpful.
Summarizing Data. Motivates the need for products that help forecasters digest the vast amount of information in an ensemble forecast.
  1. Products. Thirteen pages of examples with explanation, comparison, and discussion of various ensemble forecast analysis products, illustrated with static graphics.
  2. Product Interpretation. Relatively clear, simplified diagrams show how to interpret the mean and the spread of ensemble forecasts from spaghetti diagrams. One multiple choice question with feedback.
  3. Exercises. Three multiple choice questions with feedback, all involving interpretation of figures.
Verification. The problem of verifying ensemble forecasts.
  1. Concepts. Categorical and probabilistic forecasts and verification of the former. Skill score. Verification of probabilistic forecasts: "reliability" and "resolution".
  2. Tools. Six pages on verification tools. Some of these are relatively dense, but interesting.
  3. Applications. A few examples of how some verification tools are used at NCEP (an animated Talagrand diagram is shown). Comparison of NCEP's EPS to those of other forecast centers (links).
  4. Exercises. Three multiple answer questions with feedback.
Case Applications. Links to case studies and to the module quiz.



Ensemble Forecasting: Information Matrix (NCEP)

by Dave Dempsey - Thursday, July 20, 2006, 2:52 PM
 
COMET Module: "Ensemble Prediction System Matrix: Characterics of Operational Ensemble Prediction Systems (EPS)" (updated as needed)

"This matrix describes the operational configuration of NCEP EPS, including the method of perturbing the forecasts (initial condition or model), the configuration(s) of the model(s) used in the EPS, and other operationall significant matters."

Appropriate level: Advanced undergraduate and above.

Summary comments: This is not an instructional module but rather a matrix of hypertext links to information about aspects of NCEP's medium-range and short-range ensemble forecasting systems, including perturbation methods, characteristics of the models used, and postprocessing and verification. Some of the informational Web pages are illustrated with simple, clear diagrams, and some links lead to the "Ensemble Forecasting Explained" module (see the entry in this glossary under "Ensemble Forecasting"). For students already familiar with the basics of how numerical weather prediction systems work and how numerical models are formulated, the matrix provides a useful way to investigate these topics in more detail, at least for NCEP's ensemble forecasting system. Some explanations are illustrated with clear and easy-to-read, albeit static, graphics. Instructors might find some of these explanatory Web pages useful for instructional purposes, such as assigned reading.


Ensemble Forecasting: Webcast Introduction

by Dave Dempsey - Thursday, July 20, 2006, 2:55 PM
 
COMET Webcast: "Introduction to Ensemble Prediction" (2005)

This one-hour webcast by NCEP's William Bua "introduce[s] concepts in the COMET module Ensemble Forecasting Explained" (see the glossary entry for the module).

Appropriate level: Advanced undergraduates and above.

Summary comments: Dr. Bua's webcast presents a subset of the concepts presented in the Ensemble Forecasting Explained module. Generally speaking, the webcast treats its topics more simply than the module does. Quizzes with feedback, including a preassessment, are interspersed throughout. Viewing the webcast first should make completing the module, which is text intensive and sometimes conceptually challenging, easier.

F

Focus COMET NWP Case Studies Modules

by Rodney Jacques - Thursday, July 20, 2006, 2:41 PM
 

Water Vapor Loop

Appropriate Level: Advanced Forecaster

Outline of Module

1. Introduction - For the snowstorm of January 6-7, 2002, the models failed to quantify the amount of snowfall, the areal coverage of snowfall, and the timing of the snow event due to failures in data assimilation and model dynamics. The Eta model and SREF model fell short in representing the following areas: 1. Shear and curvature vorticity. 2. Upper-level diffluence and convection. 3. Model resolution, upper-level dynamics, and data assimilation.

Questions:

Does an advanced forecaster have to review 4-5 COMET NWP modules to gain further understanding of this case study?

Can we FOCUS the NWP modules to gain insight into mid-upper-level diabatic processes within the NWP model?

Can this NWP case study provide me with a 3D visualization of the model underperforming or overperforming?

2. Discussion - The advanced weather forecaster assimilates and interrogates environmental data prior to displaying NWP models to provide a conceptual view of the atmosphere. A thorough analysis is critical to assess the initial state of numerical models and to discover possible errors that may exist. A problem exists where the advanced forecaster does not have sufficient knowledge of NWP dynamics or structure. How can a forecaster obtain a better working knowledge of the inner workings of an NWP model, "the black box"?

3. Instructional Design - A suggestion would be to develop visualization software that can rerun the model and focus on the errors that this case study presents (mid-upper-level dynamics, convective processes, frozen precipitation processes). The advanced forecaster would have good and bad model runs from which to evaluate their impact on his forecast products. The case study area should finish this training scenario by providing educational content on how a forecaster adjusts model guidance using the digital forecast process.

4. Smart Tools - The digital forecast process is in place at NWS forecast offices. Weather forecasters account for errors in model output by running Python scripts to adjust guidance. There are over 300 tools, and most run basic routines that allow a forecaster to adjust or tweak guidance. These results are detached from the model. Most forecasters do not have a complete enough understanding of the NWP models to interpret the digital output in scenario-driven situations.

5. Summary - The NWP case studies can be improved by providing 3D visualizations of good and bad model forecasts. A 3D visualization aids in graphically representing where a good model goes bad. Specific sections of COMET NWP modules could be injected into the case studies to focus the student's learning and address the errors in model guidance. Lastly, the NWP case studies need to complete the end-to-end forecast training by providing content on the digital forecast process.

6. Resources -

  • Smart Tool Repository - http://www.nws.noaa.gov/mdl/prodgenbr.htm
  • Interactive Forecast Preparation System - http://www.nws.noaa.gov/mdl/icwf/IFPS_WebPages/indexIFPS.html
  • Forecast Verification Homepage - http://www.nws.noaa.gov/mdl/adappt/verification/verify.html
  • Meteorological Development Lab - http://www.nws.noaa.gov/mdl/
  • Integrated Data Viewer - http://www.unidata.ucar.edu/software/idv/






G


Gridded Statistical Interpolation

by John Horel - Thursday, July 20, 2006, 2:56 PM
 
COMET Module: Technique mentioned only in passing

Level: Graduate level

Journal article (Wu et al. 2002, MWR) that provides a technical description of selected aspects of the gridded statistical interpolation used in the NCEP GFS model. A version of this 3-dimensional variational approach is also used in the 2-D real-time mesoscale analysis (RTMA).

I


Interactions of Parameterizations

by Sen Chiao - Thursday, July 20, 2006, 11:13 AM
 

Module: Influence of Model Physics on NWP Forecasts


A simple diagram illustrates the interactions of physical parameterizations in NWP models, making it easy to understand the overall processes of model physics.

processes.JPG
(Source: WRF User's Workshop)


M

Minimum Error Variance

by Brian Etherton - Thursday, July 20, 2006, 2:56 PM
 
One of the prime equations for data assimilation is the Kalman filter equation. This equation is:

x(a) - x(f) = P H^T [H P H^T + R]^(-1) [d - H x(f)]

Where x(f) is the forecast, x(a) is the analysis, P is the background error covariance matrix, R is the observation error covariance matrix, H is the observation operator that maps the model state to the observation locations, and d is the vector of observations. x(a) and x(f) are vectors, containing every model variable at every model gridpoint.

P gives information regarding errors of the model first-guess field.
R gives information about the errors of observations (instrument and representativeness errors).
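
As a concrete illustration, here is a minimal Python sketch of the full equation for a toy problem (three model variables, two observations); all numbers are hypothetical:

    import numpy as np

    x_f = np.array([270.0, 272.0, 274.0])          # forecast (first guess)
    P = np.diag([1.0, 2.0, 1.5])                   # background error covariance
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])                # observation operator
    R = np.diag([0.5, 0.5])                        # observation error covariance
    d = np.array([271.0, 273.5])                   # observations

    # x(a) = x(f) + P H^T [H P H^T + R]^(-1) [d - H x(f)]
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # the Kalman gain
    x_a = x_f + K @ (d - H @ x_f)
    print(x_a)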

This equation, and the notion of 'covariance matrices', is often a little overwhelming. It can be simpler to consider the case of trying to best estimate the value of one variable.

For just one variable...
T(a) - T(f) = [s(f)**2 / (s(o)**2 + s(f)**2)] * [T(o) - T(f)]

Each term in the equation for one variable has a match in the full-blown Kalman filter equation: T(a) is the analysis value (best estimate), as is x(a); T(f) is the model first guess, as is x(f). And so on.

Consider the temperature of the air at 500 mb over Denver, Colorado.

To estimate this value, T(a) (a for 'analysis'), we have two estimates:

#1 is the value as reported from the DNR sounding.
An observation

I'll call that T(o) (o for 'observed'). Using known history, the average error of these observations has an error variance s(o)**2.

#2 is the value from the model first-guess field.
Model First Guess

I'll call that T(f) (f for 'first guess'). Using known history, the average error of these first-guess values has an error variance s(f)**2.

Here is where the graphic comes in!

The graphic I am envisioning is rather similar to this one:

Two become one

This graph would be re-worked so that instead of FCST A and FCST B, we have the first guess (T(f)) and the observation (T(o)).

The applet I am envisioning is something that can be changed from the image on the left to the image on the right.

Sliders (as well as text entries) will allow the user to adjust the values of T(f) and T(o) as well as the error variances s(f)**2 and s(o)**2. There would also be a box showing the value of T(a).

What if our forecast model were always perfect? In that case, the error variance of the first guess, s(f)**2, would be zero, and the equation would collapse to T(a) - T(f) = [T(o) - T(f)] * 0, or T(a) - T(f) = 0. In this case, the image would show that T(a) and T(f) are the same value, and that the spread of the first-guess curve (the tails of the Gaussian curves in the figure above) would be zero.

Consider now the other extreme, where the observations are perfect (but the forecast is not). In that case, s(o)**2 = 0, so the equation becomes T(a) - T(f) = [T(o) - T(f)] * [s(f)**2 / s(f)**2], or T(a) - T(f) = T(o) - T(f), and thus T(a) = T(o).

Users could adjust the values of the error variances. The analysis value T(a) would move towards whichever estimate (T(o) or T(f)) had the smaller error variance associated with it. As the error variances were increased, the picture would look more like the right side of the above image; if the error variances were decreased, the image would look more like the left side.

Included with the estimate of T(a), there would be a Gaussian shape around it with variance equal to [1/s(o)**2 + 1/s(f)**2]^(-1), showing that when the error variances of the two estimates are small, the error variance (uncertainty) of the analysis is also small.
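
A minimal Python sketch of the arithmetic behind the envisioned applet; the temperature values are hypothetical, and tiny (rather than exactly zero) variances stand in for the 'perfect' cases to avoid division by zero in the analysis-variance formula:

    def analyze(T_f, T_o, var_f, var_o):
        # One-variable update: weight the observation increment by the
        # forecast error variance relative to the total error variance.
        T_a = T_f + var_f / (var_o + var_f) * (T_o - T_f)
        var_a = 1.0 / (1.0 / var_o + 1.0 / var_f)  # analysis error variance
        return T_a, var_a

    # Hypothetical 500 mb temperatures (deg C) over Denver:
    print(analyze(T_f=-14.0, T_o=-12.0, var_f=1.0, var_o=1.0))   # analysis halfway between
    print(analyze(T_f=-14.0, T_o=-12.0, var_f=1e-9, var_o=1.0))  # near-perfect model: T(a) ~ T(f)
    print(analyze(T_f=-14.0, T_o=-12.0, var_f=1.0, var_o=1e-9))  # near-perfect obs:   T(a) ~ T(o)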

This module will show the relationship between the error statistics of the two estimates (first guess and observation) and the analysis value.

N

NWP Misconceptions

by Dave Dempsey - Thursday, July 20, 2006, 4:59 PM
 
COMET Module: "Ten Common NWP Misconceptions" (2002-2003)

Ten common misconceptions about the way NWP models work and hence how they can be interpreted, plus explanatory corrections of those misconceptions, plus quite a bit of more or less related material.

Appropriate level: Advanced undergraduates and above.

Summary comments: This module is heavily illustrated, has audio narration, and is very light on text. (A print version substitutes text for the audio narration; animation is lost but static color graphics remain.) A "Test Your Knowledge" section, with feedback, ends the presentation of each misconception.

Each misconception is a kind of Trojan horse; the misconception is addressed directly, but it is also used as an excuse to present quite a bit of less narrowly (but still at least broadly) relevant material.

The misconceptions vary in their degree of obscurity, but at least some of them look potentially useful for a basic NWP course that doesn't focus exclusively on theory. However, to appreciate the misconceptions and the corrections to them, students already need to know the basics of how numerical models are formulated and used, and in some cases more than that--some of the misconceptions are quite specific, as the titles below suggest.

At least one of the misconceptions (I didn't examine all of them closely), "A 20 km Grid Accurately Depicts 40 km Features" (#3 below), makes important points about model resolution, but its examples cite values relevant to 2002-2003 era models and so are not as directly relevant today as they were then. Several "misconceptions" refer to the eta model, which few students will recognize from now on. The general concepts remain highly relevant, however.

Misconceptions addressed include:
  1. The Analysis Should Match Observations
    (Presents a summary of observational platforms in a nice conceptual diagram. Raises the concept of assimilation cycling.)

  2. High Resolution Fixes Everything
    (Makes the point that model components work synergistically; improving resolution alone won't guarantee a better forecast.)

  3. A 20 km Grid Accurately Depicts 40 km Features
    (In addition to discussion of spatial resolution, this section includes an animated graphic showing effects of finite differencing on phase speeds of sine waves of various wavelengths and speeds; see the sketch after this list.)

  4. Surface Conditions are Accurately Depicted
    (Contains a long table summarizing the eta and Canadian GEM model surface fields; it would be nice if this could be updated to the WRF-NMM (the current NAM). Another section describes the effects of vegetation in a single-column model.)

  5. CP Schemes 1: Convective Precipitation is Directly Parameterized
    (Explanation of a convective sequence in nature and one in a non-convection-resolving model without a convective parameterization, using schematic soundings superimposed on cloud drawings; a comparison of adjustment and mass-flux convective parameterization schemes [Betts-Miller-Janjic and Kain-Fritsch in particular].)

  6. CP Schemes 2: A Good Synoptic Forecast Implies a Good Convective Forecast
    (Brief discussion of resolution, illustrated with a couple of diagrams; fine-tuning convective parameterization [CP] schemes; over- and under-active CP schemes; different schemes in the same model)

  7. Radiation Effects are Well-Handled in the Absence of Clouds
    Radiation processes in the atmosphere and at the earth's surface
    (Discussion of the complexity of representing radiative processes in a model, including the effects of clouds and clear-sky biases in the eta model--hope this gets updated! Brief summary of how models address radiation processes in general.)

  8. NWP Models Directly Forecast Near-Surface Variables
    (Adjustment of temperature from the lowest model level to 2 meters in the GFS model, illustrated; how this is done in other models, mentioned in very general terms; problems that can arise with this adjustment process; effect of the vertical coordinate on the adjustment, including as examples the eta model, which is now out of date, and the Canadian GEM model, which uses a terrain-following sigma coordinate and is therefore still relevant to the WRF-NMM [the current NAM] below about 400 mb; effect of terrain representation--envelope, silhouette, mean--on the adjustment process.)

  9. MOS Forecasts Improve with Model Improvements
    (An introduction to MOS, including its development and implementation. Issues with rarer types of events; smoothing; advantages and disadvantages of MOS schemes; situations when MOS might produce a poor forecast.)
    MOS Development and Implementation Schematic Diagram

  10. Full-Resolution Model Data are Always Required
    (Comparisons of fields produced by AWIPS at 22, 40, and 80 km resolutions from eta model output; resolution vs. scale of atmospheric features, with an animated graphic of sine waves; the issue of smoothing, illustrated with plots of specific humidity overlaid by temperature contours at 40 km resolution [unsmoothed] and 80 km resolution [smoothed]; vertical resolution, illustrated with tephigrams, which take some work to understand.)
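
The phase-speed effect mentioned in misconceptions #3 and #10 follows from a standard result: for the advection equation u_t + c u_x = 0 with second-order centered space differencing, a sine wave of wavenumber k propagates at c*sin(k*dx)/(k*dx) instead of c. A minimal Python sketch (the advection speed is hypothetical; the 20 km grid echoes misconception #3):

    import numpy as np

    c = 10.0     # true advection speed (m/s), hypothetical
    dx = 20.0e3  # grid spacing (m): a "20 km grid"

    # Second-order centered differencing slows a wave of wavenumber k
    # from c down to c * sin(k*dx) / (k*dx).
    for wavelength_km in (40, 80, 160, 320):
        k = 2 * np.pi / (wavelength_km * 1e3)
        c_num = c * np.sin(k * dx) / (k * dx)
        print(f"{wavelength_km:4d} km wave moves at {100 * c_num / c:5.1f}% of its true speed")

A 40 km wave on a 20 km grid (a 2*dx wave) does not move at all, which is one reason a 20 km grid does not accurately depict 40 km features.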


O


Operational Model Matrix

by John Horel - Thursday, July 20, 2006, 2:57 PM
 
COMET Module: Operational Models Matrix

Appropriate Level: Upper division and graduate level

Overview

An excellent resource to contrast the basic features of U.S. (and one Canadian) operational models. Links to relevant COMET modules and other on-line resources are provided.

Information on many characteristics of the NMM-NAM remains to be added.

R

Resolution Applet

by Brian Etherton - Thursday, July 20, 2006, 11:00 AM
 

A significant concept in numerical weather prediction is resolution. The notion is that at higher resolution, more features can be represented by a model. However, this increased resolution comes at additional computational expense.

Take, for example, a tropical cyclone. Below is an image of Hurricane Fran, at 1km resolution.

Fran 1km

Note the structure that is visible: the details of the eye, for example.

I am envisioning an applet that allows the user to choose the resolution via some sort of slider. The image will be a 'radar'-like image. This image, at 1km resolution, covering a 512km by 512km domain, will clearly show the structure of the eye wall and of the outer rain bands.

Using the slider, one can choose to degrade the resolution. Choices for resolution will be 1km, 2km, 4km, 8km, 16km, 32km, 64km, and 128km. Thus, at the coarsest resolution, there are only 16 pixels, whereas at 1km resolution, there are 262,144 pixels (512**2).

When the slider is moved from one resolution to another, the "radar" image changes to that resolution. I believe that images can be made using IDV by just changing how many pixels are shown. One can choose to use them all, every 2nd, every 4th, etc. (Kelvin demo for LEAD).

The changing image will show the consequences of resolution reduction: the eye wall structure will decay, and at some point, the eye will not be discernible.

In addition to the image changing, there will be a text readout of the number of pixels. For example, for a resolution of 4km, nx=128, ny=128, and the total number of pixels is 16384.

Beside this number of pixels, there will be a 'model computing time'. For example, we could assume that at 128km resolution, the model takes 1 minute to run. Assuming that halving the grid spacing quadruples the number of gridpoints and requires a time step half as long, each halving multiplies the run time by 8: a 64km run would then take 8 minutes. In the limit, the 1km run would take 8**7 = 2,097,152 minutes. This model computing time would be expressed as a common clock, with the elapsed time 'shaded' in. For instance, if the model would take 1 hour to run, the clock would read 1pm, with the area of the clock from noon to the hour hand shaded in.
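
A minimal Python sketch of the pixel-count and run-time arithmetic behind the applet, under the stated assumptions (1 minute at 128 km; a factor of 8 per halving of the grid spacing):

    import math

    base_minutes = 1.0                             # assumed run time at 128 km resolution
    for res_km in (128, 64, 32, 16, 8, 4, 2, 1):
        n = 512 // res_km                          # gridpoints per side of the 512 km domain
        halvings = int(math.log2(128 / res_km))
        minutes = base_minutes * 8 ** halvings     # 4x gridpoints * 2x time steps per halving
        print(f"{res_km:4d} km: {n:4d} x {n:4d} = {n*n:7d} pixels, ~{minutes:12,.0f} min")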

The two dramatic visuals would be the image being sharper or more degraded, and the clock.

The idea is to incorporate, but improve upon, images such as this:

Resolution

An important point is to make sure that the false idea (misconception!) that a 1km model will resolve a 1km feature is not communicated. Not sure how to do that...


S


Spectral Wave Addition

by Pat Parrish - Thursday, July 20, 2006, 11:01 AM
 

A concept that is difficult to get across is how a seemingly random wave can be deconstructed into a series of sine waves.

The image below is a little old, but gives a great visual of the process:

Wave Addition Example

For the 'mark 2' version, I envision an applet that looks like this but has, perhaps, 4 different waves. The top two lines would show sin(x) and cos(x) on the first line (much like the sin(2x) and sin(5x) on the top line of the above image) and sin(2x) and cos(2x) on the second line. The third line would be the sum of the four waves above, similar to the bottom line of the above image.

Each of the top 4 waves would have an 'amplitude slider', with which one could alter the amplitude of that wave. For instance, if one set the amplitude of sin(x) to 1, and all the other waves (cos(x), sin(2x), cos(2x)) to 0, one would get that same sine wave back on the bottom panel. By changing the amplitudes of the 4 waves above, one can create increasingly complex patterns in their sum. I believe that the slider will need to be discretized, perhaps in increments of 0.25, to keep the number of possibilities low.

To make this even more interactive, the bottom (4th) line would show some sort of pre-determined structure. The goal would then be for the user to manipulate the amplitudes of the top 4 waves (sin(x), sin(2x), cos(x), cos(2x)) until the sum of those 4 waves matched the 4th line. We could have 'easy', 'moderate', and 'hard' options for this 4th line, with the easy one being something like sin(x) + 0.5*cos(x), and the 'hard' one being a combination of all four waves above. (A sketch of this matching game appears below.)
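
A minimal Python sketch of the summation the applet would perform; the amplitude values and the 'easy' target used here are hypothetical examples:

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 2 * np.pi, 500)

    def wave_sum(a1, a2, a3, a4):
        # Sum of the four basis waves with user-chosen (slider) amplitudes.
        return a1 * np.sin(x) + a2 * np.cos(x) + a3 * np.sin(2 * x) + a4 * np.cos(2 * x)

    target = wave_sum(1.0, 0.5, 0.0, 0.0)  # the 'easy' target: sin(x) + 0.5*cos(x)
    guess = wave_sum(1.0, 0.25, 0.0, 0.0)  # a user's attempt, in amplitude steps of 0.25

    plt.plot(x, target, label="target")
    plt.plot(x, guess, "--", label="user's sum")
    plt.legend()
    plt.show()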

The 'hard' one would try to match up with the observed longwave structure evidenced in the "Model Structure and Dynamics" module.

Spectral decomposition

Thus, the 'hard' target wave would be something resembling the wave in the above image. That might be hard to pull off with only 4 waves, but the motivation would be to show that a combination of something they can visualize (sin and cos waves) can represent the real, complicated atmospheric flow field.


T


Ten Misconceptions about NWP

by Sen Chiao - Thursday, July 20, 2006, 3:07 PM
 
Modules: Ten Common NWP Misconceptions
Top Ten Misconceptions about NWP Models

Appropriate Level: Advanced undergraduate and above.

General Comments:

These two modules are similar. I suggest merging them and keeping the combined module on the MetEd website.



Turbulent Processes

by Sen Chiao - Thursday, July 20, 2006, 11:14 AM
 

COMET Module: Influence of Model Physics on NWP Forecasts

This simple figure describes the planetary boundary layer processes in NWP models and can serve as supplemental material for this module.

PBL

(Source: WRF User's Workshop)

Additional useful material about PBL parameterization can be found at:

http://www.met.tamu.edu/class/metr452/models/2001/PBLproject.html


