Workshop Solutions 5

Gridded Snowmelt Calibration

Question 1: How does computed SWE compare against observed SWE for the
SFJohnDayRv_S10 subbasin? Why is the computed SWE different from the observed SWE?
Try plotting precipitation, temperature, computed SWE, and observed SWE in the same window
(change the line styles of various curves to better visualize them). What temperature index
parameters should be changed to better match observed SWE?

A plot of the precipitation (blue line in the upper viewport), temperature (red line in the lower
viewport), computed SWE (blue line in the lower viewport), and observed SWE (dashed black
line in the lower viewport) for the “JohnDayRv_S70” subbasin is shown in Figure 1.

Figure 1. JohnDayRv_S70 Initial Results

The shape of the computed SWE curve largely matches the shape of the observed SWE curve.
However, the magnitude of the computed SWE curve is larger than that of the observed SWE curve.

The computed SWE begins to deviate from the observed SWE at the beginning of December
2011. During this time, the computed SWE increases while the observed SWE decreases. Also,
during this time, the basin-average temperature is very close to the initial base and PX
temperatures of 32 deg F and 33 deg F, respectively. Since the shapes of the two curves are very
similar while the magnitudes differ, only minor changes to the base and PX temperatures
are needed to match the observed data to an acceptable degree.
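To make the role of these two parameters concrete, the sketch below implements a generic degree-day accounting in Python. It is a simplified illustration, not the HEC-HMS temperature index algorithm; the function name, default values, and single-meltrate form are assumptions for illustration only.

```python
# Minimal temperature index sketch: the PX temperature splits precipitation
# into rain vs. snow, and the base temperature drives degree-day melt.
# A simplified illustration, not the HEC-HMS algorithm.

def step_swe(swe, precip_in, temp_f, px_temp_f=33.0, base_temp_f=32.0,
             meltrate_in_per_degf_day=0.05):
    """Advance SWE (inches) by one day."""
    if temp_f <= px_temp_f:
        swe += precip_in                       # precipitation falls as snow
    if temp_f > base_temp_f:
        melt = meltrate_in_per_degf_day * (temp_f - base_temp_f)
        swe = max(0.0, swe - melt)             # degree-day melt, SWE stays >= 0
    return swe

# On near-freezing days (like early December 2011 here), small changes to the
# PX and base temperatures flip days between accumulation and melt.
swe = 5.0
for precip, temp in [(0.4, 31.0), (0.3, 32.5), (0.0, 35.0)]:
    swe = step_swe(swe, precip, temp)
    print(round(swe, 2))
```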

Question 2: How does computed SWE compare against observed SWE for the following
subbasins: NFJohnDayRv_S30, JohnDayRv_S50, and JohnDayRv_S20? Which subbasin/region
should receive more attention when calibrating temperature index parameters?

The initial results (computed = blue lines, observed = dashed black lines) for
NFJohnDayRv_S30, JohnDayRv_S50, and JohnDayRv_S20 are shown in Figure 2.

Figure 2. NFJohnDayRv_S30, JohnDayRv_S50, and JohnDayRv_S20 Initial Results

Within this figure, the computed SWE does not appear to adequately match the observed SWE in
any of the subbasins. However, when the magnitude of the observed SWE in each subbasin is
taken into account, it is evident that the differences within the NFJohnDayRv_S30 subbasin are
much larger than those in the other two. The NFJohnDayRv_S30 subbasin has a peak observed
SWE of approximately 18 inches, compared with 1.4 and 1.0 inches of SWE within the
JohnDayRv_S50 and JohnDayRv_S20 subbasins, respectively. For these reasons, much more
time should be spent adjusting parameters within the NFJohnDayRv_S30 subbasin (and other
similar subbasins) than within the JohnDayRv_S50 and JohnDayRv_S20 subbasins.
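This prioritization can be made concrete by ranking the subbasins on peak observed SWE. The sketch below simply orders the peaks cited above:

```python
# Rank subbasins by peak observed SWE (inches) to decide where calibration
# effort is best spent; peaks are the values cited in the discussion above.
peak_swe = {"NFJohnDayRv_S30": 18.0, "JohnDayRv_S50": 1.4, "JohnDayRv_S20": 1.0}

for name, peak in sorted(peak_swe.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {peak:.1f} in")
```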

Question 3: Using Figure 2 (from the Workshop instructions) and any other information that
you can gather, provide at least two reasons why NFJohnDayRv_S30 subbasin may have larger
SWE magnitudes than the other two subbasins mentioned in Question 2.

1. Predominant weather patterns may be causing more precipitation to reach the
NFJohnDayRv_S30 subbasin than other subbasins within the modeling domain.
2. As weather systems move west to east, they follow the terrain. As mountains are
encountered, any air mass must rise. If the air mass contains moisture, precipitation will
likely fall. Temperatures also tend to decrease as elevation increases. Since the
NFJohnDayRv_S30 subbasin is one of the higher elevation subbasins within the
modeling domain, it is likely subject to orographic influences to a larger degree than most
other subbasins. Finally, when precipitation does fall, the colder temperatures expected at
higher elevations increase the likelihood that it falls as snow (see the lapse rate sketch below).
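As a rough illustration of the elevation effect, the sketch below applies a typical environmental lapse rate of about 3.5 deg F per 1,000 feet. The elevations and temperature are hypothetical, not values from this workshop:

```python
# Estimate temperature at a higher elevation from a reference gage using a
# typical environmental lapse rate (~3.5 deg F per 1,000 ft).
# Elevations and the gage reading are hypothetical.
LAPSE_F_PER_1000FT = 3.5

def temp_at_elevation(ref_temp_f, ref_elev_ft, target_elev_ft):
    return ref_temp_f - LAPSE_F_PER_1000FT * (target_elev_ft - ref_elev_ft) / 1000.0

# A 34 deg F reading at a 2,000 ft valley gage implies sub-freezing air
# (rain likely turning to snow) at a 5,500 ft subbasin.
print(temp_at_elevation(34.0, 2000.0, 5500.0))  # ~21.8 deg F
```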

Question 4: The most common metric used to ascertain the acceptability of SWE calibration is
comparing the peak computed SWE against the peak observed SWE. List at least three
additional ways in which SWE calibration can be measured/determined.

Additional metrics that could be used to establish the acceptability of SWE calibration include
(a sketch computing the statistical metrics follows the list):

1. Date of peak observed and computed SWE
2. Melt out date (i.e., the date when SWE is completely melted) of observed and computed SWE
3. Nash-Sutcliffe Efficiency (NSE)
4. Ratio of the Root Mean Square Error to the Standard Deviation of the observations (RSR)
5. Percent Bias (PBIAS)
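The three statistical metrics can be computed directly from paired series. The sketch below is a minimal Python implementation that assumes equal-length computed and observed lists with no missing values; it follows the definitions in Moriasi et al. (2007), with PBIAS positive when the model underestimates:

```python
# Compute NSE, RSR, and PBIAS for paired observed/computed series.
# Minimal sketch: assumes equal-length lists with no missing values.
import math

def nse(obs, sim):
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def rsr(obs, sim):
    mean_obs = sum(obs) / len(obs)
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))
    sd_obs = math.sqrt(sum((o - mean_obs) ** 2 for o in obs) / len(obs))
    return rmse / sd_obs

def pbias(obs, sim):
    # Positive when the model underestimates, per Moriasi et al. (2007)
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

obs = [0.0, 1.2, 3.4, 5.0, 2.1, 0.0]  # illustrative SWE values, inches
sim = [0.0, 1.0, 3.9, 5.6, 1.8, 0.2]
print(nse(obs, sim), rsr(obs, sim), pbias(obs, sim))
```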

Question 5: How close is “close enough” when it comes to comparing the HEC-HMS model
results against the SNODAS data?

It is important to keep the goals of the study, the accuracy of the computed data, and the accuracy
of the observed data in mind when assessing the acceptability of calibration. For instance, the
“observed” SWE data used within this workshop is actually derived from another model. As
such, inaccuracies within the SNODAS model processes, and within the other data used during
assimilation, are present. For this reason (and others), agreement to within a few inches in
magnitude and a few days in timing within the subbasins that develop the largest snowpack
magnitudes is more than adequate. Also, remember that calibrating SWE is not the same thing
as calibrating both SWE and streamflow.

There is a give-and-take relationship between the two results, and computed streamflow is a
much more important quantity within the vast majority of the studies that USACE undertakes.
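One way to operationalize “close enough” for SWE is a simple acceptance check on peak magnitude and peak timing. The tolerances below reflect the few-inches/few-days rule of thumb above and are illustrative, not official criteria:

```python
# Rule-of-thumb acceptance check for SWE calibration: peak magnitude within a
# few inches and peak timing within a few days. Tolerances are illustrative.
from datetime import date

def swe_calibration_ok(peak_obs_in, peak_sim_in, date_obs, date_sim,
                       mag_tol_in=3.0, time_tol_days=5):
    mag_ok = abs(peak_obs_in - peak_sim_in) <= mag_tol_in
    time_ok = abs((date_obs - date_sim).days) <= time_tol_days
    return mag_ok and time_ok

# Hypothetical peaks for a high-SWE subbasin: 1.5 in and 3 days apart.
print(swe_calibration_ok(18.0, 16.5, date(2012, 3, 10), date(2012, 3, 13)))  # True
```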

Question 6: When calibrating SWE within the aforementioned subbasins, did you notice any
parameters that could be regionalized?

The law of parsimony states that simpler solutions to a problem tend to be more appropriate than
complex solutions. In layman’s terms, a solution should be as complex as needed to adequately
solve the problem, but no more. An extension of this law to the realm of hydrology would imply
that simpler methods are more appropriate than complex methods. Regionalization of model
processes and parameters better adheres to this law. Some important temperature index
parameters that could be regionalized include:

• Range of PX and base temperature
• Base Temperature
• Wet Meltrate
• ATI-Meltrate
• ATI-Coldrate

All of these parameters can be regionalized but may still vary somewhat throughout a region.
Keep in mind that any model is a simplification of the natural world.
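In practice, regionalization can be as simple as one shared parameter set applied to every subbasin, with sparing local overrides where the data justify them. The parameter names and values below are illustrative, not calibrated results:

```python
# One regional temperature index parameter set applied across subbasins,
# with sparing local overrides. Names and values are illustrative only.
REGIONAL = {"px_temp_f": 33.0, "base_temp_f": 32.0,
            "wet_meltrate_in_per_degf_day": 0.10}

overrides = {"NFJohnDayRv_S30": {"base_temp_f": 31.5}}  # hypothetical tweak

def params_for(subbasin):
    # Local overrides win; everything else falls back to the regional set.
    return {**REGIONAL, **overrides.get(subbasin, {})}

print(params_for("NFJohnDayRv_S30"))
print(params_for("JohnDayRv_S50"))
```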

Question 7: How close is “close enough” when it comes to comparing the HEC-HMS model
results against the daily streamflow data?

Comparing computed and observed peak flow, time of peak flow, and volume are common ways
to assess model calibration. However, these metrics can be incomplete in that they don’t
assess model performance throughout the entirety of the hydrograph (i.e., not just the peak) and
aren’t necessarily tractable. The metrics presented within Moriasi et al. (2007) represent a more
complete way in which model calibration can be assessed. In general, for primary locations of
interest, NSE, RSR, and PBIAS should be within the ranges defined below:

Performance Rating    NSE                   RSR                   PBIAS (%)
Very Good             0.65 < NSE ≤ 1.00     0.00 < RSR ≤ 0.60     |PBIAS| < 15
Good                  0.55 < NSE ≤ 0.65     0.60 < RSR ≤ 0.70     15 ≤ |PBIAS| < 20
Satisfactory          0.40 < NSE ≤ 0.55     0.70 < RSR ≤ 0.80     20 ≤ |PBIAS| < 30
Unsatisfactory        NSE ≤ 0.40            RSR > 0.80            |PBIAS| ≥ 30
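The table translates directly into a rating function. The sketch below transcribes the thresholds above and, as a design choice, reports the overall rating as the worst of the three individual ratings:

```python
# Classify model performance from NSE, RSR, and PBIAS using the thresholds in
# the table above; the overall rating is the worst of the three (design choice).
def rate_nse(v):
    if v > 0.65: return "Very Good"
    if v > 0.55: return "Good"
    if v > 0.40: return "Satisfactory"
    return "Unsatisfactory"

def rate_rsr(v):
    if v <= 0.60: return "Very Good"
    if v <= 0.70: return "Good"
    if v <= 0.80: return "Satisfactory"
    return "Unsatisfactory"

def rate_pbias(v):
    a = abs(v)
    if a < 15: return "Very Good"
    if a < 20: return "Good"
    if a < 30: return "Satisfactory"
    return "Unsatisfactory"

ORDER = ["Very Good", "Good", "Satisfactory", "Unsatisfactory"]

def overall(nse, rsr, pbias):
    ratings = [rate_nse(nse), rate_rsr(rsr), rate_pbias(pbias)]
    return max(ratings, key=ORDER.index)  # worst of the three

print(overall(0.72, 0.55, -12.0))  # Very Good
print(overall(0.45, 0.78, 22.0))   # Satisfactory
```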

Question 8: When calibrating streamflow within the aforementioned subbasins, did you notice
any parameters that could be regionalized?

There are a few parameters that could be regionalized. However, the most impactful to this
analysis are GW1 and GW2. These two parameters can be related to the watershed storage
coefficient (Clark “R”), which itself was determined from physically measurable characteristics
of the individual subbasins. The relationship between these parameters should be maintained (to
a reasonable degree) within a region.
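A minimal sketch of what maintaining that relationship might look like: each subbasin’s GW coefficients are derived from its Clark R using shared regional multipliers. The multipliers and R values below are hypothetical, not results from this workshop:

```python
# Derive GW1/GW2 storage coefficients from each subbasin's Clark R so the
# regional relationship is preserved. Multipliers and R values are hypothetical.
GW1_TO_R = 3.0   # assumed regional ratio for the faster reservoir
GW2_TO_R = 10.0  # assumed regional ratio for the slower reservoir

clark_r_hours = {"NFJohnDayRv_S30": 20.0, "JohnDayRv_S50": 14.0}  # hypothetical

for name, r in clark_r_hours.items():
    print(name, {"GW1": GW1_TO_R * r, "GW2": GW2_TO_R * r})
```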

