Child pages
  • Running MODE on HRRR-TLE members

Motivation: Help the HRRR-TLE development team start thinking about adding "feature-based" probabilities to what they provide now. Currently, the HRRR-TLE output is good for synoptic-scale features, but the neighborhood approach used in processing produces fields that are too smooth to identify mesoscale features (snowbands, lake-effect snow). (There was also mention of potentially providing a "probability of snowband" field next year.)

We can look at feature-based methods with the HRRR-TLE members to see if banded structure is being identified in the deterministic runs.

Tara's notes on next steps (with information added by Jamie on progress to date):

  1. Work with Trevor or Curtis on getting access to the archived HRRR runs on GSD's HPSS. I think all we'd need to pull are the 03Z - 12Z runs... that's all they looked at during WWE.
    1. HRRRx data pulled from NCAR hpss (/RAPDMG/grib/HRRR-wrfnat) including initialization times 03-12 and forecast hours 0-18 for 25Jan16-19Feb16 (WWE dates). It is on dakota:/d4/projects/USWRP_ENSHAZ/data/HRRRx
    2. These inits/fhrs cover what would be required to create the HRRR-TLE products for 10, 11, and 12 UTC (those looked at during WWE), which go out 12 hours.
      1. For example, the forecasts initialized at 06, 07, and 08 UTC are used to create the 10 UTC HRRR-TLE product. So, for the 06 UTC initialization: 06 UTC + 4 hr lead = valid at the 10 UTC product time, plus the 12-hr product window = a 16-hour forecast required (and the forecasts initialized at 08 UTC would need a 14-hour forecast).
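The init-to-product arithmetic above can be sketched in a few lines of shell (the product hour, member lags, and 12-hour window come straight from the notes; the script itself is purely illustrative):

```shell
# Illustrative only: compute the forecast hour each HRRRx initialization
# must reach to cover a 12-hour HRRR-TLE product window.
product_hour=10               # the 10 UTC product (11 and 12 UTC work the same way)
for init in 6 7 8; do         # the three contributing initializations
    # lead time to the product valid hour, plus the 12-hour window
    fhr=$(( (product_hour - init) + 12 ))
    printf "init %02dZ needs a %d-hour forecast for the %02dZ product\n" \
        "$init" "$fhr" "$product_hour"
done
```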
  2. Start working on a MODE configuration to pull out the higher-intensity features in the snowfall and snow-rate fields. Contacted Brian Colle's student, Sara Ganetis, about how she set up MODE to look at snow banding. She provided this ppt - and this update: "I am changing the raw_thresh and the conv_thresh (set to the same value) for each case by calculating the upper octile of reflectivity over the whole Northeast US domain for each case's entire lifetime. For two test cases that was 26.40 dBZ and 27.83 dBZ. Everything else was still the same as provided in my PowerPoint."
  3. Data available in the HRRRx is documented here - pulled and processed for WWE days (20160125-20160219)
    1. Focus on ASNOW first (Used as the input to the HRRR-TLE probabilities of snow accumulation > 1, 3 or 6 inches in 6 hrs)
      1. Ran through pcp_combine first to create an hourly field (ASNOW is a run-total accumulation):  
        • Scripts: dakota:/d3/projects/USWRF_ENSHAZ/scripts/: gen_pcp_combine_HRRRx.ksh >& run_pcp_combine_HRRRx.ksh to generate commands and then run ./run_pcp_combine_HRRRx.ksh
        • Example command line: /d3/projects/MET/MET_releases/met-5.1/bin/pcp_combine -subtract /d4/projects/USWRP_ENSHAZ/data/HRRRx/2016012612/20160126_i12_f018_HRRR.grb2 'name="ASNOW"; level="L0";' /d4/projects/USWRP_ENSHAZ/data/HRRRx/2016012612/20160126_i12_f017_HRRR.grb2 'name="ASNOW"; level="L0";' /d3/projects/USWRP_ENSHAZ/metprd/pcp_combine/
        • Output under: /d3/projects/USWRP_ENSHAZ/metprd/pcp_combine
        • Issues:
          • All pcp_combine NetCDF output files for 20160215 and prior are missing "accum_time" information in their output (note the empty accumulation-period field in the wgrib2 record):
            • wgrib2 -d 981 -V 2016021512/20160215_i12_f018_HRRR.grb2
              981:552585289:vt=2016021606:surface::ASNOW Total Snowfall [m]:
          • All output files for 20160216 and later have it:
            • ASNOW_01:accum_time = "010000" ;
            • ASNOW_01:accum_time_sec = 3600 ;
            • wgrib2 -d 1031 -V 2016021612/20160216_i12_f018_HRRR.grb2
              1031:543652098:vt=2016021706:surface:0-18 hour acc fcst:ASNOW Total Snowfall [m]:
          • So, add this to the processing (this was originally done on Yellowstone because ncatted couldn't be found on dakota; it is actually at /usr/local/nco/bin/ncatted):
            • Copied original data to /d3/projects/USWRP_ENSHAZ/metprd/pcp_combine/bad_accum
            • yslogin3:/glade/scratch/jwolff/met> foreach i (201601* 2016020* 2016021[0-5]*) - run for 20160125-20160215 (note: the glob was originally recorded as 201601*, which would miss the February dates)
              foreach? ncatted -O -a level,ASNOW_01,o,c,A1 $i
              foreach? ncatted -O -a accum_time,ASNOW_01,c,c,010000 $i
              foreach? ncatted -O -a accum_time_sec,ASNOW_01,c,l,3600 $i
              foreach? echo $i
              foreach? end
          • This data is in meters! So, 0.0127 m = 12.7 mm = 0.5 in
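For reference, a hedged sketch of how the hourly-difference commands could be generated (the real logic lives in gen_pcp_combine_HRRRx.ksh; the MET path and input naming follow the example command line above, while the output filename convention here is an invention for illustration - the example above shows only the output directory):

```shell
# Illustrative sketch: generate pcp_combine -subtract commands for one
# initialization. Paths follow the example above; output names are assumed.
MET_BIN=/d3/projects/MET/MET_releases/met-5.1/bin
DATA=/d4/projects/USWRP_ENSHAZ/data/HRRRx
OUT=/d3/projects/USWRP_ENSHAZ/metprd/pcp_combine
ymd=20160126; init=12
for fhr in $(seq 1 18); do
    f2=$(printf "f%03d" "$fhr")            # this hour's run-total ASNOW
    f1=$(printf "f%03d" $(( fhr - 1 )))    # previous hour's run-total
    echo "$MET_BIN/pcp_combine -subtract \
$DATA/${ymd}${init}/${ymd}_i${init}_${f2}_HRRR.grb2 'name=\"ASNOW\"; level=\"L0\";' \
$DATA/${ymd}${init}/${ymd}_i${init}_${f1}_HRRR.grb2 'name=\"ASNOW\"; level=\"L0\";' \
$OUT/${ymd}_i${init}_${f2}_ASNOW_01h.nc"
done
```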
      2. Ran MODE:
        • Scripts: dakota:/d3/projects/USWRF_ENSHAZ/scripts/: ./gen_mode_HRRRx.ksh >& run_mode_HRRRx.ksh to generate commands, then ./run_mode_HRRRx.ksh
        • Configuration file: /d3/projects/USWRF_ENSHAZ/scripts/met_config/MODEConfig_HRRRx_ASNOW
          • Original run using: conv_radius = 5; conv_thresh = >=0.002; (Again, data is in m so this is about 0.08")
          • Second run using: conv_radius = 0; conv_thresh = >=0.0127; (so this lines up with the HRRR-TLE probability-of-exceeding-0.5" field plotted in the next step)
          • Third run using: conv_radius = 5; conv_thresh = >=0.0127
          • Fourth run using: conv_radius = 5; conv_thresh = >=0.006 (about 0.24")
        • Example command line: /d3/projects/MET/MET_releases/met-5.1/bin/mode /d3/projects/USWRP_ENSHAZ/metprd/pcp_combine/ /d3/projects/USWRP_ENSHAZ/metprd/pcp_combine/ scripts/met_config/MODEConfig_HRRRx_ASNOW -outdir /d3/projects/USWRP_ENSHAZ/metprd/
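For orientation, a minimal sketch of what the relevant portion of a met-5.1 MODE config might look like for the second run (conv_radius = 0, conv_thresh >=0.0127). The field name ASNOW_01 (the pcp_combine NetCDF variable) and the surrounding structure are assumptions; the actual settings live in MODEConfig_HRRRx_ASNOW:

```text
// Hedged sketch - not the actual MODEConfig_HRRRx_ASNOW contents.
fcst = {
   field = {
      name  = "ASNOW_01";      // hourly snowfall from pcp_combine, in meters
      level = "(*,*)";
   };
   conv_radius = 0;            // grid squares; 5 in the other runs
   conv_thresh = >=0.0127;     // meters (= 0.5 in), matching the HRRR-TLE threshold
};
obs = fcst;                    // same field settings on the "obs" side
```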
  4. Put together an NCL script to composite the objects for further analysis
    1. Plot HRRR-TLE probability of > x" snow field overlaid with 3 HRRRx members that went into product
      1. Script: dakota:/d3/projects/USWRF_ENSHAZ/scripts/run_stamp_hrrrtle_mode_hrrrx.ksh calls stamp_HRRRTLE_modeHRRRx.ncl
        • Plotting from HRRR-TLE: TSRATE_P5_L1_GLC0(0,:,:) - prob >3.5e-06 m s-1 (equivalent to a snow rate >0.5 in/hr)
          • 1:0:d=2016011407:TSRATE:surface:10 hour fcst:prob >3.5e-06:probability forecast
          • 2:5715624:d=2016011407:TSRATE:surface:10 hour fcst:prob >7.1e-06:probability forecast
          • 3:11431248:d=2016011407:TSRATE:surface:10 hour fcst:prob >1.41e-05:probability forecast
          • Probabilities: green = 0-10%, blue = 10-20%, red = 20-30%, black = >30% 
        • Plotting from MODE output: fcst_obj_id
      2. Information on mapping HRRRx members to HRRRTLE product:
        • 10 UTC - uses initializations from 06, 07, and 08 UTC
        • 11 UTC - uses initializations from 07, 08, and 09 UTC
        • 12 UTC - uses initializations from 08, 09, and 10 UTC

Notes from meeting on 5/6 (Slides)

  • There should be six members for each HRRR-TLE product (not three)
  • Everyone agreed that the most representative plots were those with the HRRRx members run through MODE using conv_radius=0 and conv_thresh=0.0127 (0.5", to match the HRRR-TLE product)
  • Moving forward, how about we use the testbed (specifically FFAIR) to test verification approaches?
    • In order to facilitate this, a presentation will be put together to share with the participants regarding how HRRR-TLE is performing when looking at it in a variety of ways
    • Isidora/Trevor will show reliability, skill, etc. for a season that they have run HRRR-TLE for in retrospective mode (from 2015 I believe)
    • NCAR will run a few cases (to be identified by Isidora, and HRRR-TLE data will be provided) through MODE to show how the objects from HRRR-TLE compare to observations (MRMS - QPE/reflectivity?)
      • In addition to showing overall interest value, should show different attributes available in MODE
      • Isidora prefers not to call this verification because it is a probabilistic field against a thresholded observation -> she recommended something like "area of interest detection"
      • Trevor is still not sure that MODE is useful on a case-by-case basis for probabilistic fields; however...
      • Julie and Jamie stressed that a forecaster on the desk for a particular day would remember how the tool helped them forecast a particular event on that day, not necessarily the aggregated skill over a season (where you can start comparing probabilities to frequency of occurrence)
      • I think both Isidora and Trevor agree that it is worth presenting this at FFAIR to see the reaction of the forecasters and to hopefully instill confidence in the product
      • It will be good to have Julie follow up with the forecasters on this approach. Hopefully it intuitively follows how they do "verification" in their heads on a case-by-case basis in real time for probabilistic fields.

