Refinement and Evaluation of Automated High-Resolution Ensemble-Based Hazard Detection Guidance Tools for Transition to NWS Operations
A proposal to U.S. Weather Research Program Research to Operations Transition Proposals
Principal Investigator: Stan Benjamin - NOAA/ESRL/GSD
Co-PIs: David Novak - NOAA/NWS/NCEP/WPC,
Stephen Weygandt - NOAA/ESRL/GSD
Curtis Alexander, Isidora Jankov - CI affiliated with NOAA/ESRL/GSD
Co-Investigators: Geoff DiMego - NOAA/NWS/NCEP/EMC
Josh Hacker, Tara Jensen, Julie Demuth - NCAR/JNT/MMM
Project Duration and Budget: Three years, $750K / $750K /$750K
This proposal describes work to expand a prototype ensemble post-processing system into an automated feature detection system that provides concise hazard guidance information for extreme precipitation and other significant weather events. This work will lead to the creation of forecaster tools that facilitate rapid interrogation of the myriad of available deterministic and ensemble model guidance. Thus, this work addresses FACETs components 2 (guidance), 3 (forecaster), and 4 (tools) to enable probabilistic forecast services. The work will be performed through a collaborative effort involving scientists from ESRL/GSD and NCEP/EMC, who will refine the existing prototype system and create products; scientists from NCAR, who will apply novel feature-based methods to product verification and potentially product generation; social scientists from NCAR, who will evaluate the communication, interpretation, and use of the products; and NWS meteorologists, who will use and evaluate prototype versions of the guidance tools.
The system will build upon a prototype time-lagged ensemble package that was previously developed by ESRL/GSD to provide guidance for thunderstorms and other weather phenomena. Thunderstorm probability grids created from real-time application of this time-lagged HRRR ensemble system are currently being supplied to the NWS Aviation Weather Center (AWC) and the Storm Prediction Center (SPC). In the proposed work, a variety of post-processing approaches will be applied to optimally extract and present ensemble information about many aspects of precipitation, including regions exceeding specific precipitation thresholds, location and evolution of mesoscale precipitation bands (especially heavy snow or rainfall features), precipitation type boundaries, etc. Collaborative work throughout the project will focus on interpretation, use, and evaluation of these products by NWS operational forecasters at national centers and local forecast offices, with feedback to the product developers for further refinement of the products.
The overarching goal of this work is to transition a well-tested system for generation of ensemble post-processed hazard guidance products to operational status within the National Weather Service (product generation within NCEP and dissemination of hazard grids to NWS operational forecasters). A direct outcome of the project will be improved ensemble hazard guidance tools for operational forecasters that will reduce the ensemble information overload problem and enable a more efficient and accurate characterization of forecast uncertainty. Ultimately, the quality and usefulness of the weather guidance information provided by the NWS to the public will increase. Success in this project will also enable follow-up work to significantly expand the scope of ensemble hazard guidance product generation.
1. Description of Proposed Activities
1.1 Motivation and background
The weather ensemble era has benefited operational forecasters, providing them a much greater ability to assess the likelihood of specific hazardous weather phenomena. One downside, however, has been a significant increase in the volume of model guidance that must be synthesized, leading, at times, to information overload that can adversely affect the ability to extract the maximal amount of information from the ensemble guidance. While a variety of ensemble display tools have been developed (mean and spread, spaghetti-style plots, threshold exceedance plots, distribution diagrams, etc.), a strong need still exists for a coordinated research effort involving forecasters, ensemble model developers, and verification and communication experts to create improved ensemble guidance tools that help NWS forecasters communicate actionable forecast uncertainty information to their partners and the public.
As far back as 2008, a comprehensive survey of U.S. NWS operational forecast managers (Novak et al. 2008) identified a crucial need to make the description of uncertainty information in existing or new products and services a collaborative effort between ensemble developers, forecasters, academic partners, and users. One of the key findings of the survey was a desire for an event- or feature-specific verification tool (e.g., Ebert and McBride 2000; Davis et al. 2006) to assess the value of existing ensemble prediction systems, particularly for high-impact events. The forecasters’ answers implied that if such a tool showed value added by ensemble systems over deterministic forecasts for high-impact events, they would rapidly adopt ensemble forecasting systems as their main guidance (Novak et al. 2008). Among operational forecasters participating in a 2011 survey, the highest-ranked issues with ensemble use related to the lack of ensemble tools/graphics (http://itpa.somas.stonybrook.edu/CSTAR/Surveys.html).
We propose a well-coordinated collaboration among groups active in model development (ESRL/GSD, NCEP/EMC), operational forecasting (NWS/WPC) and local WFOs, verification (NCAR/JNT) and weather communication research (NCAR/MMM) to evolve and refine a set of existing ensemble guidance tools into a more comprehensive and effective high-resolution ensemble-based hazard detection guidance tool and transition it to become an NWS operational system. The proposed research focuses on developing and integrating two of the identified priority areas: (1) “improving operational model output guidance” with contributions from ESRL/GSD, NCEP/EMC and NCAR/JNT and (2) “communicating uncertainty”, led by the NCAR/MMM social science team. A recognized critical aspect of this proposed research effort is a strong interactive effort involving the guidance tool developers, the NWS national center and local office operational forecasters, and the verification and social science experts. By involving all of these parties, maximum benefit can be realized through the crucial feedback loop between the product users and developers. As the weather enterprise strives to find optimal ways to convey uncertainty information about a wide variety of specific weather hazards, we believe this research effort can help focus and advance the state-of-the-art in this critical area.
1.2 Objectives and Hazard Detection Guidance Tool Example
A main objective for this project is to expand an existing model-ensemble probabilistic product generation capability into a set of algorithms for creating a variety of different automated model ensemble-based hazard guidance tools that will be of maximum utility to NWS forecasters. The existing capability uses a time-lagged sequence of successive HRRR output grids (Alexander et al. 2014); more details on the algorithm and how it will be adapted in this project are included in the next section. Similar capabilities have been developed within NCEP/EMC, and as part of the collaborative development, elements of this package will be incorporated into the new system. We envision a continuum of automated hazard-specific guidance tools, ranging from pure probabilistic guidance (e.g., probability of rain or snow rate exceeding a certain threshold, probability of freezing rain accumulation exceeding a certain threshold, probability of thunderstorm surface winds exceeding a certain threshold, etc.) to automated application of more geometric feature detection algorithms (e.g., identifying persistent edges of heavy snow bands, identifying tracks of large values of updraft helicity indicative of rotating thunderstorms and a potentially enhanced risk of tornadoes, etc.). We note that the feature identification may be applied first, followed by probability generation for that feature (see updraft helicity example in Fig. 4) or feature detection within a probability field. (Figure 1 provides an example of this approach for the New England blizzard of 27 Jan. 2015.) The storm posed a significant forecast challenge as a very sharp western edge of the heavy snow shield was forecast across the densely populated New York City metropolitan area, with large variation among many different models. As can be seen in Fig. 
1, the 12-h HRRR time-lagged ensemble (HRRR-TLE) probability forecasts of heavy snowfall rates indicated the sharp western edge of the heavy snow and, in turn, correctly predicted the western edge to lie east of New York City.
Fig. 1. Sample heavy snowfall hazard guidance tool for the 27 Jan. 2015 New England blizzard (12-h forecast valid 09 UTC). Top panels show HRRR time-lagged ensemble probability of heavy snow rates (1” and 2” per hour). Dashed line and black ring indicate features that could be automatically detected by the proposed system. Bottom panel shows the features overlaid on radar, indicating an accurate depiction of the western snow band edge.
1.3.1 Creation of Automated Ensemble-based Hazard Detection Guidance Tools
The existing HRRR Convective Probability Forecast (HCPF) algorithms will serve as a starting point for the creation of the new ensemble-based hazard detection guidance tools. The development will also leverage the previous ensemble generation work of the NCEP/EMC group, as we collaborate on best methods for creating these hazard guidance tools using time-lagged ensembles from the 3-km, hourly-updated HRRR model (Alexander et al. 2015), along with input from other models, especially the NAM CONUSnest, and eventually the NAM-RR.
The HRRR uses a specially configured version of the Advanced Research WRF (ARW) model and assimilates many novel and most conventional observation types on an hourly basis using the Gridpoint Statistical Interpolation (GSI) data assimilation system. Included in this assimilation is a procedure for initializing ongoing precipitation systems from observed radar reflectivity data, a cloud analysis to initialize stable layer clouds from METAR and satellite observations, and special techniques to enhance retention of surface observation information. Extensive subjective and objective evaluation of this WRF-ARW configuration has indicated it does especially well at accurately reproducing observed storm-structure, a key attribute for extracting useful storm-scale hazard detection features.
The HRRR provides the backbone for interrogating storm-scale to mesoscale features that can evolve on small spatial and temporal scales, such as convection and winter precipitation bands. To this end, it is possible to construct a forecast ensemble-of-opportunity using consecutive valid-time-aligned HRRR model runs in a time-lagged ensemble (TLE). The TLE can be formulated as a union of time-lagged deterministic members and probabilities that serve as a proxy for the likelihood of a particular feature as measured through this run-to-run continuity. Since June 2009, ESRL/GSD has produced a real-time, experimental probabilistic thunderstorm guidance product, known as the HCPF (Weygandt et al. 2015), using time-lagged HRRR forecasts. Identification of forecasted phenomena, such as deep moist convection (thunderstorms), is achieved through threshold exceedance of both explicitly forecasted storm structures, including composite reflectivity or associated upward vertical motion, and mesoscale environmental characteristics such as convective available potential energy (CAPE). To account for both temporal and spatial phase errors inherent in all convection-allowing model forecasts, time and space filters are applied to all forecasts, thereby helping identify features within a particular radius of a gridpoint in both space and time. The filter size is selected to match the characteristic accuracy (predictive scale) of the phenomena being forecasted. Figure 2 shows an HCPF forecast (left panel, with associated overlay of highest reflectivity features from the three input HRRR forecasts) and a schematic depicting the spatial and temporal filtering in the time-lagged ensemble procedure (right panel).
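To make the neighborhood filtering concrete, the space-filtered exceedance-probability step can be sketched as below. This is an illustration only, not the operational HCPF code; the function name, the square-window approximation of the search radius, and the 3-km grid spacing are assumptions, and the temporal filter is assumed to have already selected the valid-time-aligned member slices.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def tle_probability(member_fields, threshold, radius_km, grid_km=3.0):
    """Fraction of time-lagged members exceeding `threshold` within
    `radius_km` of each gridpoint.

    member_fields: list of 2-D arrays, one per time-lagged run, already
                   aligned to a common valid time.
    """
    # square window approximating the search radius (in gridpoints)
    size = 2 * int(round(radius_km / grid_km)) + 1
    hits = []
    for field in member_fields:
        exceed = (field >= threshold).astype(float)
        # spatial filter: an exceedance anywhere in the window counts
        hits.append(maximum_filter(exceed, size=size))
    # probability = fraction of members with a nearby "hit"
    return np.mean(hits, axis=0)
```

For the updraft-helicity example in Fig. 2, this would be called with the 45-km radius and the 25 m² s⁻² threshold, one field per consecutive HRRR run.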
Fig. 2. Left panel -- sample plot depicting an HRRR Convective Probability Forecast (HCPF) 6-h forecast thunderstorm probability (grey shading) with 6-h HRRR deterministic reflectivities from consecutive HRRR runs (which form the time-lagged ensemble) overlaid (colors). Right panel -- schematic of the spatial and temporal filtering used to capture consistently forecasted features, such as rotating thunderstorms, in three consecutive HRRR model runs, using a spatial radius of 45 km and a temporal radius of 1 hr to formulate the probability of thunderstorm updraft helicity exceeding 25 m² s⁻²; valid 22 UTC 27 April 2011 in the southeastern US.
Additional statistical post-processing can be applied to the forecast fields, including diurnal, regional and lead-time varying detection thresholds, to produce a more optimal and uniform forecast frequency bias. Finally, use of regression procedures to obtain non-constant weighting factors for different time-lagged ensemble members can produce statistically reliable forecast probabilities while retaining forecast resolution and sharpness. The coefficients for the regression calculation are derived from a training period in which both uncorrected HRRR forecasts and the verification analyses are available from fields such as those contained in the Multi-Radar Multi-Sensor (MRMS) products. We note that this bias correction step will be a crucial part of the work to optimize the hazard guidance tools created as part of this research.
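The regression step that assigns non-constant weights to the time-lagged members can be illustrated with a small sketch. This is a simplified stand-in for the actual training procedure: per-member exceedance values are assumed as predictors, the observed 0/1 event occurrence (e.g., derived from MRMS analyses) as the target, and a logistic regression is fit by plain gradient descent.

```python
import numpy as np

def fit_member_weights(member_preds, observed, iters=2000, lr=0.5):
    """Logistic regression giving each time-lagged member its own weight.

    member_preds: (n_samples, n_members) per-member exceedance values
                  from a training period.
    observed:     (n_samples,) 0/1 event occurrence (e.g., from MRMS).
    Returns (weights, bias); the calibrated probability is then
    sigmoid(member_preds @ weights + bias).
    """
    n, m = member_preds.shape
    w = np.zeros(m)
    b = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(member_preds @ w + b)))
        # gradient of the log-loss with respect to weights and bias
        w -= lr * (member_preds.T @ (p - observed)) / n
        b -= lr * np.mean(p - observed)
    return w, b
```

In this sketch, more recent (or more skillful) members would naturally receive larger weights from the training data, yielding reliable probabilities without flattening the forecast sharpness.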
We propose to generalize the approach developed for the HCPF and expand it to use an ensemble of HRRR time-lagged ensemble members, along with the EMC high-resolution CONUS nest (NAM CONUSnest) and the North American Mesoscale Rapid Refresh (NAM-RR). This approach would be applied to the identification of consistently forecasted features such as heavy precipitation and snow bands, along with other hazards, including lightning, high winds, hail and tornado potential, and eventually severe, aviation, and energy-related weather parameters.
The hourly HRRR run forecast length will be extended to 24 hrs in both the experimental and operational HRRRs in 2015, allowing for larger time-lagged membership, particularly within the first 18 hrs. To complement the experimental HRRR membership, especially at the longer time horizons, the operational NAM CONUSnest and (eventually) NAM-RR forecasts will be included to produce hourly-updated feature identifications for 0-24 hrs at ESRL/GSD using a NOAA research and development high performance computer system where the experimental HRRR is currently run. These hazard guidance grids will be distributed in real-time (with less than 1 hr latency) from ESRL/GSD via existing FTP and LDM feeds in the form of 3-km CONUS GRIB2 grids suitable for visualization in AWIPS/N-AWIPS and will also be displayed on an ESRL/GSD website. These hazard guidance tools will contain both deterministic and probabilistic gridded fields such as the example shown in Fig 3, including the union of deterministic thunderstorm rotation tracks from four consecutive HRRR runs (at left) and the resulting tornado potential probability field valid at 22 UTC 10 May 2010 for the Southern Great Plains (at right).
Fig. 3. Example overlay of four consecutive color-coded HRRR runs (left) from 13-16 UTC 10 May 2010 showing the 0-12 hr forecast tracks of rotating thunderstorms (updraft helicity) along with a 7-hr snapshot of the tornado potential probability field (right) valid at 22 UTC 10 May 2010, including tornado reports (red dots).
1.3.2 WPC and WFO Use and Evaluation of Prototype Hazard Detection Guidance Tools
An essential element of this research will be an active program of use and evaluation of the hazard-detection guidance tools in NWS operational settings, both at national centers during testbed periods and at Weather Service Forecast Offices. The Weather Prediction Center (WPC) cold season (WWE - Winter Weather Experiment) and warm season (FFaIR - Flash Flood and Intense Rainfall) testbed periods will be focus periods for this proposal; coordinated evaluations of the use and value of the hazard guidance tools will begin with the Jan. 2016 WWE program. Participation by the product developers, verification experts, and social science team members will be critical to rapidly improve the quality of the guidance tools, and is specifically included in the research time-line. An active evaluation program is also planned with NWS local WFOs, with NCAR social scientists slated to visit two offices to collect data on use of the hazard detection guidance (experimental grids will be provided through the NWS regions). As severe weather and aviation hazard guidance tools evolve, we also envision evaluation during both SPC and AWC testbed periods (the development and verification groups have significant previous participation experience with these testbeds and NCAR has experience working with WFOs).
1.3.3 Product Evaluation -- MET/MODE
The Joint Numerical Testbed Program (JNTP) at the National Center for Atmospheric Research (NCAR) has developed, and provides support for, a widely used toolkit for objective evaluation of Numerical Weather Prediction (NWP) and other types of forecasts. This toolkit – called the Model Evaluation Tools (MET) – was developed with support from the Developmental Testbed Center (DTC) and is freely available for use by researchers, model developers, and forecasters. The MET tools include traditional methods for evaluating model-based predictions (e.g., root-mean-squared error, bias) as well as modern methods for evaluation of ensemble and spatial predictions.
One tool within the MET package that is particularly well-suited for evaluating hazardous weather features (especially heavy precipitation bands) is the Method for Object-based Diagnostic Evaluation (MODE). It provides a feature-based evaluation of spatial fields (Davis et al. 2009; Wolff et al. 2014). MODE has been applied to a variety of NWP fields during the HWT 2009-2011 Spring Forecast Experiments and HMT-West 2010-2012 Experiments, including accumulated precipitation, probability of precipitation, radar reflectivity, integrated water vapor and radar echo top heights. MODE has also been used to explore an ensemble of MODE objects and related attributes; one recent investigation examined the interpretation of an ensemble of MODE objects and attributes diagnosed from individual members (Jensen et al., 2014). By evaluating the individual members and compiling the characteristics of all MODE objects identified for each member, it is evident that a simple ensemble mean often is not the best way to provide a single-value solution. Figure 4 (left) shows an example of an ensemble of MODE objects utilized by the DTC for testing. In this figure the thick black lines represent the ensemble means (calculated for only three of five of the areas of observed precipitation). The ensemble mean would not have alerted the user to the possibility of rain over the Upper Peninsula of Michigan or off the coast of Georgia and South Carolina. This method of evaluating ensembles will be explored further during the project. MODE was also recently extended to include the time dimension. An additional package called MODE-Time Domain (MODE-TD) is scheduled for inclusion in the next MET release during summer of 2015. This tool allows the user to define three-dimensional objects using the X-Y and time fields. MODE attributes, such as centroid location, angle, and convex hull can then be derived. 
When 3-D objects from two fields are paired, differences in the models, including timing errors, may be investigated. Two DTC visitor projects led by PI Clark focused on the application of MODE and MODE-TD to the NSSL Ensemble and updraft helicity field (Clark et al., 2014). Figure 4 (right) provides an example of a MODE-TD representation of the evolution of forecasted and observed precipitation fields over time.
Fig. 4. Left: Example use of MODE to diagnose objects from a precipitation field from seven members of an ensemble. The matched ensemble mean is shown as a thick black line; matched forecast objects are green, matched observed objects red, unmatched forecast objects gray, and unmatched observed objects blue. Right: Example of MODE-TD. Precipitation objects are tracked through time for forecast and observed fields. Color coding shows movement from west (blue) to east (red).
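The core MODE step of resolving a field into objects can be sketched as follows. This is a schematic stand-in for MODE itself (which additionally applies convolution smoothing and fuzzy-logic attribute matching); the function name, threshold, and minimum-area parameter are chosen purely for illustration.

```python
import numpy as np
from scipy.ndimage import label, center_of_mass

def precip_objects(field, threshold, min_area=4):
    """MODE-like object identification: threshold the field, label
    connected regions, and return per-object area and centroid."""
    labeled, n = label(field >= threshold)
    objects = []
    for k in range(1, n + 1):
        mask = labeled == k
        area = int(mask.sum())
        if area >= min_area:  # discard tiny regions, as MODE does
            objects.append({"area": area,
                            "centroid": center_of_mass(mask)})
    return objects
```

Applying this to forecast and observed fields yields object pairs whose centroid distances and area ratios can be compared, the same attributes used in the performance metrics below.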
1.3.4 Evaluating and Improving Communication of Uncertainty with WPC and WFO Forecasters
NCAR/MMM scientists with expertise in weather information communication, interpretation, and use will lead a complementary research-to-operations effort with NWS forecasters to enhance the effectiveness of the probabilistic forecast information being developed by the NOAA/GSD and NCAR/JNT teams. This social science research will focus on assessing WPC and WFO forecasters’ conceptualizations, needs, and interpretations regarding probabilistic precipitation guidance and the forecasters’ communication of the resulting information to their primary user groups. We will examine these issues for different precipitation features (e.g., heavy rain, snow bands) and lead times, and for intra-NWS communications (e.g., WPC-WFO) and NWS communication with key partners and information users.
The social sciences data collection and analysis will be conducted through an iterative process in collaboration with the NOAA/GSD and NCAR/JNT teams and NWS throughout the study. This work will build on previous related work (e.g., NRC 2006; Morss and Ralph 2007; Morss et al. 2005, 2007, 2008, 2015; Demuth et al. 2012, 2013; Novak et al. 2008; Novak and Colle 2012) with four major goals: 1) understand WPC and WFO forecasters’ critical needs for operational model output guidance when assessing and conveying precipitation forecast uncertainty; 2) identify critical gaps in WPC and WFO uncertainty-related communication of actionable precipitation forecast information within the NWS and to non-NWS partners and users; 3) use this knowledge to help the NOAA/GSD and NCAR/JNT project partners develop presentations of the model guidance tools that are readily useable by and useful to NWS forecasters; and 4) use this knowledge to help NWS (including WPC and WFOs) improve communication of actionable precipitation forecast uncertainty information to partners and users.
This work will include in-person observations of and interviews with forecasters (at WPC and WFOs) to observe how they use different types of operational model guidance and other information in interpreting forecast uncertainty and developing forecast products, and how they communicate about uncertainty with different audiences. These data will be collected at multiple locations and times, including as part of the annual flash flood and winter weather experiments. In between these periods of focused in-person data collection, data will also be collected by observing the discussions in NWS coordination calls and other intra-NWS and NWS-partner interactions as major precipitation events approach. The observation and interview data collection protocols will be drawn from related past research conducted by Morss and Demuth with forecasters at WPC and other national centers and at WFOs (Morss and Ralph 2007, Demuth et al. 2012, Morss et al. 2015). These data will be analyzed and discussed with project co-investigators and participants throughout the study to help guide and refine the development, provision, and verification of the probabilistic precipitation guidance by NOAA/GSD and NCAR/JNT, and to help WPC and WFOs increase the effectiveness of their forecast and warnings given how their partners and users conceptualize, interpret, and use forecast uncertainty information. Human subjects approval to conduct this research with the NWS forecasters will be obtained through NCAR.
1.4 Alignment with AO priorities, Technology Readiness Level and Transition to Operations, NWP and OAR Endorsements, Potential Commercialization
The proposed research is strongly focused on the first two priorities in the USWRP AO: improving operational model output guidance and communicating uncertainty. As noted above, the proposal includes a significant social science component, with a strong team that will be actively engaged in the critical evaluation and feedback portion of the product refinement cycle.
The prototype system has a Technology Readiness Level (TRL) of 6 (prototype demonstration, full-scale realistic engineering feasibility demonstrated in actual application). It has been running in real time and producing grids that have been delivered to the Aviation Weather Center (AWC) and the Storm Prediction Center (SPC) for the last five years. Over the next three years the TRL will advance to level 9 by expanding the method to include additional input models, developing more products tailored to forecasters’ needs, and transitioning the final product to operations. A specific transition plan in accordance with NAO 216-105 is not included with this proposal, but will be completed and submitted in anticipation of possible funding. The project has a clear and achievable goal of transitioning the guidance tool products to the NCEP operational suite of products within the three-year period of the proposed work. Two project teams (NCEP/EMC, ESRL/GSD) have extensive experience working with NCEP Central Operations (NCO) on transitioning models to operations, which should greatly facilitate this transition. Notifications and copies of the proposal have been sent to the relevant NWS and OAR personnel, and memoranda of certification will be sent separately to the OAR Office of Weather and Air Quality by March 25, 2015. A description of potential commercial applications is not included, as this project is specifically focused on transitioning a capability to NWS operations. The project is well aligned with goal 1 from the 2011 NWS Strategic Plan – improve weather decision services for events that threaten lives and livelihood. The proposal addresses a key decision guidance gap expressed by NWS forecasters for hazard detection (e.g., heavy snow bands, intense rainfall areas, severe and aviation weather risks, etc.) and brings together a strong team from OAR, EMC, NCAR, and WPC with strong, relevant experience to fill that gap.
Ultimately, this work addresses FACETs components 2 (guidance), 3 (the forecaster), and 4 (tools) to enable probabilistic forecast services. Furthermore, specific points of collaboration with FACETs have been identified. Early in the project we will establish a real-time feed of HCPF probabilistic fields for visualization using the Probabilistic Hazard Information (PHI) tool. As the project advances, the evaluation of improved products will include human factors evaluation to create the best tool for forecasters. This analysis will be performed collaboratively by NCAR and FACETs social scientists.
1.5 Performance Metrics
- Product improvement will be evaluated using specific probabilistic forecast measures such as rank histograms, CRPSS, and reliability for selected variables and thresholds. The current version of the product will serve as the baseline.
- MODE will be used to assess the probabilistic guidance for events ranging from common to extreme by forming objects at several different user-relevant thresholds. Decreased centroid distance, area ratios closer to 1, and an examination of the distributions of intensities within the MODE objects will be used to measure success. Additionally, a value closer to 1 for the performance measure called the Median of Maximum Interest (MMI) will be used. The first year will serve as the benchmark.
- The measure of success for the social science research is the provision of research-based guidance that aids the development by NOAA/ESRL and NCAR/RAL of effective ensemble-based probabilistic precipitation forecasts, where "effective" means that forecasters can and do use the forecasts to provide desired forecast information to their users. Success will be demonstrated through analysis of data from the interviews and observations with forecasters.
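As an illustration of the probabilistic measures listed above, a common ensemble CRPS estimator and the corresponding skill score against a baseline can be sketched as follows (function names are for illustration only):

```python
import numpy as np

def ensemble_crps(members, obs):
    """CRPS for one ensemble forecast and a scalar observation,
    using the standard estimator E|X - y| - 0.5 E|X - X'|."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

def crpss(crps_forecast, crps_baseline):
    """Skill score relative to a baseline (here, the current product
    version); 1 is perfect, 0 is no improvement over the baseline."""
    return 1.0 - crps_forecast / crps_baseline
```

A perfect deterministic ensemble (all members equal to the observation) scores CRPS of 0; averaging over many cases and comparing against the baseline product yields the CRPSS used as a project metric.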
2. Research Timeline and Deliverables showing contributions from each organization
Year 1 (May 1, 2015 - April 30, 2016)
● Coordination meeting among participating organizations to establish roles, responsibilities, reporting, etc. (GSD, EMC, NCAR, WPC, WFOs)
● Meetings and discussion on product generation to identify specific features and formats of interest to forecasters (GSD, EMC, WPC, NCAR)
● Meetings and discussion to identify necessary variables, metrics for evaluation, WFOs to include in study, interview protocol (NCAR lead)
● Establish SVN code repository for initial version of hazard generation software (GSD)
● Produce initial set of test products with time-lagged ensemble HRRR (GSD)
● Modify code to include additional input datasets (EMC NAM CONUSnest) (GSD, EMC)
● Modify code to include additional predictors/criteria for initial set of winter products (snow bands, precipitation type, etc.), targeting WPC WWE, Jan. 2016 (GSD, EMC)
● Transfer grib2 format grids to WPC and NWS forecast offices for forecaster use and to NCAR for MODE verification (GSD, WPC, NCAR)
● Develop preliminary verification system on NOAA supercomputer (e.g. Zeus or Theia) for 1-2 ensemble products and initial operating capability for Ensemble-MODE evaluation of snowbands (NCAR)
● Visit WPC to observe use of uncertainty in forecast process and interview forecasters (NCAR, WPC)
● Prepare and execute evaluation of initial set of cold season products during the 2016 WPC Winter Weather Experiment (WPC ~ 4 mos. effort)
● Participation in WPC WWE, Jan/Feb 2016 (GSD, NCAR, EMC)
● Meeting to discuss initial feedback from WPC WWE, obtain recommendations for refinements and enhancements to winter weather hazard guidance (GSD, EMC, NCAR, WPC, WFOs)
● Demonstrate initial MODE capabilities and explore use of MODE-TD on variables relevant to snowband prediction (NCAR)
● Visit WFO, observe use of uncertainty in forecast process, interview forecasters (NCAR)
● Identify enhancements needed to verification system per user feedback (NCAR)
● Provide preliminary guidance on communication of uncertainty based on forecaster interviews (NCAR)
Year 2 (May 1, 2016 - April 30, 2017)
● Participation in WPC FFaIR, summer 2016 (GSD, NCAR, EMC)
● Creation of refined warm season hazard detection products, with initial focus on heavy precipitation, but also including severe weather hazards (GSD, EMC).
● Testing and refinement of warm season hazard detection products (GSD, EMC)
● Add NAM-RR model output to the hazard detection product package, pending NAM-RR operational implementation.
● Enhance MET system to extend capability to rainbands and intense rain swaths (NCAR)
● Transfer grib2 format warm season hazard detection grids to WPC and NWS forecast offices for forecaster use and to NCAR for MODE verification (GSD, WPC, NCAR)
● Perform R2O work, now including active engagement with WFOs and GRIB2 data displays in AWIPS II including evaluation of initial set of warm season intense precipitation products (WPC ~ 4 mos. effort)
● Visit WPC and demonstrate enhanced verification capability (NCAR)
● Coordination meeting to refine planning for enhancements to cold season hazard detection products and plan for transition to operations (GSD, EMC, NCAR, WPC, WFOs)
● Perform additional R2O work, continuing active engagement with WFOs and GRIB2 data displays in AWIPS II (WPC ~ 2 mos. effort)
● Participation in WPC WWE, Jan/Feb 2017 (GSD, NCAR, EMC)
● Second WFO visit to conduct additional forecaster interviews (NCAR)
● Integrate additions to MODE and MODE-TD into MET repository, identify final enhancements needed for verification system per user feedback (NCAR)
● Provide additional guidance on communication of uncertainty based on forecaster interviews (NCAR)
● Preliminary testing on NCEP computers of the prototype hazard detection system using input from the HRRR and NAM CONUSnest / NAM-RR (GSD, EMC)
● Coordination meeting to refine plan for enhancements to warm season hazard detection products, further planning for transition to operations (GSD, EMC, NCAR, WPC)
Year 3 (May 1, 2017 - April 30, 2018)
● WPC support, consisting primarily of transition work (testing and feedback on prototype systems), with active engagement with WFOs in AWIPS II (WPC ~ 6 mos. effort)
● Participation in WPC FFaIR, summer 2017, focus on evaluation of pre-implementation version of warm season hazard detection guidance tools (GSD, NCAR, EMC)
● With feedback from WPC and NWS forecasters and NCAR colleagues, complete additional refinement of the hazard detection system for initial operational implementation (GSD, EMC)
● Continue transfer of GRIB2-format grids to WPC and other national forecast centers and NWS forecast offices for forecaster use and to NCAR for MODE verification (GSD, WPC, NCAR)
● Participation in WPC WWE, Jan/Feb 2018, focus on evaluation of pre-implementation version of cold season prototype hazard detection guidance tools (GSD, NCAR, EMC)
● Coordinate and work with NCO on transition of the initial hazard detection system to NCEP operations (EMC, GSD)
● Visit WPC to conduct final interviews, provide final recommendations on communicating uncertainty for precipitation features to WPC and WFOs, and transition the verification system to WPC with user support (NCAR)
● Subject to NWS approval, implement the initial hazard detection system as an operational NCEP product, providing NWS forecasters at national centers and regional offices with automated high-resolution model ensemble-based hazard guidance tools.
Year 1 Costs:
● ESRL/GSD - total: $275K. Loaded federal labor (0.5 mo. Branch Chief, 0.5 mo. Project Manager; subtotal: $39K), loaded contract labor (12 mos. research scientist; subtotal: $228K), travel: $6K, publications: $2K
● NCEP/EMC - total: $175K. Loaded contract labor (12 mos. research scientist, $175K)
● NCAR - total: $250K. Loaded labor (part-time Associate Scientists, Project Manager, RAL Program Director, Scientist, and Software Engineers: $227.5K), travel (five multi-day visits to WPC and a WFO: $13K), hardware (1 laptop: $5.5K), purchased services (transcription: $4K)
● NWS/WPC - total: $50K. Loaded labor for 4 mos.
Year 2 Costs:
● ESRL/GSD - total: $275K. Loaded federal labor (0.5 mo. Branch Chief, 0.5 mo. Project Manager; subtotal: $39K), loaded contract labor (12 mos. research scientist; subtotal: $228K), travel: $6K, publications: $2K
● NCEP/EMC - total: $150K. Loaded contract labor (10 mos. research scientist, $150K)
● NCAR - total: $250K. Loaded labor (part-time Associate Scientists, Project Manager, RAL Program Director, Scientist, and Software Engineers: $233.5K), travel (three multi-day visits to WPC, two conference trips, and a WFO visit: $12.5K), purchased services (transcription: $4K)
● NWS/WPC - total: $75K. Loaded labor for 6 mos.
Year 3 Costs:
● ESRL/GSD - total: $250K. Loaded federal labor (0.2 mo. Branch Chief, 0.2 mo. Project Manager; subtotal: $18K), loaded contract labor (12 mos. research scientist; subtotal: $227K), travel: $3K, publications: $2K
● NCEP/EMC - total: $175K. Loaded contract labor (12 mos. research scientist, $175K)
● NCAR - total: $250K. Loaded labor (part-time Associate Scientists, Project Manager, RAL Program Director, Scientist, and Software Engineers: $232K), travel (four multi-day visits to WPC and two conference trips: $13.5K), publications (one: $4.5K)
● NWS/WPC - total: $75K. Loaded labor for 6 mos.
References
Alexander, C. R., G. Manikin, S. Benjamin, S. S. Weygandt, G. DiMego, M. Hu, and T. G. Smirnova, 2015: The High-Resolution Rapid Refresh (HRRR): The operational implementation. Annual AMS meeting, Phoenix, Arizona.
Alexander, C. R., S. G. Benjamin, S. S. Weygandt, D. C. Dowell, and E. P. James, 2014: Time-lagged 3-km ensemble High-Resolution Rapid Refresh (HRRR) forecasts for key convective storm, fire weather and wind energy events in 2013. Annual AMS meeting, Atlanta, Georgia.
Clark, A. J., R. G. Bullock, T. L. Jensen, M. Xue, and F. Kong, 2014: Application of object-based time-domain diagnostics for tracking precipitation systems in convection-allowing models. Wea. Forecasting, 29, 517-542.
Davis, C. A., B. G. Brown, R. G. Bullock, and J. Halley Gotway, 2009: The Method for Object-based Diagnostic Evaluation (MODE) applied to numerical forecasts from the 2005 NSSL/SPC Spring Program. Wea. Forecasting, 24, 1252-1267.
Demuth, J. L., R. E. Morss, B. H. Morrow, and J. K. Lazo, 2012: Creation and communication of hurricane forecast information. Bull. Amer. Meteor. Soc., 93, 1133-1145.
Demuth, J. L., R. E. Morss, J. K. Lazo, and D. C. Hilderbrand, 2013: Improving communication effectiveness of weather risk information on the NWS point-and-click webpage. Wea. Forecasting, 28, 711-726.
Ebert, E. E., and J. L. McBride, 2000: Verification of precipitation in weather systems: Determination of systematic errors. J. Hydrol., 239, 179-202.
Fowler, T. L., T. Jensen, E. I. Tollerud, J. Halley Gotway, P. Oldenburg, and R. Bullock, 2010: New Model Evaluation Tools (MET) software capabilities for QPF verification. Preprints, 3rd Intl. Conf. on QPE, QPF and Hydrology, Nanjing, China, 18-22.
Jensen, T., B. Brown, J. Halley Gotway, T. Fowler, and R. Bullock, cited 2014: Understanding individual and ensemble behavior using MODE. 6th WMO Verification Workshop, New Delhi, India. [Available online at http://cawcr.gov.au/events/verif2014/]
Morss, R. E., J. L. Demuth, A. Bostrom, J. K. Lazo, and H. Lazrus, 2015: Flash flood risks and warning decisions: A mental models study of forecasters, public officials, and media broadcasters in Boulder, Colorado. Risk Analysis, in review.
Morss, R. E., J. L. Demuth, and J. K. Lazo, 2008: Communicating uncertainty in weather forecasts: A survey of the U.S. public. Wea. Forecasting, 23, 974-991.
Morss, R. E., and F. M. Ralph, 2007: Use of information by National Weather Service forecasters and emergency managers during CALJET and PACJET-2001. Wea. Forecasting, 22, 539-555.
Morss, R. E., O. V. Wilhelmi, M. W. Downton, and E. Gruntfest, 2005: Flood risk, uncertainty, and scientific information for decision-making: Lessons from an interdisciplinary project. Bull. Amer. Meteor. Soc., 86, 1593-1601.
Novak, D. R., D. R. Bright, and M. J. Brennan, 2008: Operational forecaster uncertainty needs and future roles. Wea. Forecasting, 23, 1069-1084.
Novak, D. R., and B. A. Colle, 2012: Diagnosing snowband predictability using a multimodel ensemble system. Wea. Forecasting, 27, 565-585.
NRC (National Research Council), 2006: Completing the Forecast: Characterizing and Communicating Uncertainty for Better Decisions Using Weather and Climate Forecasts. National Academies Press, Washington, DC.
Weygandt, S. S., C. R. Alexander, J. A. Hamilton, S. Benjamin, E. P. James, T. G. Smirnova, M. Hu, and I. Jankov, 2015: Generation of ensemble-based hazardous weather guidance products from rapidly updating models: The HRRR Convective Probabilistic Forecast (HCPF) and related post-processing work. Annual AMS meeting, Phoenix, Arizona.
Wolff, J. K., M. Harrold, T. Fowler, J. Halley Gotway, L. Nance, and B. G. Brown, 2014: Beyond the basics: Evaluating model-based precipitation forecasts using traditional, spatial and object-based methods. Wea. Forecasting, 29, 1451-1472.