OBS:

Ingest:

  • working on land converters for snow and SMAP, consolidating them and ensuring conventions and units are appropriate
  • finishing a converter for the WSF-M (Weather System Follow-on Microwave) satellite with NRL
  • also with NRL, we have a new contributor and have added conventions to ObsSpace.yaml for TEC and ECEF coordinates
  • finished adding error estimates and quality flags from OceanSurfaceWind products into IODA files for CYGNSS and Spire data


UFO:

  • merged lightning flash extent density operator
  • merged the last PR for the EUMETSAT ROM SAF project (thank you, Met Office and Neill Bowler) adding a Météo-France-style super-refraction check
  • merged a new ObsFunction from a Met Office PR by Chawn Harlow to multiply and/or divide ObsDataVectors
  • merged the VarBC sprint. We realized such a merge needs to be streamlined in the future (code, data, R2D2, etc.) and noticed that the current CI tests might not catch all potential issues, so please report any issues you encounter; we'll provide as much support as we can.
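
Conceptually, the multiply/divide ObsFunction combines two ObsDataVectors elementwise while propagating missing values. A minimal sketch of the idea (the function name and missing-value sentinel are illustrative, not UFO's actual implementation):

```python
MISSING = -3.3687953e38  # illustrative sentinel; UFO defines its own missing value

def combine(a, b, op="multiply"):
    """Elementwise multiply or divide two obs vectors, propagating missing values."""
    out = []
    for x, y in zip(a, b):
        if x == MISSING or y == MISSING or (op == "divide" and y == 0.0):
            out.append(MISSING)  # missing input (or divide-by-zero) stays missing
        elif op == "multiply":
            out.append(x * y)
        else:
            out.append(x / y)
    return out

print(combine([2.0, 3.0, MISSING], [4.0, 0.5, 1.0]))   # multiply, missing propagates
print(combine([2.0, 3.0], [4.0, 0.0], op="divide"))    # divide, zero guarded
```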

JEDI:

ALGO:

HTLM Christian:

    • Maryam and Christian have been testing the HTLM extensively and are seeing consistent and significant reductions in linearization error even with relatively low ensemble sizes; performance is very similar with ensemble sizes of 30 and 15, for example. In some cases we even see additional reductions in linearization error for variables we aren't updating. These results will also be presented at the Adjoint Workshop next week.
    • Adding new configuration options to the HTLM, specifically the ability to read in only the ensemble states at update times while still using a full nonlinear model for the nonlinear control component. This could allow the reuse of a single ensemble for multiple outer loops, or, if the ensemble is large, the use of a different subset of the ensemble at each outer loop.
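
For reference, "linearization error" here is the mismatch between a nonlinear difference and its tangent-linear prediction. A toy sketch of the metric (the scalar model below is purely illustrative, not the HTLM itself):

```python
def linearization_error(M, TL, x, dx):
    """Relative error of the tangent linear against the nonlinear difference."""
    nl_diff = M(x + dx) - M(x)
    return abs(nl_diff - TL(x, dx)) / abs(nl_diff)

M = lambda x: x * x              # toy nonlinear model
TL = lambda x, dx: 2.0 * x * dx  # its tangent linear at x

print(linearization_error(M, TL, 1.0, 0.01))  # small perturbation -> small error
print(linearization_error(M, TL, 1.0, 0.5))   # larger perturbation -> larger error
```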

Model naming convention Steve V:

    • Discussing imposing metadata on model variables and checking that variables conform to the model naming convention
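
A minimal sketch of what such a check might look like (the convention list and function name are hypothetical, not the actual convention):

```python
# Hypothetical subset of an approved naming convention; the real list would
# come from the model naming convention document.
CONVENTION = {"air_temperature", "eastward_wind", "northward_wind", "specific_humidity"}

def check_naming(varnames):
    """Return the variables that do not conform to the naming convention."""
    return [v for v in varnames if v not in CONVENTION]

print(check_naming(["air_temperature", "T"]))  # non-conforming short name flagged
```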

GEOS Clementine:

    • Together with INFRA and GMAO: good progress towards running GEOS-JEDI with spack-stack

B matrix:

    • Nate: finalized changes to the B training suite and is looking into the use of spectral covariances
    • Several optimizations to BUMP from Benjamin Menetrier

Snow DA:

      • Tested explicit diffusion operator for snow DA covariances.

INFRA:

  • Working with ALGO on JEDI-GEOS interface and on updating FV3 dycore and FMS for JEDI-UFS interface

  • spack-stack

  • IODA (Note: I gave the update here but actually Anna should have updated on ioda)

  • Data ingest and EWOK

  • CI:

    • Added rttov, oasim, and ropp-ufo to CI tests

    • Faster churn on features is leading to instability; report breakages on jedi-infra-support or as an issue.

    • Coming up: CI will be disabled for several hours for an internal migration; this will be announced ahead of time.

Model Interface:

- Improving the robustness of interpolators for the structured grids used in SABER

CRTM: 

(1) Ben is working on upgrading the instrument coefficient generation package, and training in-kind contributors on this matter.
(2) Cheng is developing the optical profile generation package for the CRTM optical generic interface.
(3) The CRTM team is working on several issues reported by collaborators and JCSDA core members regarding CRTM v3.1.0/v2.4.0, including (a) the JEDI/UFO interface for the CMAQ aerosol LUT and (b) CRTM calculations for aircraft-based sensors.

SOCA:

We moved the SOCA-specific diffusion SABER central block into generic code in oops/saber; PRs are open for that. It has been tested with FV3 and SOCA and seems to be working well. It is NOT working for ROMS and MPAS due to an issue with atlas mesh generation. When finished, this will be a good alternative to BUMP_NICAS for users with very small length scales.
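
As background, an explicit diffusion correlation operator models correlations by repeatedly applying a discrete diffusion step, so a unit spike spreads into a smooth, Gaussian-like correlation function whose length scale grows with the number of iterations. A minimal 1D periodic sketch of that idea (illustrative only, not the SABER code):

```python
def diffuse(field, niter, kappa=0.2):
    """Apply niter explicit diffusion steps on a 1D periodic grid (kappa <= 0.5 for stability)."""
    n = len(field)
    f = list(field)
    for _ in range(niter):
        f = [f[i] + kappa * (f[(i - 1) % n] - 2.0 * f[i] + f[(i + 1) % n])
             for i in range(n)]
    return f

spike = [0.0] * 21
spike[10] = 1.0
out = diffuse(spike, 50)
# The spike spreads symmetrically; total "mass" is conserved on a periodic grid.
print(max(out), abs(sum(out) - 1.0) < 1e-9)
```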

NOAA: 

GFS-17:

  • Testing Travis’s diffusion correlation operator for aerosols
  • Real-time conventional observations for marine DA.
  • SMAP soil moisture IODA converters

Regional (RRFS V2)

  • Continuing to work on MGBF in SABER, following the GSIbec setup, with unstructured interpolation.
  • PR adding Vader to MPAS-JEDI.
  • Looking at ways of rejecting obs outside the domain.

Atmospheric DA

  • Still looking at excessive memory use by LETKF.
  • Stress testing filters when no observations present.
  • Can pass cubed sphere increments back to UFS.
  • Continued work on JCB (not part of the global workflow).

AI/ML

  • Reviving the MAGIC JEDI interface with a view to interfacing NeuralGCM / GraphCastGFS to JEDI.

UFO acceptance: 

  • GPSRO: still working on a GeoVaLs inconsistency
  • Sat winds: PR submitted for LEO-GEO winds in the end-to-end framework; working on others
  • ABI radiance: Phase 1 testing replicates GSI when using GSI GeoVaLs
  • ADPUPA: a Python converter is finished; now working on a comparison between BUFR and prepBUFR
  • AIRCFT prepBUFR end-to-end: tracking down why the standard-deviation difference is large

GMAO:

  • We are looking to understand why the latest VarBC changes introduce unexpected differences in the results of running the var; these differences are larger than the roundoff that would be caused by the precision of coefficients not being preserved between the old and new formats.
  • We are working on a very close comparison between GSI and JEDI GeoVaLs, to resolve differences that seem larger than acceptable.


MetOffice:

Philip Underwood reported an issue that Chris Thomas (in copy) has identified with the MPI distribution of global aircraft observations. Looking at this obtype in isolation, some MPI tasks are handling several million locations while others handle only tens of thousands (see attached image). Interpolating these to generate the GeoVaLs is taking a long time because, with the round-robin distribution, many of the locations lie in geographical regions not covered by the model region their MPI task handles. As a result, a significant amount of MPI communication takes place during interpolation.
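
To illustrate why a round-robin obs distribution forces so much communication, a toy sketch (pure illustration, not JEDI code): split the globe into contiguous longitude bands, one per task, and count how many observations land on a task that does not own the model subdomain containing them.

```python
import random

def model_rank(lon, nranks):
    """Rank owning the contiguous longitude band containing lon."""
    return int(lon // (360.0 / nranks))

def remote_fraction(lons, assign, nranks):
    """Fraction of obs whose model data lives on a different rank than the obs."""
    remote = sum(1 for i, lon in enumerate(lons)
                 if assign(i, lon) != model_rank(lon, nranks))
    return remote / len(lons)

random.seed(0)
nranks = 8
lons = [random.uniform(0.0, 360.0) for _ in range(10_000)]

# Round-robin: obs i -> rank i % nranks, regardless of location.
rr = remote_fraction(lons, lambda i, lon: i % nranks, nranks)
# Domain-matched: obs -> the rank owning its model subdomain.
dm = remote_fraction(lons, lambda i, lon: model_rank(lon, nranks), nranks)

print(f"round-robin: {rr:.0%} of obs need remote model data")
print(f"domain-matched: {dm:.0%}")
```

With 8 tasks, roughly 7 of every 8 round-robin observations need model data owned by another task, which is why matching the obs decomposition to the model's removes most of the interpolation-time communication.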

 

Chris has made three suggestions to explore and potentially solve this issue:

  1. Exclude locations that failed thinning prior to calling GetValues
  2. Separate pre-filter step
  3. Create an obs domain decomposition that matches the model one

The following discussion led to a first question: is this problem related to 3D or 4D? Apparently these issues are less pronounced with 4D. Francois Hebert said that changing the obs distribution would not fix the issue, and that isolating and looking at a single obs type exaggerates the problem. Hernan G Arango described a more technical approach to the MPI distribution (details not captured here).


(plot was sent after the JEDI meeting for reference)

NCAR:

Reported an issue when running MPAS-JEDI in 4DEnVar mode: it fails if output of GeoVaLs and ydiags files is turned on. Will create an issue for this and seek a solution.


