Steve opened the meeting with a report on his recent effort to refactor the ObsSpace class in ioda.  The goals of this effort are to:

  • provide a common interface for conversion scripts
  • develop capability to include more complex obs types

Steve intends to develop interfaces for Python, Fortran, and C++, and he hopes to have this ready within 1-2 weeks, in time for the upcoming marine code sprint.

We then continued to hear reports from others in Boulder.

Anna reported on a recent pull request that has been merged into ufo that modified the directory structure.  Now directories containing obs operators for different observation types branch directly off of src/ufo, eliminating the previous atmosphere and marine directories.  Please follow this convention if you add any new obs operators.

Maryam has replaced all Boost unit tests in oops with eckit equivalents.  There is an open pull request in oops that has these modifications.  She is now working to do a similar replacement in other repositories, particularly ufo and ioda.  However, she wants to do this in a way that permits a gradual transition from Boost to eckit, so it does not break any existing Boost unit tests.
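
As a rough illustration of the pattern involved (a minimal sketch with a made-up test case, not code from the actual pull request), an eckit-based unit test looks like this:

    // Minimal sketch of an eckit-style unit test; the test case and assertion
    // are hypothetical, not taken from the oops pull request.
    #include "eckit/testing/Test.h"

    using namespace eckit::testing;

    // CASE plays the role of BOOST_AUTO_TEST_CASE; EXPECT replaces BOOST_CHECK.
    CASE("example: a trivial check") {
      const double value = 3.0;
      EXPECT(value > 0.0);
    }

    int main(int argc, char** argv) {
      // Runs every CASE registered above.
      return run_tests(argc, argv);
    }

Since each test builds as its own executable, eckit-based and Boost-based tests can coexist during the transition.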

Steve Vahl reported that he is working on import scripts to convert satellite data from the Met Office for integration into ioda.

Hailing has implemented a fix for ROPP and is working with Anna on importing 2D GFS data.  A pull request is expected soon.  She also mentioned an MPAS issue that she opened regarding the availability of both geometric height (native to MPAS) and geopotential height (conversion needed).  After some discussion it was agreed that both fields are needed for the obs operators.  In order for ufo to remain model-agnostic, the model should be capable of providing both, upon request from ufo.  Chris S suggested that this could be controlled through the YAML file - for example, if the model can only provide geopotential height, then one might set a flag to this effect in the YAML file.  It was agreed to sort this out with Yannick when he returns.
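
As a loose sketch of Chris S's suggestion (the flag name and the use of eckit::Configuration here are illustrative assumptions, not an agreed design), an obs operator could query such a capability flag from its configuration:

    // Illustrative only: "provides geometric height" is a hypothetical flag
    // name, not an agreed ufo/YAML convention.
    #include "eckit/config/Configuration.h"

    bool modelProvidesGeometricHeight(const eckit::Configuration & config) {
      // Default to false if the flag is absent from the YAML file, in which
      // case ufo would request geopotential height and convert as needed.
      return config.getBool("provides geometric height", false);
    }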

Ming is continuing to work on the WRF interface to JEDI.  He fixed some problems and now H(x) is working.  The next step is 3DVar.

Xin is implementing variational bias correction.  The module is in place and the identity matrix is working.  Now he wants to adapt it to read bias information from GSI.
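
For context, and not as a description of Xin's specific module, variational bias correction in its standard form augments the obs operator with a linear combination of bias predictors whose coefficients are estimated as part of the minimization:

    \tilde{H}(x, \beta) = H(x) + \sum_i \beta_i \, p_i(x)

where the p_i are the bias predictors and the beta_i are the bias coefficients, which receive their own background term in the cost function so that they are updated from cycle to cycle.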

Travis is getting ready for the marine code sprint - finalizing the directory structure for data and generating backgrounds.

Chris S asked how we should handle surface pressure.  Some models (e.g. hydrostatic models) need it, others don't - but the latter can calculate it as a diagnostic.  For those that calculate it as a diagnostic, Chris asked where it should be calculated - in ufo (if needed) or in the model?  Francois suggested that it should be in ufo, but Ming pointed out that you need the model level in addition to the elevation.  Marek mentioned that they have to compute the surface pressure (using model level 1 as a proxy) and that this is done outside of ufo.
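
For reference, one standard hydrostatic reduction (given here only to illustrate Ming's point that both the lowest model level and the surface elevation are needed, not necessarily the formula any particular model uses) is

    p_s \approx p_1 \exp\!\left( \frac{g (z_1 - z_s)}{R_d T_v} \right)

where p_1, z_1, and T_v are the pressure, height, and virtual temperature at the lowest model level, z_s is the surface elevation, g is gravity, and R_d is the gas constant for dry air.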

Mark reported that there is a new module available on Cheyenne that uses the Intel compilers and the MPT MPI library.  This is more optimized than the previous Cheyenne modules, so it is more appropriate for performance testing.  If you want to work with a preliminary version, see the posts on the MPAS GitHub Team.  Mark is now fine-tuning this module, adding the NCEP libraries, ODB, and PIO.  When finished, he will update the ReadtheDocs page with instructions on usage.

Mark also ran into an internal compiler error when using Intel version 19.0 on AWS.  This appears to be associated with crtm and may have something to do with similar errors previously seen with Intel version 18 on Discover.  For further information, see the corresponding discussion on the JEDI GitHub Team.  Mark now plans to install Intel version 17.0.1 on AWS as another option to see if this works.  If so, we will continue to investigate the cause of the compiler error and potential solutions.

Clementine is working on different methods to invert the T matrix (the matrix that represents the terms on the observation side of the cost function), trying different solvers.

The discussion then turned to the Met Office.

Steve S recently finished work he was doing to represent the increments in terms of cell-centered fields.  Marek said this will now enable them to do 3DVar and 4DVar, but they are still not ready for 4D ensemble Var (4DEnsVar).

Marek is writing up a document on the more scientific aspects of 4DEnsVar and asked if this is something that we might consider distributing on ReadtheDocs.  Steve and Mark agreed that this would be a beneficial complement to the existing documentation.  Steve suggested that Marek consider writing it in reStructuredText for easy conversion to HTML, but Marek said that this is being prepared as a deliverable to stakeholders so it has to be a Word or PDF document.  Still, we agreed that we could provide a link to it on the JEDI site.  Marek also mentioned that they are preparing for a workshop next week.

Dan reported from a meeting he is attending in Norway.  He merged the new Variable class implementation into oops that we mentioned last week.  This provides an improved infrastructure for computing variable changes.  He also reported on some work he was doing with crtm and ufo which may require additional obs metadata to be available from ioda. Steve agreed to follow up later.

Rahul then reported from GMAO, where a host of JEDIs was gathered.  He is working with Guillaume and others to prepare for the upcoming marine code sprint.  He's adding more converters for marine data to the ioda-converters repo.  Steve H asked if these followed the existing Python profile script.  Rahul said yes and added that he can readily switch to Steve's new Python API when it becomes available.

Hamideh is adding GMT and S mapping to ufo in preparation for the upcoming code sprint.

Hui reported on FSRO operator work and said they have mapped out a plan for the next few months.  They are cleaning up existing operators and adding diagnostics.  Then they will proceed to more complex obs operators.  She, Andrew C, and others are preparing for a mini-code sprint that is intended as an instructional session for new EMC and GMAO users.

Andrew C is preparing obs files for the mini-sprint and had a question for Steve V about his work on interfacing with obs files from the UKMO.  He asked how it was different from the other data ingestion pipelines.  Anna, Steve V, and Steve H clarified that this effort currently focuses on radiosonde and aircraft data and, unlike other pipelines, it works with ODB files and handles observations only.  The GSI pipeline, by contrast, includes observations with the corresponding GeoVaLs and also provides GSI-computed H(x), which can be used to compare with the UFO-computed H(x).

Stylianos is working on conversion from NCEP tanks into ioda in preparation for the marine code sprint.

Question from Tom on the previous discussion - are we converting ODB into netcdf?

Answer from Steve H - yes.  Currently we're using netcdf as the main file format in JEDI, so we convert ODB files into netcdf before reading them into JEDI.  ODC, a tool for reading and writing ODB files, will become available soon (within weeks), and we are seriously considering ODB as the primary file format for JEDI.  However, at this point we still need converters to reorganize ODB files into versions that can be efficiently ingested into ioda.

Shastri is working on a pipeline for processing daily marine data files (e.g., ADT) from NCEP data tanks via FTP.

Chris H is continuing to interface his shallow water model with JEDI.  He successfully ran a forecast from within JEDI and is now moving to DA.

NRL had nothing new to report but Sarah emphasized that they are also very interested in the availability of both geometric and geopotential heights from the models.
