
  • git-lfs also uses S3 under the hood, but the primary difference is that using S3 directly provides a significant cost savings
    • S3 still charges for storage and bandwidth, but there are no egress charges (for moving data around) between AWS compute instances in the same AWS region. So, we can arrange things so that CI testing in AWS CodeBuild incurs no egress charges. There would be egress charges for downloading data to locations outside AWS.
    • Estimated savings are about 4x compared to git-lfs for storage alone (we still need to estimate the difference in egress costs based on usage patterns)
  • Advantages noted by Maryam above
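As a rough illustration of where the "about 4x" storage estimate comes from, here is a back-of-envelope comparison. The per-GB prices are assumptions for illustration only (GitHub LFS data packs bundle storage and bandwidth at roughly $0.10/GB-month; S3 Standard storage is roughly $0.023/GB-month), not official quotes:

```python
# Hypothetical back-of-envelope monthly storage cost comparison.
# Prices below are illustrative assumptions, not official quotes.
LFS_PER_GB = 0.10   # assumed GitHub LFS data-pack rate ($/GB-month)
S3_PER_GB = 0.023   # assumed S3 Standard storage rate ($/GB-month)

def monthly_cost(tb: float, per_gb: float) -> float:
    """Monthly storage cost in dollars for `tb` terabytes."""
    return tb * 1024 * per_gb

tb = 3  # a few terabytes of test data
lfs = monthly_cost(tb, LFS_PER_GB)
s3 = monthly_cost(tb, S3_PER_GB)
print(f"LFS: ${lfs:.0f}/mo, S3: ${s3:.0f}/mo, ratio: {lfs / s3:.1f}x")
```

With these assumed prices the ratio works out to roughly 4x for storage itself, consistent with the estimate above; egress costs would shift the comparison depending on usage patterns.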


  • git-lfs can point to different repos such as our JCSDA S3 buckets, but we will still get charged for downloads
  • Can we simply store a copy of the test data on Hera/WCOS and avoid data handling charges?
  • Can we host data on a UCAR site, perhaps on a Google Drive?
    • Since UCAR is an educational organization, we get "unlimited" storage on Google Drive
      • We know that a few terabytes of storage has, so far, been acceptable
    • Some partners may not have access to our UCAR/JCSDA Google Drive

If you have any further thoughts on this, contact Maryam or Mark.  If needed, we can set up a discussion on the GitHub JEDI Team.

Tiered Testing Approach

We now have a large set of unit tests (small, fast-running, targeted at specific modules) along with larger, more expensive tests (targeted at flows such as 3DVar, 4DVar, etc.). With automated testing running on every PR and on every commit to a PR, it is becoming too expensive to run all tests in this mode. We propose to split the testing, initially, into three tiers so that the more expensive tests run only on a nightly or weekly basis. We would add environment variable controls to each repo (SABER is an example) that the automated test flow can inspect to determine which tiers to run. Here is the proposed tier numbering:
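A minimal sketch of the proposed environment variable control; the variable name `JEDI_TEST_TIER` is a hypothetical placeholder, not an agreed convention:

```python
import os

def run_tier(test_tier: int) -> bool:
    """Return True when tests tagged with `test_tier` should run.

    Tier 1 runs on every PR/commit (the default); higher tiers run
    only when the CI flow exports a matching or higher value in the
    (hypothetical) JEDI_TEST_TIER environment variable.
    """
    enabled = int(os.environ.get("JEDI_TEST_TIER", "1"))
    return test_tier <= enabled
```

A nightly or weekly CI job would export a higher tier value, while the per-PR flow would leave it unset so only Tier 1 runs.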


  • Should we store data for different tiers in different directories to facilitate downloading?  Only data needed for the desired test tier should be downloaded, to reduce time and egress charges from LFS or S3.
  • How do we make sure everyone follows the tier scheme?
  • There is a potential for test escapes (undetected defects): deferring an expensive test to the weekend may allow a defect that only the expensive test catches to be merged into develop earlier in the week, since the Tier 1 tests all passed
  • For Tier 1, we could build in release mode (compile optimization enabled) in order to get this testing to run as fast as possible
    • Debug build slows the tests way down
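If the tier-per-directory layout suggested above is adopted, filtering the download list could look like this sketch; the `tierN/` prefix convention is an assumption for illustration:

```python
# Sketch of selecting only the test data needed for a given tier,
# assuming a hypothetical layout where data for tier N lives under a
# top-level "tierN/" directory in the S3 bucket or LFS store.
def paths_for_tier(all_paths: list[str], tier: int) -> list[str]:
    """Keep paths stored under tierN/ directories for N <= tier."""
    wanted = {f"tier{n}" for n in range(1, tier + 1)}
    return [p for p in all_paths if p.split("/", 1)[0] in wanted]
```

Downloading only the matching prefixes would reduce both CI time and egress charges, since Tier 1 jobs never touch the larger Tier 2/3 data.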


Valgrind is a tool that checks for invalid memory usage, such as memory leaks. It could potentially be installed in the automated testing by using a script that checks Valgrind's output files and issues a pass/fail return code. Valgrind is slow, however (Benjamin noted that it slowed tests that run in 1-2 minutes down to several hours), so it would need to be a Tier 3 test. Valgrind tests could be implemented by means of a shell script enabled by an environment variable; for an example, see saber PR #33, which is now under review. Alternatively, or in addition, they could be implemented through the CDash web application.
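A pass/fail check over Valgrind logs could be as simple as the following sketch. The "ERROR SUMMARY" line is what Valgrind's memcheck tool emits; the wiring into CI here is an assumption for illustration, not the approach taken in saber PR #33:

```python
import re

# Memcheck ends each process log with a line of the form:
#   ==PID== ERROR SUMMARY: N errors from M contexts (suppressed: ...)
SUMMARY = re.compile(r"ERROR SUMMARY: (\d+) errors")

def valgrind_ok(log_text: str) -> bool:
    """Return True when every ERROR SUMMARY line reports zero errors.

    An empty log (no summary line at all) is treated as a failure,
    since it likely means Valgrind did not run to completion.
    """
    counts = [int(m.group(1)) for m in SUMMARY.finditer(log_text)]
    return bool(counts) and all(c == 0 for c in counts)
```

A CI wrapper script would call this over each log file and exit nonzero on the first failure, giving the pass/fail return code described above.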


CDash is a GUI viewer that displays the results of build and test actions. It can post the results of builds on different platforms/compilers along with the results of testing, so you can see compiler warnings/errors and test pass/fail status in one neat display. Ryan gave a short demo of CDash to the group. CDash requires a background server running PHP. It appears possible to go beyond build/test status: it should be possible to add Valgrind results to the display using CDash's customization features.
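For reference, pointing a CMake-based repo at a CDash server is typically done with a `CTestConfig.cmake` file at the top of the repo; the server URL and project name below are placeholders, not an existing deployment:

```cmake
# Hypothetical CTestConfig.cmake fragment; site and project are placeholders.
set(CTEST_PROJECT_NAME "saber")
set(CTEST_NIGHTLY_START_TIME "01:00:00 UTC")
set(CTEST_DROP_METHOD "https")
set(CTEST_DROP_SITE "cdash.example.org")
set(CTEST_DROP_LOCATION "/submit.php?project=saber")
set(CTEST_DROP_SITE_CDASH TRUE)
```

Running `ctest -D Experimental` (or a Nightly dashboard run) would then submit build and test results to the configured server, and a `-T MemCheck` step could feed Valgrind results into the same display.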


  • Investigate the access restrictions on Hera/WCOS, and the possibility of storing test data on Hera/WCOS
  • For those that want to continue the discussion about test data storage/management, please contact Mark M or Maryam
  • Investigate and provide demos of performance assessment tools: Tau, Intel, Jim's profiler, etc.
    • Need summaries of each profiler so that we can make a decision about what to provide in the containers
    • Mark M will open a ZenHub issue on the special topics board for discussing the results of these demos