Github repository: https://github.com/JCSDA/spack-stack
Documentation: https://spack-stack.readthedocs.io/en/latest/
Third party dependencies: licenses
https://docs.google.com/spreadsheets/d/1k3aMu8gkEsAZGJRUqYTx1iQG4xFnJGqe/edit#gid=220490890
spack-stack Development
If you are going to contribute to the spack-stack repository, development is done by forking the spack-stack repository. This section contains steps to fork spack-stack from https://github.com/JCSDA/spack-stack, make changes, and issue a PR.
- Create a fork of https://github.com/JCSDA/spack-stack by clicking the "Fork" button on the top right side of the page to "Fork your own copy of JCSDA/spack-stack".
- Navigate to your repositories and click on the spack-stack repo. Clicking the green "Code" button will show you options to clone this repository. One option is by using https and running
git clone https://github.com/<your_username>/spack-stack.git
- Rename the remote for your fork to your name in order to help keep track of multiple user forks and the authoritative fork. After you clone the repository, run
git remote rename origin <your_name>
You can verify this by running git remote -v
- You can now create a branch and start developing. Note: make sure you pull the authoritative develop branch if your fork is out of date.
- Once your branch is ready for review, you will need to create a PR inside https://github.com/JCSDA/spack-stack pointing to your fork and branch. Push your branch to your fork by running
git push <your_name> feature/<your_branch>
Then open the PR request form in a browser, add the necessary information, and set the base repository to JCSDA/spack-stack. The full sequence of commands is sketched below.
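Putting the steps above together, the whole fork workflow looks roughly like this (the "jcsda" remote name and the branch name are just placeholder conventions; adjust to taste):
# One-time setup: clone your fork, rename its remote, and add the authoritative repo
git clone https://github.com/<your_username>/spack-stack.git
cd spack-stack
git remote rename origin <your_name>
git remote add jcsda https://github.com/JCSDA/spack-stack.git
git remote -v
# Day-to-day: branch off the authoritative develop, make changes, push to your fork
git fetch jcsda
git checkout -b feature/<your_branch> jcsda/develop
git push <your_name> feature/<your_branch>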
For further reading on using GitHub forks, check out this site and this one too.
spack-stack Testing
JCSDA is currently responsible for testing spack-stack environments on S4 and Discover. The documented instructions for building spack-stack on HPC platforms are very good, but there are two things to take care of before starting (as noted in the instructions):
- Make sure you are pointing to a python that is version 3.8 or higher
HPC Platform | python3.8+ environment |
---|---|
S4 | module load miniconda/3.8-s4 |
Discover | module load python/GEOSpyD/Min4.10.3_py3.9 |
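A quick sanity check after loading the module (shown here for S4; on Discover load the module from the table instead):
module load miniconda/3.8-s4
python3 --version   # should report 3.8 or newer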
- Make sure you have checked out the branches you need to test before running the setup.sh script
Typically you will be testing feature branches that could have come from various developers. Each of these developers has their own fork, which will be noted on the PR, and you will need to check the branch out in your local clone. One way to do this is to add their forks as remotes in your local clone using a series of "git remote add ..." commands. Here's an example:
Add remote repos to your local clone
# Alex Richert (NOAA) and Dom (NRL) are two of the more active PR contributors
git remote add alex https://github.com/AlexanderRichert-NOAA/spack-stack
git remote add dom https://github.com/climbfuji/spack-stack
...
# Check that you have these entered
git remote -v
# Update references to the other forks. Running the remote update
# command below will pull in all the metadata to the other forks.
git remote update -p
# Check that you got the references to the other forks (note the use of
# the -a option on the git branch command)
git branch -avv
Once you have the necessary remotes added, you can check out feature branches from the other forks:
Check out feature branches from other forks
# Using the remote added above, check out feature/cool-spack-thing from Dom's spack-stack fork
git checkout dom/feature/cool-spack-thing
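As an alternative, GitHub exposes every PR as a read-only ref on the authoritative repository, so you can also fetch a PR directly without adding the contributor's fork as a remote. This assumes you have a remote pointing at JCSDA/spack-stack (e.g. the "jcsda" remote from the development section above); <pr_number> is the PR you are testing:
git fetch jcsda pull/<pr_number>/head:pr-<pr_number>
git checkout pr-<pr_number>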
Once the above steps are done, follow the documented instructions here: building spack-stack on HPC platforms. The ideal goal with the testing is to successfully complete the following steps:
- Build spack-stack for each compiler on S4 (intel) and on Discover (intel, gnu), for both SCU16 and SCU17 (a rough sketch of this flow follows this list)
- Using the environment from step 1, build jedi-bundle
- Set up and try some skylab experiments
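For step 1, the documented flow boils down to something like the sketch below. The site, template, and environment names are placeholders, and the exact spack stack create env options vary by platform, so treat this as a rough outline and use the official instructions for the real values:
git clone --recurse-submodules https://github.com/JCSDA/spack-stack.git
cd spack-stack
source setup.sh
# Create and activate a named environment for the site you are testing on
spack stack create env --site=<site> --template=<template> --name=<env_name>
cd envs/<env_name>
spack env activate .
# Concretize and install, keeping logs for later inspection
spack concretize 2>&1 | tee log.concretize
spack install 2>&1 | tee log.install
# Generate the module files and meta-modules
spack module lmod refresh
spack stack setup-meta-modules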
Typically, getting all of these steps done is way too much work. I would settle for one of S4, Discover SCU16, or Discover SCU17, with just one compiler. All of these possibilities are listed for your awareness with the idea that you can round-robin between them when testing different feature branches.
Another way to trim down the work is to just verify the spack-stack build, and only occasionally carry on through with the jedi-bundle and skylab testing.
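For the jedi-bundle step, the build is a fairly standard ecbuild/CMake flow on top of the modules generated in step 1. The module names, paths, and bundle URL below are illustrative only (they depend on the platform, compiler, and spack-stack version), so this is just a sketch of the shape of the step:
# Load the spack-stack meta-modules built in step 1 (names/versions are placeholders)
module use <spack-stack-env>/install/modulefiles/Core
module load stack-<compiler> stack-<mpi> stack-python
# Clone and build the bundle
git clone <jedi-bundle-repo-url> jedi-bundle
mkdir -p jedi-bundle-build && cd jedi-bundle-build
ecbuild ../jedi-bundle
make -j4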
Note that you don't need jedipara access to do this testing. You can build everything in your user area on S4 or Discover. To help manage that, both HPC platforms provide commands that show how much of your quota (disk space and number of files) is free.
HPC Platform | quota command |
---|---|
S4 | myquota |
Discover | showquota -h |
spack-stack Add-on Environment
Occasionally, there is a need to add a handful of upgraded packages to an existing spack-stack release. spack-stack contains a feature called "chaining environments" which allows the rapid construction of such an "add-on" environment. The idea is to build a new environment that points to an existing (base) environment, so you only have to add in the upgraded packages. The module files in the add-on environment then utilize the base environment's installations and module files for the packages that remain unchanged.
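My understanding is that the chaining is done with spack's upstreams mechanism, so a quick way to confirm that an add-on environment is actually pointing at a base environment is to inspect its upstreams configuration from inside the activated environment (the paths below are placeholders; verify the details against the chaining environments docs):
cd <spack-stack-dir>/envs/<addon-env>
spack env activate .
# Should list the base environment's install tree if the chain is set up
spack config get upstreams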
Recently the need for this came up for spack-stack-1.6.0 with the g2@3.5.1 and g2tmpl@1.13.0 upgraded packages (spack-stack-1.6.0 shipped with g2@3.4.5 and g2tmpl@1.12.0). Here is the spack-stack issue describing the desired upgrade: https://github.com/JCSDA/spack-stack/issues/1180. A method for handling this particular request (on S4) is described in this comment: https://github.com/JCSDA/spack-stack/issues/1180#issuecomment-2251378587. In this case, an add-on environment already existed, so it wasn't necessary to create it; it was possible to simply add in the new packages, concretize, install, and update modules (i.e., skip some of the steps in the chaining environments recipe).
Here are the steps taken to accomplish this task.
- Log into S4 and switch to the jedipara account
sudo -iu jedipara
- We will need to get more of us access to the jedipara account
module load miniconda/3.8-s4
- This satisfies the requirement to be pointing to python3.8+ before sourcing the spack-stack setup.sh script
cd /data/prod/jedi/spack-stack/spack-stack-1.6.0
source setup.sh
cd envs/upp-addon-env
spack env activate .
- edit spack.yaml
- Make the following modifications in the specs: section

Change From This
specs:
  - upp-env %intel ^g2tmpl@1.12.0 ^g2@3.4.5
  - prod-util@2.1.1 %intel
  - ip %intel

Change To This
specs:
  - upp-env %intel ^g2tmpl@1.12.0 ^g2@3.4.5
  - upp-env %intel ^g2tmpl@1.13.0 ^g2@3.5.1
  - grib-util@1.3.0 %intel ^g2@3.4.5
  - grib-util@1.3.0 %intel ^g2@3.5.1
  - prod-util@2.1.1 %intel
  - ip %intel
  - ufs-weather-model-env %intel ^g2tmpl@1.12.0 ^g2@3.4.5
- Note that
- grib-util is dependent on g2, and we want two versions of grib-util: one built with g2@3.4.5 and the other with g2@3.5.1
- ufs-weather-model-env is dependent on g2tmpl@1.12.0 and g2@3.4.5, and we want to preserve this
- We want two versions of upp-env: one with g2tmpl@1.12.0/g2@3.4.5 and the other with g2tmpl@1.13.0/g2@3.5.1
- Hopefully the spec changes indicated above make sense in the context of what we want
- Update envrepo/packages following what was done on Orion
  - The idea here is to add the new versions of g2 and g2tmpl to the packages under envrepo. envrepo is a special place used by the chaining environment mechanism to hold the extra package recipes for building the new versions we are trying to add in.
  - There are probably many ways to accomplish this, but since there was a model of what needed to be done already on Orion, I did the following (sketched in commands below):
    - Tar'd the spack-stack-1.6.0/envs/upp-addon-env/envrepo files on Orion
    - Unpacked the tar on S4 in a temporary directory
    - Updated the /data/prod/jedi/spack-stack/spack-stack-1.6.0/envs/upp-addon-env/envrepo directory by comparing it with the unpacked tar files
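The tar/compare step above is just ordinary shell work; a rough sketch (the Orion path and the temporary directory are placeholders):
# On Orion: bundle up the reference envrepo
cd <orion-spack-stack-dir>/spack-stack-1.6.0/envs/upp-addon-env
tar -czf envrepo.tar.gz envrepo
# Copy envrepo.tar.gz to S4 (scp/rsync), then on S4:
mkdir -p /tmp/envrepo-orion
tar -xzf envrepo.tar.gz -C /tmp/envrepo-orion
# Compare and port over the new g2/g2tmpl package files
diff -r /tmp/envrepo-orion/envrepo /data/prod/jedi/spack-stack/spack-stack-1.6.0/envs/upp-addon-env/envrepo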
- edit common/modules.yaml
  - Under the modules.default.lmod.hierarchy section, add in entries for g2virt and g2tmplvirt

Add in 'g2virt' and 'g2tmplvirt'
modules:
  default:
    ...
    lmod:
      ...
      hierarchy:
        - mpi
        - g2virt
        - g2tmplvirt
spack concretize 2>&1 | tee log.concretize
- Check that the only things being concretized are those related to the new specs in the spack.yaml file above

Check what was concretized
grep "==> Concretized " /data/prod/jedi/spack-stack/spack-stack-1.6.0/envs/upp-addon-env/log.concretize
==> Concretized grib-util@1.3.0%intel ^g2@3.4.5
==> Concretized grib-util@1.3.0%intel ^g2@3.5.1
==> Concretized ufs-weather-model-env%intel ^g2@3.4.5 ^g2tmpl@1.12.0
==> Concretized upp-env%intel ^g2@3.4.5 ^g2tmpl@1.12.0
==> Concretized upp-env%intel ^g2@3.5.1 ^g2tmpl@1.13.0
spack install -v 2>&1 | tee log.install
- Again this should run quickly and only build the new package versions and associated environment updates
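Analogous to the concretize check, you can grep the install log to see what actually got built (the exact wording of spack's log lines can differ between spack versions, so adjust the pattern if needed):
grep "Installing" /data/prod/jedi/spack-stack/spack-stack-1.6.0/envs/upp-addon-env/log.install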
spack module lmod refresh --upstream-modules
spack stack setup-meta-modules
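As a final check, you can look for the new module files under the environment's install tree before announcing the update (the modulefiles layout depends on the site configuration, so the path here is an assumption; adjust as needed):
find /data/prod/jedi/spack-stack/spack-stack-1.6.0/envs/upp-addon-env/install/modulefiles \
     -name "*3.5.1*" -o -name "*1.13.0*"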