This document is adapted from an original set of instructions. The original is substantially out of date but has been preserved in case it contains relevant historical information.
The development container images live in the following ECR repositories:

- 747101682576.dkr.ecr.us-east-2.amazonaws.com/jedi-gnu-openmpi-dev
- 747101682576.dkr.ecr.us-east-2.amazonaws.com/jedi-oneapi-impi-dev
Create a new EC2 instance using a c5n.4xlarge host. Either restore the existing container-builder AMI or create a new AMI following the instructions in the spack-stack repository.
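If you prefer the AWS CLI over the console, launching the builder instance can look roughly like the following. This is only a sketch: the AMI ID, key pair, security group, and subnet are placeholders, and the profile name is borrowed from the ECR login step later in this document.

```bash
# Hypothetical launch of the c5n.4xlarge builder instance via the AWS CLI.
# Replace the ami-/sg-/subnet- IDs and the key pair with the container-builder
# AMI and your own networking details.
aws ec2 run-instances \
    --profile jcsda-usaf-aws-us-east-2 \
    --region us-east-2 \
    --image-id ami-0123456789abcdef0 \
    --instance-type c5n.4xlarge \
    --key-name my-key-pair \
    --security-group-ids sg-0123456789abcdef0 \
    --subnet-id subnet-0123456789abcdef0 \
    --count 1 \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=spack-stack-container-builder}]'
```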
When building on the remote host, it is recommended that you use the screen session manager to save session state, since builds can take a long time and may need to be revisited after the typical expiration period of an SSH session. A few useful screen commands:
```bash
# Start a new named screen session
screen -S spack-stack-container-clang
# Disconnect from the active session: "Ctrl+A" then "D"
# Resume a screen session
screen -r spack-stack-container-clang
# List open screen sessions
screen -list
```
After running `spack stack create ctr`, you will need to remove `mapl` from your spec:
```bash
# After running `spack stack create ctr` from step 5 below.
cd envs/docker-ubuntu-clang-mpich
sed -i '/mapl@/s/^/#/' spack.yaml
spack containerize > Dockerfile
# Continue step 5.
```
Clone the spack-stack repo at the desired release:
```bash
git clone -b release/1.9 --recursive "https://github.com/JCSDA/spack-stack.git"
```
Set up spack-stack:
```bash
cd spack-stack
source setup.sh
```
Create the container:
```
export BUILD_ENV=gnu-openmpi
export DOCKER_CTR_REPO=747101682576.dkr.ecr.us-east-2.amazonaws.com
spack stack create ctr --container=docker-ubuntu-${BUILD_ENV} \
    --specs=jedi-ci | tee log.create.docker-ubuntu-${BUILD_ENV}-ci.001
Configuring basic directory information ...
  ... script directory: /home/ubuntu/spack-stack/spack-stack-1.7.1/spack-ext/lib/jcsda-emc/spack-stack/stack
  ... base directory: /home/ubuntu/spack-stack/spack-stack-1.7.1/spack-ext/lib/jcsda-emc/spack-stack
  ... spack directory: /home/ubuntu/spack-stack/spack-stack-1.7.1/spack
==> Created container /home/ubuntu/spack-stack/spack-stack-1.7.1/envs/docker-ubuntu-clang-mpich
```
Use spack to create the Dockerfile and build the image with Docker:
```bash
cd envs/docker-ubuntu-$BUILD_ENV/
spack containerize > Dockerfile
docker build -t $DOCKER_CTR_REPO/jedi-${BUILD_ENV}-dev:1.9 . 2>&1 | tee logdocker.txt
```
Test the new image by building jedi-bundle inside a container started from it:

```bash
sudo rm -rf $HOME/builds/$BUILD_ENV && mkdir -p $HOME/builds/$BUILD_ENV
docker run -v $HOME/builds/$BUILD_ENV:/build -w /build -it \
    ${DOCKER_CTR_REPO}/jedi-$BUILD_ENV-dev:1.9 /bin/bash

# Now in the container environment.
git config --global credential.helper 'cache --timeout=3600'
git clone https://github.com/jcsda-internal/jedi-bundle.git
wget https://bin.ssec.wisc.edu/pub/s4/CRTM//fix_REL-3.1.1.2.tgz -O /build/fix_REL-3.1.1.2.tgz
export CRTM_BINARY_FILES_TARBALL=/build/fix_REL-3.1.1.2.tgz

# Remove local esmf
rm -vf `find /opt/view/bin -iname '*esmf*'` && \
  rm -vf `find /opt/view/lib -iname '*esmf*'` && \
  rm -vf `find /opt/view/include -iname '*esmf*'` && \
  rm -vf `find /opt/view/cmake -iname '*esmf*'`

# Note that new spack-stack environments (prior to a skylab release) often
# have ctest failures but they should not have build failures.
mkdir jedi-bundle/build && cd $_
ecbuild ../ 2>&1 | tee log.configure
make -j4 2>&1 | tee log.make
ctest
```
Push the built image to ECR:
```bash
aws ecr get-login-password --profile=jcsda-usaf-aws-us-east-2 \
    --region us-east-2 \
  | docker login --username AWS \
    --password-stdin $DOCKER_CTR_REPO
docker push ${DOCKER_CTR_REPO}/jedi-${BUILD_ENV}-dev:test
```
To push to Docker Hub, an access token is needed. Log into hub.docker.com and go to “Account Settings”, then “Security”, then “New Access Token”.
```bash
# Log into dockerhub using your account token and username.
docker login -u USERNAME

docker image tag \
    469205354006.dkr.ecr.us-east-1.amazonaws.com/jedi-gnu-openmpi-dev:latest \
    jcsda/docker-gnu-openmpi-dev:latest
docker image tag jcsda/docker-gnu-openmpi-dev:latest \
    jcsda/docker-gnu-openmpi-dev:skylab-vN
docker push jcsda/docker-gnu-openmpi-dev:latest
docker push jcsda/docker-gnu-openmpi-dev:skylab-vN

docker image tag \
    469205354006.dkr.ecr.us-east-1.amazonaws.com/jedi-clang-mpich-dev:latest \
    jcsda/docker-clang-mpich-dev:latest
docker image tag jcsda/docker-clang-mpich-dev:latest \
    jcsda/docker-clang-mpich-dev:skylab-vN
docker push jcsda/docker-clang-mpich-dev:latest
docker push jcsda/docker-clang-mpich-dev:skylab-vN
```
When the image is available in your local Docker daemon (you have built it locally, or pulled it from Docker Hub or AWS ECR), build the Singularity image:
```bash
singularity build jedi-gnu-openmpi-spack-stack-XYZ.sif \
    docker-daemon:469205354006.dkr.ecr.us-east-1.amazonaws.com/jedi-gnu-openmpi-dev:spack-stack-X.Y.Z
```
Before uploading to Sylabs, make sure to sign the image and authenticate to sylabs.io.
Generate a new keypair (one-off, does not expire). Open a command line and check for your generated keys:
```bash
singularity keys list
```
If you already have a key, skip the next steps (create a key, obtain your key's fingerprint). Otherwise, generate one:
```bash
singularity keys newpair
```
To obtain the fingerprint of the key you just created, list your keys again:
```bash
singularity keys list
```
This will list the keys you have created. In this list, you will find your key's fingerprint next to "F:", for example:
```
0) U: John Doe (my key) <johndoe@sylabs.io>
   C: 2018-08-21 20:14:39 +0200 CEST
   F: D87FE3AF5C1F063FCBCC9B02F812842B5EEE5934
   L: 4096
```
Copy your fingerprint (e.g., in the example above the fingerprint is D87FE3AF5C1F063FCBCC9B02F812842B5EEE5934) and push it to the keystore:
```bash
singularity keys push <Your key's fingerprint>
```
Now sign the image:
```bash
singularity sign jedi-gnu-openmpi-spack-stack-XYZ.sif
```
You need to set your Sylabs access token first. To authenticate and upload the Singularity containers to Sylabs, follow the instructions here: https://docs.sylabs.io/guides/latest/user-guide/endpoint.html#public-sylabs-cloud
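As a minimal sketch (the authoritative steps are in the linked Sylabs documentation), the token can be registered with the default Sylabs Cloud endpoint like this:

```bash
# Assumes the default "SylabsCloud" remote endpoint is configured.
singularity remote use SylabsCloud
# Paste the access token generated at cloud.sylabs.io when prompted.
singularity remote login
```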
Then push the containers:

```bash
singularity push jedi-clang-mpich-spack-stack-XYZ.sif library://jcsda/public/jedi-clang-mpich-dev:latest
singularity push jedi-gnu-openmpi-spack-stack-XYZ.sif library://jcsda/public/jedi-gnu-openmpi-dev:latest
```
Note: we don’t have enough free space on the Sylabs platform to store both the previous and the latest containers for clang and gnu. Therefore, upload the latest clang container, then go to the website and delete the previous version of the clang container; then upload the latest gnu container and, once done, delete the previous version of the gnu container. In the past, Sylabs offered us more free storage in return for a writeup of how JCSDA uses Sylabs, but we don’t really need this.
We have an S3 bucket for backing up Singularity containers (jcsda-noaa account, jcsda containers backup). Pull the old containers and store them in the bucket before uploading new ones:
```bash
# as root
cd /home/ubuntu/spack-stack/keep_singularity
aws s3 sync ./ s3://jcsda-containers-backup/
```
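If the previous containers are not already present in the keep_singularity directory, they can be pulled from Sylabs first. This is only a sketch: the library paths and tags are assumed from the push commands above and may differ for older releases.

```bash
# Hypothetical: fetch the soon-to-be-deleted containers from Sylabs
# before syncing the backup directory to S3.
cd /home/ubuntu/spack-stack/keep_singularity
singularity pull jedi-gnu-openmpi-dev-previous.sif library://jcsda/public/jedi-gnu-openmpi-dev:latest
singularity pull jedi-clang-mpich-dev-previous.sif library://jcsda/public/jedi-clang-mpich-dev:latest
```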