Docker installation
Step-by-step guide
Download Docker image for JEDI
[xinzhang@localhost jedi]$ docker container rm $(docker container ls -aq) # clean up all containers (running or stopped)
[xinzhang@localhost jedi]$ docker image rm -f $(docker image ls -aq) # clean up all cached images
[xinzhang@8-236 ~]$ docker pull jcsda/jedi-docker:gnu7 # pull the JEDI Docker image from the registry
gnu7: Pulling from jcsda/jedi-docker
ae79f2514705: Pull complete
5ad56d5fc149: Pull complete
170e558760e8: Pull complete
395460e233f5: Pull complete
6f01dc62e444: Pull complete
fb1cc11fd1c7: Pull complete
58bde7425ccf: Pull complete
0ee8a8ab489f: Pull complete
66182f50c1b8: Pull complete
8c5e9c726feb: Pull complete
47feb6bfe9f4: Pull complete
03cc9d13057b: Pull complete
Digest: sha256:c0441594a97f339e5c9d79da7551c7a4ee81254d835d1035e2bc5f9fc656235a
Status: Downloaded newer image for jcsda/jedi-docker:gnu7
[xinzhang@8-236 ~]$ docker image ls # list the available local Docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
jcsda/jedi-docker gnu7 bf752acacf92 18 minutes ago 1.72GB
If you want to know more about the jedi-docker images, please refer to the jedi-docker documentation page.
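Since the same `docker run` invocation is repeated for every build step later in this guide, it can help to wrap it in a small helper script. This is a hypothetical convenience wrapper (the name `jedi_shell.sh` is my own); it assumes your working tree lives in `~/jedi` as in the steps below:

```shell
# hypothetical helper: start a throwaway container from the image just pulled,
# with the host's ~/jedi tree mounted at /jedi inside the container
cat > jedi_shell.sh <<'EOF'
#!/bin/sh
exec docker run -it --rm -v "$HOME/jedi:/jedi" jcsda/jedi-docker:gnu7 "$@"
EOF
chmod +x jedi_shell.sh
sh -n jedi_shell.sh && echo "syntax OK"   # sanity-check the script without running docker
```

Running `./jedi_shell.sh` then drops you into the container exactly as the `docker run` commands in the sections below do.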
Check out the GSI code and get case (T24) data (outside of container)
[xinzhang@8-236 ~] mkdir -p ~/jedi/code
[xinzhang@8-236 ~] cd ~/jedi/code
[xinzhang@8-236 code]$ git clone https://Your_UserName@bitbucket.org/jcsda/gsi-vlab.git gsi # Check out the GSI code, please replace Your_UserName with your bitbucket username
[xinzhang@8-236 code]$ cd gsi/
[xinzhang@8-236 gsi]$ git submodule init # Check out the external repositories
Submodule 'fix' (git@bitbucket.org:jcsda/fix) registered for path 'fix'
Submodule 'libsrc' (git@bitbucket.org:jcsda/external-libs) registered for path 'libsrc'
[xinzhang@8-236 gsi]$ git submodule update # Check out the external repositories
Cloning into 'fix'...
remote: Counting objects: 332, done.
remote: Compressing objects: 100% (235/235), done.
remote: Total 332 (delta 102), reused 320 (delta 96)
Receiving objects: 100% (332/332), 961.58 MiB | 357.00 KiB/s, done.
Resolving deltas: 100% (102/102), done.
Checking connectivity... done.
Submodule path 'fix': checked out 'd13e5a23dde538ceefedc39746b029878ee6ff60'
Cloning into 'libsrc'...
remote: Counting objects: 896, done.
remote: Compressing objects: 100% (630/630), done.
remote: Total 896 (delta 263), reused 890 (delta 260)
Receiving objects: 100% (896/896), 5.57 MiB | 1.59 MiB/s, done.
Resolving deltas: 100% (263/263), done.
Checking connectivity... done.
Submodule path 'libsrc': checked out '8fa69e1fd011b658de6cd2e9f08cf1ad77566f4f'
[xinzhang@8-236 gsi] cd fix
[xinzhang@8-236 gsi] git checkout rev1
[xinzhang@8-236 jedi]$ cd ~/jedi
[xinzhang@8-236 jedi]$ wget ftp://ftp.ucar.edu/pub/mmm/xinzhang/caseT24.tar.gz
If you cannot run wget, try curl and skip the next two steps:
curl ftp://ftp.ucar.edu/pub/mmm/xinzhang/caseT24.tar.gz | tar xvz
[xinzhang@8-236 jedi]$ tar xvf caseT24.tar.gz
[xinzhang@localhost jedi]$ rm caseT24.tar.gz
[xinzhang@localhost jedi]$ ls -la
total 1163160
drwxrwxr-x. 5 xinzhang xinzhang 64 Oct 4 12:11 .
drwxrwxr-x. 4 xinzhang xinzhang 33 Oct 4 11:30 ..
drwxrwxr-x. 3 xinzhang xinzhang 17 Oct 4 11:30 code
drwxrwxr-x. 4 xinzhang xinzhang 36 Oct 3 13:34 data
drwxrwxr-x. 2 xinzhang xinzhang 32 Oct 3 13:55 test
1) T24 background (size: 52M):
On Theia: /scratch3/BMC/wrfruc/mhu/jedi/casedata/T24_2016030406/gest24/
On Docker: ~/jedi/data/case/T24_2016030406/gest24
aircraft_t_bias.gdas.2016030400 sfcf05.gdas.2016030400 sigf04.gdas.2016030400
biascr.gdas.2016030400 sfcf06.gdas.2016030400 sigf05.gdas.2016030400
biascr_pc.gdas.2016030400 sfcf07.gdas.2016030400 sigf06.gdas.2016030400
radstat.gdas.2016030400 sfcf08.gdas.2016030400 sigf07.gdas.2016030400
sfcf03.gdas.2016030400 sfcf09.gdas.2016030400 sigf08.gdas.2016030400
sfcf04.gdas.2016030400 sigf03.gdas.2016030400 sigf09.gdas.2016030400
2) T24 ensemble forecast (size: 178M):
On Theia:/scratch3/BMC/wrfruc/mhu/jedi/casedata/T24_2016030406/enst24
On Docker: ~/jedi/data/case/T24_2016030406/enst24
20 members, hourly output from the 3-h to 9-h forecast initialized at 2016030400; files for member 020, for example:
sfg_2016030400_fhr03s_mem020
sfg_2016030400_fhr04s_mem020
sfg_2016030400_fhr05s_mem020
sfg_2016030400_fhr06s_mem020
sfg_2016030400_fhr07s_mem020
sfg_2016030400_fhr08s_mem020
sfg_2016030400_fhr09s_mem020
3) Observations:
On Theia:/scratch3/BMC/wrfruc/mhu/jedi/casedata/T24_2016030406/obs
On Docker: ~/jedi/data/case/T24_2016030406/obs
3.1) Original observation file: 1bamua.gdas.2016030406, prepbufr.gdas.2016030406
3.2) Small observation file: 1bamua.gdas_picked, prepbufr_picked
List of observations in prepbufr_listobs.txt:
header: number, level number, SID XOB YOB DHR TYP ELV SAID T29
observation:POB QOB TOB ZOB UOB VOB PWO CAT PRSS
obs quality: PQM QQM TQM ZQM WQM NUL PWQ
obs error: POE QOE TOE NUL WOE NUL PWE
List of observations in 1bamua_listobs.txt:
header: number, channel number, SAID FOVN YEAR MNTH DAYS HOUR MINU SECO CLAT CLON CLATH CLONH HOLS
header 2: SAZA SOZA BEARAZ SOLAZI
obs (channel):
3.3) Utility read_ncobs:
1) code location:
gsi/util/read_ncobs
2) compile the code:
make
3) executable is: read_ncobs.exe
It reads BUFR radiance data from the sample file amsua_n19_wprofiles.nc4.
Build GSI code and run GSI case
Suppose you have already checked out the GSI code as $HOME/jedi/code/gsi and have the test case directory as $HOME/jedi/data.
[xinzhang@8-236 jedi]$ cd ~
[xinzhang@8-236 ~] mkdir -p ~/jedi/build/gsi # Prepare the directory for GSI building, out-of-source building
[xinzhang@8-236 ~]$ docker run -it --rm -v $(pwd)/jedi:/jedi jcsda/jedi-docker # Start a container instance from the image, mount local $(pwd)/jedi to /jedi of container.
root@c959e2137acc:/usr/local$ cd /jedi/build/gsi # Enter into directory for GSI building
root@339677624bb2:/jedi/build/gsi$ export FC=mpif90
root@339677624bb2:/jedi/build/gsi$ cmake -DBUILD_CORELIBS=ON -DUSE_WRF=OFF -DBUILD_GLOBAL=ON /jedi/code/gsi
root@339677624bb2:/jedi/build/gsi$ make -j`nproc` # parallel compilation with the available cores
root@339677624bb2:/jedi/build/gsi$ ls -al bin/
total 44560
drwxr-xr-x. 2 root root 44 Sep 29 17:24 .
drwxr-xr-x. 9 1000 1001 251 Sep 29 17:17 ..
-rwxr-xr-x. 1 root root 8266248 Sep 29 17:24 enkf_gfs.x
-rwxr-xr-x. 1 root root 37358984 Sep 29 17:23 gsi_global.x
root@339677624bb2:/jedi/code/gsi/build$ cd /jedi/test
root@339677624bb2:/jedi/test$ ./run_gsi_picked.csh
After a successful GSI run, we can check the O-B for each observation by reading the diagnostic (diag) files.
1) combine the subdomain diag files into one diag file:
#cd /jedi/test/testT24_picked
#ls dir.*
#cat dir.0000/conv_01 dir.0001/conv_01 > conv_ges
#cat dir.0000/amsua_metop-b_01 dir.0001/amsua_metop-b_01 > amsua_metop-b_ges
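When GSI runs on more than two tasks, the explicit pair of cat commands above misses the extra subdomains; a glob over dir.* handles any task count. This sketch uses a synthetic layout to stand in for GSI's dir.NNNN directories (real diag pieces are binary, but cat concatenates them the same way):

```shell
# synthetic subdomain layout standing in for GSI's dir.NNNN directories
mkdir -p dir.0000 dir.0001 dir.0002
for d in dir.0000 dir.0001 dir.0002; do
    echo "piece from $d" > "$d/conv_01"
done
# the shell expands dir.* in lexical order (0000, 0001, ...), matching task order
cat dir.*/conv_01 > conv_ges
wc -l < conv_ges
```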
2) compile the read_diag utilities:
#/jedi/code/gsi/util/Analysis_Utilities/read_diag
#make
This should generate two executables: read_diag_conv.exe and read_diag_rad.exe
3) read conventional observations:
#vi namelist.conv
It should be set up like this:
&iosetup
infilename='/jedi/test/testT24_picked/conv_ges',
outfilename='./results_conv_ges',
/
# ./read_diag_conv.exe
Now, we should see a text file: results_conv_ges.
The content of each column is listed below
1-3: variable name,station ID,itype,
4-7: obs relative time,latitude,longitude,pressure,
8-14:iuse,observation,background, O-B (for wind, U and then V)
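Given the column layout above, simple O-B statistics can be pulled out of results_conv_ges with awk. This is a sketch on a synthetic two-line sample (the real file's exact spacing is assumed); column 11 is taken as O-B for a mass variable, per the layout listed above:

```shell
# synthetic stand-in for results_conv_ges: 11 columns per the layout above,
# with O-B in column 11 for a mass variable
cat > sample_conv.txt <<'EOF'
t 72293 120 -0.50 39.10 -96.60 850.0 1 284.2 283.7 0.50
t 72456 120 0.25 37.80 -97.40 700.0 1 270.1 270.6 -0.50
EOF
# count, mean, and RMS of the O-B column
awk '{ n++; s += $11; ss += $11*$11 }
     END { printf "n=%d mean=%.3f rms=%.3f\n", n, s/n, sqrt(ss/n) }' sample_conv.txt
```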
4) read radiance diag:
#vi namelist.rad
It should be set up like this:
&iosetup
infilename='/jedi/test/testT24_picked/amsua_metop-b_ges',
outfilename='./results_amsua_metop-b_ges',
/
# ./read_diag_rad.exe
Now, we should see a new text file: results_amsua_metop-b_ges
The content of this text file is listed below:
head information (number 1-26):
1:observation latitude (degrees)
2:observation longitude (degrees)
3:model (guess) elevation at observation location
4:observation time (hours relative to analysis time)
5:sensor scan position
6:satellite zenith angle (degrees)
7:satellite azimuth angle (degrees)
8:solar zenith angle (degrees)
9:solar azimuth angle (degrees)
10:sun glint angle (degrees) (sgagl)
11:fractional coverage by water
12:fractional coverage by land
13:fractional coverage by ice
14:fractional coverage by snow
15:surface temperature over water (K)
16:surface temperature over land (K)
17:surface temperature over ice (K)
18:surface temperature over snow (K)
19:soil temperature (K)
20:soil moisture OR graupel water path (if gmi .or. saphir)
21:surface land type
22:vegetation fraction OR scattering index from AMSU-A (if radmod%lcloud_fwd .and. sea)
23:snow depth OR integrated CLWP (kg/m**2) from background (if radmod%lcloud_fwd .and. sea)
24:surface wind speed (m/s)
25:cloud fraction (%) OR if microwave:
cloud liquid water (kg/m**2) OR
cloud liquid water (kg/m**2) if(radmod%lcloud_fwd .and. sea)
clw (kg/m**2) from retrievals if(gmi .or. amsr2)
26:cloud top pressure (hPa) OR if microwave:
total column precip. water (kg/m**2) OR
retrieved CLWP (kg/m**2) from simulated BT if((radmod%lcloud_fwd .and. sea) .or. gmi .or. amsr2)
channel information (loop through each channel with column list), for each column:
1: observed brightness temperature (K)
2: observed - simulated Tb with bias correction (K)
3: observed - simulated Tb with no bias correction (K)
4: inverse observation error
5: quality control mark or event indicator
6: surface emissivity
7: stability index
8: indicator of cloudy consistency (if lcloud_fwd) OR d(Tb)/d(Ts)
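The namelists in steps 3) and 4) can also be written non-interactively with a heredoc instead of opening vi; here is a sketch for the conventional case, using the paths from the guide:

```shell
# write the read_diag_conv namelist without opening an editor
cat > namelist.conv <<'EOF'
&iosetup
  infilename='/jedi/test/testT24_picked/conv_ges',
  outfilename='./results_conv_ges',
/
EOF
cat namelist.conv
```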
Check out the OOPS code (outside of container)
[xinzhang@localhost jedi]$ cd ~/jedi/code
[xinzhang@localhost code]$ git clone -b feature/ufo https://github.com/UCAR/oops.git # Check out the oops code to oops directory
Build and test OOPS code
[xinzhang@localhost code]$ cd ~
[xinzhang@localhost ~] mkdir -p ~/jedi/build/oops # Prepare the directory for OOPS building, out-of-source building
[xinzhang@localhost ~]$ docker container run -it --rm -v $(pwd)/jedi:/jedi jcsda/jedi-docker # Start the container and mount local $(pwd)/jedi to /jedi of the container
root@3cfbdc6da2b0:/usr/local$ cd /jedi/build/oops
root@3cfbdc6da2b0:/jedi/build/OOPS$ ecbuild --build=release -DLAPACK_LIBRARIES=$LAPACK_LIBRARIES /jedi/code/oops
root@3cfbdc6da2b0:/jedi/build/OOPS$ make -j`nproc`
...
...
Scanning dependencies of target test_qg_4densvar.x
[ 99%] Built target test_qg_4densvar.x
[ 99%] Linking CXX executable test_qg_increment
[ 99%] Built target test_qg_increment
[100%] Linking CXX executable test_qg_localization
[100%] Built target test_qg_localization
root@3cfbdc6da2b0:/jedi/build/OOPS/oops$ ctest
...
...
108/108 Test #108: test_qg_dfi ...................... Passed 0.46 sec
100% tests passed, 0 tests failed out of 108
Label Time Summary:
boost = 1.13 sec (53 tests)
executable = 1.16 sec (56 tests)
fortran = 0.04 sec (3 tests)
oops = 23.89 sec (108 tests)
script = 22.73 sec (52 tests)
Total Test time (real) = 23.94 sec
Download WRF and WPS (outside of container)
[xinzhang@localhost jedi]$ cd ~/jedi/code
[xinzhang@localhost code]$ wget http://www2.mmm.ucar.edu/wrf/src/WPSV3.9.1.TAR.gz
[xinzhang@localhost code]$ tar xvf WPSV3.9.1.TAR.gz
[xinzhang@localhost code]$ rm -f WPSV3.9.1.TAR.gz
[xinzhang@localhost code]$ wget http://www2.mmm.ucar.edu/wrf/src/WRFV3.9.1.1.TAR.gz
[xinzhang@localhost code]$ tar xvf WRFV3.9.1.1.TAR.gz
[xinzhang@localhost code]$ rm -f WRFV3.9.1.1.TAR.gz
If you cannot run wget, try curl:
curl http://www2.mmm.ucar.edu/wrf/src/WRFV3.9.1.1.TAR.gz | tar xvz
curl http://www2.mmm.ucar.edu/wrf/src/WPSV3.9.1.TAR.gz | tar xvz
[xinzhang@localhost code]$ ls -la
total 12
drwxrwxr-x. 6 xinzhang xinzhang 53 Oct 5 16:01 .
drwxrwxr-x. 13 xinzhang xinzhang 196 Oct 5 14:45 ..
drwxrwxr-x. 13 xinzhang xinzhang 243 Oct 5 14:29 gsi
drwxrwxr-x. 10 xinzhang xinzhang 4096 Oct 5 11:13 oops
drwxr-xr-x. 7 xinzhang xinzhang 4096 Aug 17 11:10 WPS
drwxr-xr-x. 17 xinzhang xinzhang 4096 Aug 28 15:01 WRFV3
Build WRF and WPS code
[xinzhang@localhost ~]$ docker container run -it --rm -v $(pwd)/jedi:/jedi jcsda/jedi-docker # Start the container and mount local $(pwd)/jedi to /jedi of the container
root@acb8561561ca:/usr/local$ cd /jedi/code/WRFV3
root@acb8561561ca:/jedi/code/WRFV3$ ./configure # Select 34 for dmpar gnu
root@acb8561561ca:/jedi/code/WRFV3$ vi configure.wrf # Because WRF's configure fails to detect MPI-2 support correctly, add -DMPI2_SUPPORT after DM_CC = mpicc
root@acb8561561ca:/jedi/code/WRFV3$ ./compile em_real # module_ra_rrtmg_swf.f90 occasionally fails with an internal compiler error; rerunning ./compile em_real usually resolves it
...
...
==========================================================================
build started: Thu Oct 5 22:12:18 UTC 2017
build completed: Thu Oct 5 22:21:58 UTC 2017
---> Executables successfully built <---
-rwxr-xr-x. 1 root root 36472488 Oct 5 22:21 main/ndown.exe
-rwxr-xr-x. 1 root root 36346432 Oct 5 22:21 main/real.exe
-rwxr-xr-x. 1 root root 35944328 Oct 5 22:21 main/tc.exe
-rwxr-xr-x. 1 root root 40051304 Oct 5 22:21 main/wrf.exe
==========================================================================
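Since the RRTMG compile failure above usually clears on a rerun, the retry can be scripted. This is a sketch of a generic retry helper (the name `retry_build` is my own); it reruns a build command until its expected product exists, up to three attempts:

```shell
# retry a build command until its expected product appears (max 3 tries)
retry_build() {
    cmd=$1; product=$2
    for attempt in 1 2 3; do
        $cmd && [ -e "$product" ] && { echo "succeeded on attempt $attempt"; return 0; }
    done
    echo "failed after 3 attempts"; return 1
}
# usage against WRF (inside the container, in /jedi/code/WRFV3):
#   retry_build "./compile em_real" main/wrf.exe
```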
root@acb8561561ca:/jedi/code/WRFV3$ cd ../WPS
root@e7c4293f629c:/jedi/code/WPS$ ./configure # Select 3 for dmpar gfortran
root@acb8561561ca:/jedi/code/WPS$ vi configure.wps # edit COMPRESSION_LIBS and COMPRESSION_INC as in the following lines, and remove -f90=gfortran from DM_FC and -cc=gcc from DM_CC
...
...
...
#
# Settings for Linux x86_64, gfortran (dmpar)
#
#
COMPRESSION_LIBS = -L/usr/local/lib -ljasper -lpng -lz
COMPRESSION_INC = -I/usr/local/include
FDEFS = -DUSE_JPEG2000 -DUSE_PNG
SFC = gfortran
SCC = gcc
DM_FC = mpif90
DM_CC = mpicc
...
...
...
root@acb8561561ca:/jedi/code/WPS$ ./compile
root@acb8561561ca:/jedi/code/WPS$ ls -al *.exe
lrwxrwxrwx. 1 root root 23 Oct 6 16:02 geogrid.exe -> geogrid/src/geogrid.exe
lrwxrwxrwx. 1 root root 23 Oct 6 16:02 metgrid.exe -> metgrid/src/metgrid.exe
lrwxrwxrwx. 1 root root 21 Oct 6 16:02 ungrib.exe -> ungrib/src/ungrib.exe
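The configure.wps edits described above can also be applied non-interactively with sed. This sketch demonstrates the substitutions on a synthetic file (`configure.wps.demo`, a stand-in for the real configure.wps); run the same sed command on the real file:

```shell
# synthetic file mimicking the relevant configure.wps lines
cat > configure.wps.demo <<'EOF'
COMPRESSION_LIBS    = -L/some/path -ljasper -lpng -lz
COMPRESSION_INC     = -I/some/path
DM_FC               = mpif90 -f90=gfortran
DM_CC               = mpicc -cc=gcc
EOF
# point the compression libs/includes at /usr/local and strip the compiler overrides
sed -i -e 's|^COMPRESSION_LIBS.*|COMPRESSION_LIBS = -L/usr/local/lib -ljasper -lpng -lz|' \
       -e 's|^COMPRESSION_INC.*|COMPRESSION_INC = -I/usr/local/include|' \
       -e 's/ -f90=gfortran//' \
       -e 's/ -cc=gcc//' configure.wps.demo
grep -E 'COMPRESSION|DM_' configure.wps.demo
```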
Download MPAS code (outside of container)
[xinzhang@localhost code]$ git clone -b v5.2 https://github.com/MPAS-Dev/MPAS-Release.git
Build MPAS code
[xinzhang@localhost MPAS-Release]$ cd ~
[xinzhang@localhost ~]$ docker container run -it --rm -v $(pwd)/jedi:/jedi jcsda/jedi-docker # Start the container and mount local $(pwd)/jedi to /jedi of the container
root@acb8561561ca:/usr/local$ cd /jedi/code/MPAS-Release
root@acb8561561ca:/jedi/code/MPAS-Release$ vi ./src/core_atmosphere/physics/checkout_data_files.sh # Change the MPAS-Data clone URL from git:// to https://
root@acb8561561ca:/jedi/code/MPAS-Release$ git diff
diff --git a/src/core_atmosphere/physics/checkout_data_files.sh b/src/core_atmosphere/physics/checkout_data_files.sh
index e62b466..f2c3cba 100755
--- a/src/core_atmosphere/physics/checkout_data_files.sh
+++ b/src/core_atmosphere/physics/checkout_data_files.sh
@@ -56,7 +56,7 @@ fi
which git
if [ $? -eq 0 ]; then
echo "*** trying git to obtain WRF physics tables ***"
- git clone git://github.com/MPAS-Dev/MPAS-Data.git
+ git clone https://github.com/MPAS-Dev/MPAS-Data.git
if [ $? -eq 0 ]; then
mv MPAS-Data/atmosphere/physics_wrf/files/* physics_wrf/files
rm -rf MPAS-Data
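The one-line change in the diff above can also be made non-interactively with sed instead of vi. Shown here on a one-line stand-in for checkout_data_files.sh (`checkout_demo.sh` is illustrative); run the same sed command on the real script:

```shell
# synthetic copy of the relevant line from checkout_data_files.sh
echo 'git clone git://github.com/MPAS-Dev/MPAS-Data.git' > checkout_demo.sh
# switch the clone URL from the git:// protocol to https://
sed -i 's|git://github.com/MPAS-Dev/MPAS-Data.git|https://github.com/MPAS-Dev/MPAS-Data.git|' checkout_demo.sh
cat checkout_demo.sh
```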
root@acb8561561ca:/jedi/code/MPAS-Release$ make gfortran CORE=atmosphere
...
...
make[2]: Leaving directory '/home/xinzhang/jedi/code/MPAS-Release/src/core_atmosphere'
*******************************************************************************
MPAS was built with default double-precision reals.
Debugging is off.
Parallel version is on.
Papi libraries are off.
TAU Hooks are off.
MPAS was built without OpenMP support.
MPAS was built with .F files.
The native timer interface is being used
Using the PIO 1.x library.
*******************************************************************************
make[1]: Leaving directory '/home/xinzhang/jedi/code/MPAS-Release'
Test MPAS
root@acb8561561ca:/jedi/code/MPAS-Release$ wget ftp://ftp.ucar.edu/pub/mmm/xinzhang/mpas_10242_case.tgz
root@acb8561561ca:/jedi/code/MPAS-Release$ tar xvf mpas_10242_case.tgz
root@acb8561561ca:/jedi/code/MPAS-Release$ mpirun --allow-run-as-root -np 8 atmosphere_model # or mpirun --allow-run-as-root -np 1 atmosphere_model
task 0 of 8 is running
task 1 of 8 is running
task 2 of 8 is running
task 3 of 8 is running
task 6 of 8 is running
task 7 of 8 is running
task 4 of 8 is running
task 5 of 8 is running
Check out the FV3 code (outside of container)
[xinzhang@localhost jedi]$ cd ~/jedi/code
[xinzhang@localhost code]$ git clone https://Your_UserName@bitbucket.org/jcsda/comfv3.git comfv3 # Check out the FV3 code, please replace Your_UserName with your bitbucket username
[xinzhang@localhost code] cd comfv3
[xinzhang@localhost comfv3] git submodule init
[xinzhang@localhost comfv3] git submodule update
[xinzhang@localhost comfv3] git branch -a
[xinzhang@localhost comfv3] git checkout feature/gfortran_build
[xinzhang@localhost comfv3] cd FV3
[xinzhang@localhost FV3] git checkout feature/gfortran_build
[xinzhang@localhost FV3] cd ../NEMS
[xinzhang@localhost NEMS] git checkout feature/gfortran_build
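The three checkouts above (superproject, FV3, NEMS) follow the same pattern, so they can be done in one pass. This is a sketch of a small helper (the name `checkout_branch_in` is my own) that checks out a given branch in each listed work tree:

```shell
# check out the same branch in a list of work trees
checkout_branch_in() {
    branch=$1; shift
    for d in "$@"; do
        git -C "$d" checkout "$branch" || return 1
    done
}
# inside ~/jedi/code/comfv3, after git submodule update:
#   checkout_branch_in feature/gfortran_build . FV3 NEMS
```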
Build the FV3 code
[xinzhang@localhost NEMS]$ cd ~
[xinzhang@localhost ~]$ docker container run -it --rm -v $(pwd)/jedi:/jedi jcsda/jedi-docker # Start the container and mount local $(pwd)/jedi to /jedi of the container
root@acb8561561ca:/usr/local$ cd /jedi/code/comfv3
root@acb8561561ca:/jedi/code/comfv3$ cd release/v0/exp/
root@acb8561561ca:/jedi/code/comfv3/release/v0/exp$ ./build.sh macgnu
...
...
...
Elapsed time 66 seconds. Compiling HYDRO=Y 32BIT=N finished
+ cp /home/xinzhang/jedi/code/comfv3/release/v0/exp/../../../tests/fv3_1.exe ../NEMS/exe/fv3_gfs_hydro.prod.64bit.x
+ rm /home/xinzhang/jedi/code/comfv3/release/v0/exp/../../../tests/fv3_1.exe
+ exit 0
Check out and build the CRTM V2.2.3 code
root@acb8561561ca:/usr/local$ cd /jedi/code
root@acb8561561ca:/jedi/code> git clone https://YourUserName@bitbucket.org/jcsda/crtm-release.git crtm_v2.2.3 # please replace YourUserName with your bitbucket username
root@acb8561561ca:/jedi/code> cd crtm_v2.2.3/
root@acb8561561ca:/jedi/code/crtm_v2.2.3> source config-setup/gfortran.setup
root@acb8561561ca:/jedi/code/crtm_v2.2.3> ./configure --prefix=/jedi/build/crtm
root@acb8561561ca:/jedi/code/crtm_v2.2.3> make
root@acb8561561ca:/jedi/code/crtm_v2.2.3> make check
root@acb8561561ca:/jedi/code/crtm_v2.2.3> make install