Research Repository for Data and Diagnostics
GitHub repositories:
- https://github.com/JCSDA-internal/r2d2 (R2D2 V2 and R2D2 V3 server: feature/cleanup1)
- https://github.com/JCSDA-internal/r2d2-ingest (R2D2 data ingest request system)
- https://github.com/JCSDA-internal/r2d2-data (localhost R2D2 database)
- https://github.com/JCSDA-internal/r2d2-client (R2D2 V3 client: feature/cleanup1)
Related GitHub repositories:
- https://github.com/JCSDA-internal/diag-plots (R2D2 V3: feature/r2d2v3)
- https://github.com/JCSDA-internal/ewok (R2D2 V3: feature/r2d2v3)
- https://github.com/JCSDA-internal/skylab (R2D2 V3: feature/r2d2v3)
Documentation:
- https://github.com/JCSDA-internal/r2d2/blob/develop/TUTORIAL.md
- https://github.com/JCSDA-internal/r2d2/blob/develop/TABLES.md
R2D2 Planning Meeting Notes:
News Releases:
- https://www.jcsda.org/news-blog/2024/7/3/jcsda-team-publishes-and-presents-research-at-the-2024-international-conference-on-computational-science
- https://www.jcsda.org/news-blog/2024/4/19/joint-nasajcsda-code-sprint-greatly-increases-r2d2-sustainability
About
R2D2 is a lightweight Python API, an SQL schema, and a live, production, cloud-based MySQL database server that Skylab uses for data assimilation experiments. The current R2D2 is version 2, which uses the Python MySQL Connector to execute queries directly against the remote production MySQL database hosted in us-east-2 at r2d2.jcsda.org on port 3306. The new R2D2 is version 3, which uses a standardized client/server architecture based on REST HTTP API calls that are well defined by an OpenAPI 3.0 specification matching R2D2's SQL schema. Both the client and REST server APIs for R2D2 V3 are item-based rather than function-based like the R2D2 V2 database connector API. The SQL schemas for V2 and V3 are identical.
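To make the item-based REST idea concrete, here is a minimal sketch of how a V3-style client might assemble an HTTP query for an item. The endpoint path and parameter names are assumptions for illustration only; the real routes are defined by R2D2's OpenAPI 3.0 specification.

```python
# Illustration only: the endpoint path and parameter names are assumptions,
# not taken from R2D2's actual OpenAPI specification.
from urllib.parse import urlencode

def build_search_url(base_url, item, **filters):
    """Build an item-based GET URL such as /experiment?user=eric."""
    query = urlencode(sorted(filters.items()))
    return f"{base_url}/{item}?{query}"

url = build_search_url("http://127.0.0.1:8080", "experiment", user="eric")
print(url)  # http://127.0.0.1:8080/experiment?user=eric
```

The key contrast with V2 is that a V3 client never opens a MySQL connection; it only issues HTTP requests against the server, which owns the database access.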
Terminology
Procedures
Adding Files to EWOK_STATIC_DATA on HPC environments
Requirements:
- Access to Orion, since files are synced from there (although this can be worked around)
- Access to `jedi-para`, `jedipara`, or `role-jcsda` on HPCs
Steps:
- Log into Orion and sudo to the `role-jcsda` account.

  ```
  ssh -Y <user_name>@orion-login.hpc.msstate.edu
  sudo -su role-jcsda
  ```
- Copy static files from the staging location to the decided $EWOK_STATIC_DATA location. The staging location is usually given in the work ticket and is where the JCSDA team member has placed the data. Files can be renamed if needed. Make sure the permissions match those of the other files in the $EWOK_STATIC_DATA directory (hint: chmod 644).
- Run the `rsync` from the other HPCs. There is a script inside jedipara's `~/bin` directory that can be used to perform the rsync. Make sure the username is yours instead of the most recent team member's. If you get an ssh error, you can remove the machine from known_hosts and try again. An example of the script on Discover is located at /home/jedipara/bin/rsync-ewok-static-from-orion.sh. Example of syncing to Discover:

  ```
  ssh -XY <user_name>@login.nccs.nasa.gov
  sudo -iu jedipara
  cd bin
  vi rsync-ewok-static-from-orion.sh  # Edit to your Orion username
  bash rsync-ewok-static-from-orion.sh
  ```
- Check off each machine in the R2D2-data ticket as you sync.
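The copy-and-permissions step above can be sketched as follows; the temporary directories are stand-ins for the real staging location and $EWOK_STATIC_DATA, which you should substitute from the work ticket.

```python
# Sketch of the staging -> $EWOK_STATIC_DATA copy step. The temp dirs below
# are stand-ins for the real staging and $EWOK_STATIC_DATA locations.
import os
import shutil
import stat
import tempfile

staging = tempfile.mkdtemp()   # stand-in for the staging location
static = tempfile.mkdtemp()    # stand-in for $EWOK_STATIC_DATA

src = os.path.join(staging, "example_static_file.nc4")
with open(src, "w") as f:
    f.write("placeholder contents\n")

dst = shutil.copy(src, static)      # copy (and rename here if needed)
os.chmod(dst, 0o644)                # match permissions of the existing files

print(oct(stat.S_IMODE(os.stat(dst).st_mode)))  # 0o644
```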
Adding EWOK_STATIC_DATA Files to the AWS bucket
Execute the following from Orion. Note: this can be done as your own user, and `aws` is part of our JEDI environment provided by spack-stack.

```
/path/to/aws s3 sync --size-only --profile=jcsda-usaf-aws-us-east-2 /work2/noaa/jcsda/role-jcsda/static/ s3://r2d2.static.s3.us-east-2/
```
Adding Files for R2D2 Archive (such as background files)
Requirements:
This procedure requires the use of R2D2 Admin functions, therefore you will need:
- Recent version of R2D2 on the host the data is located on
- Make sure your r2d2, solo, and venv directories are accessible by the jedipara user. Hint:

  ```
  chmod g+rX jedi-bundle
  chmod o+rX jedi-bundle
  chmod -R g+rX r2d2
  chmod -R g+rX solo
  chmod -R g+rX venv
  ```
Steps (example using Discover):
- Log into the HPC where the data to be ingested is located, become the jedipara or role-jcsda user (depending on the HPC), and go to the location for the R2D2 archive.

  ```
  ssh -XY <user_name>@login.nccs.nasa.gov
  sudo -iu jedipara
  cd /discover/nobackup/projects/jcsda/s2127
  ```
- Set up your venv, using the setup.sh that was created in the requirements section. You might need to update $JEDI_ENV.

  ```
  vi setup.sh  # Verify JEDI_ENV location
  source setup.sh
  ```
- Start a screen session so no work will be lost if you get logged out.

  ```
  screen -S r2d2_ingest
  ```
- Verify that your venv is still loaded; if not, re-load it following the same steps as #2.
- Check your r2d2 admin utility access. If this does not work, make sure you followed the requirements.

  ```
  python3
  >>> from r2d2.util.admin_util import AdminUtil
  ```
- Before moving an entire experiment, you can check which files will be moved by using R2D2Data.search, looping over all of the R2D2Data items (analysis, bias_correction, diagnostic, feedback, forecast, media, observation).
  ```
  >>> from r2d2 import R2D2Data
  >>> R2D2Data.search(item='forecast', experiment='e65ab2')
  # Returns: [{'forecast_index': 3207928, 'model': 'geos', 'experiment': 'e65ab2', 'file_extension': 'nc4', 'resolution': 'c90', 'domain': '', 'file_type': 'bkg', 'step': 'PT3H', 'tile': -9999, 'member': -9999, 'date': datetime.datetime(2022, 2, 15, 6, 0), 'create_date': datetime.datetime(2024, 8, 23, 20, 52, 47), 'mod_date': datetime.datetime(2024, 8, 23, 20, 52, 47)}]
  ```
- Use the AdminUtil.move_experiment function to move the experiment given in the r2d2-ingest ticket to oper and the correct data_store. You can refer to the r2d2 code to check which arguments are needed and how to use them.

  ```
  >>> AdminUtil.move_experiment(source_experiment='<expid>', target_experiment='oper', ensemble_data_store_type='<data_store_type>')
  ```
- To get the data to the other HPCs, sync from your origin system to S3, then sync the data stores.

  ```
  /path/to/aws s3 sync --size-only --profile=jcsda-usaf-aws-us-east-2 /discover/nobackup/projects/jcsda/s2127/r2d2-archive-nccs/ s3://r2d2-archive-jcsda-usaf-aws-us-east-2/
  python3
  >>> from r2d2.util.admin_util import AdminUtil
  >>> AdminUtil.sync_data_stores(source_data_store='r2d2-archive-nccs', target_data_store='r2d2-archive-jcsda-usaf-aws-us-east-2')
  ```
- Log into MSU's Orion, rsync from S3, and sync the data_stores. Then log into the other HPCs, sync the NWSC/SSEC systems from MSU using the sync scripts in ~/bin, and sync the data_stores.

  ```
  ssh -Y <user_name>@orion-login.hpc.msstate.edu
  sudo -su role-jcsda
  # Load venv to get aws modules, or load modules following jedi-docs
  # To sync archive:
  aws s3 sync --size-only --profile=jcsda-usaf-aws-us-east-2 s3://r2d2-archive-jcsda-usaf-aws-us-east-2/ /work2/noaa/jcsda/role-jcsda/r2d2-archive-msu/
  # To sync data_stores:
  python3
  >>> from r2d2.util.admin_util import AdminUtil
  >>> AdminUtil.sync_data_stores(source_data_store='r2d2-archive-jcsda-usaf-aws-us-east-2', target_data_store='r2d2-archive-msu')
  ```
- Finally, run the `rsync` and sync the data_stores from the other data hubs. There is a script inside jedipara's `~/bin` directory that can be used to perform the rsync. Make sure the username is yours instead of the most recent team member's. If you get an ssh error, you can remove the machine from known_hosts and try again. Note: if you need to find the value for data_hub that is stored in R2D2, you can use R2D2Index.search(item='data_hub'). Run the following steps for each data hub:

  ```
  ssh -XY <user_name>@<hpc>
  sudo -iu jedipara
  cd bin
  vi rsync-r2d2-archive-from-msu-<user-name>.sh  # Edit to your Orion username
  # Run the rsync for archive:
  bash rsync-r2d2-archive-from-msu-<user-name>.sh
  # Sync the data_stores
  python3
  >>> from r2d2.util.admin_util import AdminUtil
  >>> AdminUtil.sync_data_stores(source_data_store='r2d2-archive-jcsda-usaf-aws-us-east-2', target_data_store='r2d2-archive-<data_hub>')
  ```
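The pre-move check described earlier, running R2D2Data.search for every item type, can be sketched as below. The helper and stub are hypothetical; with R2D2 installed you would pass R2D2Data.search itself rather than the stub.

```python
# Sketch of looping R2D2Data.search over every R2D2Data item type for one
# experiment. The stub stands in for R2D2Data.search so this runs without
# an R2D2 installation; pass R2D2Data.search in a real session.
ITEMS = ('analysis', 'bias_correction', 'diagnostic', 'feedback',
         'forecast', 'media', 'observation')

def search_all_items(search, experiment):
    """Collect per-item search results, e.g. search=R2D2Data.search."""
    return {item: search(item=item, experiment=experiment) for item in ITEMS}

def stub_search(item, experiment):
    # Placeholder for R2D2Data.search; returns no matches.
    return []

results = search_all_items(stub_search, 'e65ab2')
print({item: len(found) for item, found in results.items()})
```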
Adding observation files to R2D2 Archive
Prerequisites:
- IODA-formatted observation files to be moved and synced; these must be VERIFIED by the observation team
- Access to R2D2 and the feature/new_scripts branch
Steps:
- The easiest way is to log into two Orion shells, one as user role-jcsda and the other as your own user
- As your own user, update your copy of the r2d2-ingest code from https://github.com/JCSDA-internal/r2d2-ingest

  ```
  ssh -Y <user_name>@orion-login.hpc.msstate.edu
  cd /work2/noaa/jcsda/<user_name>/JEDI/jedi-bundle  # Wherever you have or keep jedi-bundle
  git clone https://github.com/JCSDA-internal/r2d2-ingest.git
  cd r2d2-ingest
  git checkout feature/new_scripts
  ```
- As role-jcsda, use the environment from your individual setup of jedi/r2d2-ingest. There are example setup.sh scripts in /work2/noaa/jcsda/role-jcsda to use and refer to.
  ```
  ssh -Y <user_name>@orion-login.hpc.msstate.edu
  sudo -su role-jcsda
  cd /work2/noaa/jcsda/role-jcsda
  vi setup.sh  # Edit JEDI_ENV to your location
  ```
- Usually the observation team will give you the location of their log file, which contains the output from their local R2D2 store. Inside r2d2-ingest, scripts_v2/parse/parse_r2d2_obs_log.py will dump the r2d2 indexes into a text file. Update parse_r2d2_obs_log.py (or a file that suits your needs) with the information from the observation team in the r2d2-ingest issue ticket, then run parse_r2d2_obs_log.py (or your alternative). (Question for Eric: is this done as your own user?)
- Update scripts_v2/parse/move_parsed_obs.py to point to the new text file.
- Once a txt file is generated for the experiment, start a screen session as the role-jcsda user to move the parsed obs to R2D2's archive data store.
  ```
  # Inside role-jcsda session
  screen -S move_obs
  # Verify your $JEDI_ENV is correct
  echo $JEDI_ENV
  python3 $JEDI_ENV/jedi-bundle/r2d2-ingest/scripts_v2/parse/move_parsed_obs.py
  ```
- After that completes, follow the steps to sync R2D2's archive and data_stores across all of our HPCs. WARNING: you must use the screen command (or nohup). The full obs ingest and sync could take hours, and background files could even take days!
Installing the r2d2 v3 server
This installation process needs to be updated and combined into one standard install. Note: The r2d2 server does NOT require spack-stack or any spack-stack-related dependencies.
```
cd r2d2
python3 -m pip install -e .
cd server
python3 -m pip install -e .
```
Starting the r2d2 v3 server
```
cd r2d2/server/app
pwd  # Returns .../r2d2/server/app
run_r2d2_app --port=8080 --debug

# You should see this output. This means that the server is running.
 * Serving Flask app 'app.app'
 * Debug mode: on
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:8080
 * Running on http://192.168.10.66:8080
Press CTRL+C to quit
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 912-946-391
```
Installing the r2d2 v3 client
```
cd r2d2-client
python3 -m pip install -e .
```
How to use the r2d2 v3 client generator
```
cd r2d2/server/app
python3
>>> from generator import Generator
>>> Generator().generate(client_output_path='/Users/eric2/jedi/jedi-bundle/r2d2-client/src/r2d2_client/r2d2_client.py')
>>> # OR
>>> Generator(selected_item='observation').generate(client_output_path='/Users/eric2/jedi/jedi-bundle/r2d2-client/src/r2d2_client/r2d2_client.py')
```
Using the r2d2 v3 client
```
python3
>>> from r2d2_client import R2D2Client
>>> R2D2Client.search_experiment(user='eric')
```
How to launch the Swagger Editor for editing app.yaml using a localhost Docker container
```
docker pull swaggerapi/swagger-editor
docker run -d -p 80:8080 swaggerapi/swagger-editor
```
Important Reference Links
- https://swagger.io/specification/v3/
- https://editor.swagger.io/ (Online Version)
- https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods
- https://developer.mozilla.org/en-US/docs/Web/HTTP/Status