Experiments and Workflows Orchestration Kit

GitHub repository: https://github.com/JCSDA-internal/ewok

Documentation: https://github.com/JCSDA-internal/ewok/blob/develop/README.md


EWOK Developer Section

EWOK currently uses ECMWF's ecFlow software system or Cylc for implementing workflows. This section describes how EWOK, ecFlow, and Cylc interact to run a Skylab experiment. Instructions for setting up your environment can be found in the JEDI Documentation.

EWOK contains three important components of the workflow:

  1. Suites - reference tasks and task dependencies. The scripts used for suites are located at {JEDI_SRC}/ewok/src/ewok/suites/.

    A suite will typically build and set up JEDI, define a cycle and loop over it, and control tasks and triggers for the experiment. All of our suites are written in Python. Within a suite file, you can identify the tasks it calls by the "suite.addTask()" function. A standard call is passed a task and its configuration. Additional inputs to tasks are used as triggers. An example of this is:

    rawfile = suite.addTask(ewok.fetchObservations, obsconf)
    iodafiles = suite.addTask(ewok.convertObservations, obsconf, rawfile=rawfile)

    Here "rawfile" is an output of the "fetchObservations" task. Since "rawfile" is used as an input to the "convertObservations" task, "convertObservations" will not run until "fetchObservations" completes; this dependency is what we refer to as a "trigger".

  2. Tasks - select the runtime script and set up configuration needed at runtime. The task files are kept at {JEDI_SRC}/ewok/src/ewok/tasks. These are all Python-based scripts. When adding a new task file, you must update ewok/src/ewok/__init__.py. Inside the task file, you can set up variables to be used at runtime. It is important to note that if your runtime script is a bash script, use "self.RUNTIME_ENV"; if your runtime script is a Python script, use "self.RUNTIME_YAML". The following line is how you tell the workflow which runtime file to execute:

    self.command = os.path.join(os.environ.get("JEDI_SRC"),
                                          "ewok/src/runtime/getObservationsRun.py")
  3. Runtime - scripts executed during the experiment. Runtime files are located at {JEDI_SRC}/ewok/src/runtime. These are primarily written in bash or Python and are executed once all of their triggers are satisfied.
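The trigger mechanism above can be illustrated with a small self-contained sketch. The Suite class and task functions below are toy stand-ins that mimic the addTask() pattern; they are NOT the real ewok API:

```python
# Toy illustration of the ewok trigger pattern (not the real ewok API):
# passing one task's output as another task's keyword input makes the
# second task depend on the first.

class Suite:
    """Minimal stand-in for an ewok suite that records execution order."""

    def __init__(self):
        self.order = []

    def addTask(self, func, config, **inputs):
        # In ewok, keyword inputs produced by earlier tasks act as triggers;
        # here we simply run tasks as they are added and record the order.
        self.order.append(func.__name__)
        return func(config, **inputs)

def fetchObservations(config):
    return "raw_obs_file"          # stands in for the fetched raw file

def convertObservations(config, rawfile=None):
    return f"ioda({rawfile})"      # stands in for the converted IODA file

suite = Suite()
obsconf = {}
rawfile = suite.addTask(fetchObservations, obsconf)
iodafiles = suite.addTask(convertObservations, obsconf, rawfile=rawfile)
print(suite.order)  # → ['fetchObservations', 'convertObservations']
```

Because convertObservations needs rawfile, it cannot run until fetchObservations has produced it; in a real suite, ecFlow or Cylc enforces this ordering.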

ecFlow Guide

ecFlow UI 

The ecFlow UI is user friendly, and you can follow the instructions in the JEDI Documentation to get started. After creating an experiment you can use the UI to suspend and rerun tasks, view log files and configuration, and much more. In order to execute a command on a task, click on the task name in the UI. The top bar will then show which task you have selected, in the form "your_host → exp_id → task_name". You can right-click on the task to bring up a menu of options and select what you want to run or view. Some helpful hotkeys to use inside the UI are:

  • Suspend: command S (or ctrl S)
  • Rerun: command U (or ctrl U)
  • Execute: command E (or ctrl E)

The UI will also color the boxes to the left of a task to show the status of that task. The UI updates every 60 seconds; if you want to see the most recent status of the tasks, click the green refresh arrow at the top left of the screen. The colors of the status boxes mean:

  • Red: aborted
  • Green: active
  • Yellow: completed
  • Blue: queued
  • Cyan: submitted
  • Orange: suspended
  • Grey: unknown status

ecFlow Directories

As part of EWOK's setup, you will notice two variables that pertain to ecFlow and are needed to run an experiment: EWOK_WORKDIR and EWOK_FLOWDIR. EWOK_WORKDIR is where all experiment files generated by the workflow are saved, such as feedback files, background files, observations, and your forecasts. EWOK_FLOWDIR contains the configuration files and the runtime files that get executed. Tip: for testing small, on-the-fly changes after kicking off an experiment, you can edit the runtime files in EWOK_FLOWDIR and then restart the task. The runtime file in the EWOK repository will not be updated, but this method is useful if you need to force something to work or want to troubleshoot without touching the repo.

ecFlow Troubleshooting and FAQ

How to remove an experiment from ecflow?

You can run:

ecflow_client --delete=force yes /<exp_id>

To clean up all tasks run:

ecflow_client --delete=_all_ force

Note: for a full cleanup you also need to remove ${EWOK_FLOWDIR}/<exp_id>, ${EWOK_WORKDIR}/<exp_id>, <local experiments dir>/<exp_id>, and <local experiments dir>/<model>/<type>/<exp_id>. An error "ClientInvoker: Connection error" indicates that you need to add an extra port argument, where the port number is the value reported by "ecflow_start.sh":

ecflow_client --port=<int> --delete=_all_ force
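Taken together, a full manual cleanup touches four locations for a given experiment. As a sketch, the paths can be assembled like this (the layout comes from the note above; the example values "exp123", "gfs", and "hofx" are hypothetical placeholders):

```python
import os

def cleanup_paths(exp_id, flowdir, workdir, local_exp_dir, model, exp_type):
    """Directories to remove for a full manual cleanup of one experiment."""
    return [
        os.path.join(flowdir, exp_id),                         # ${EWOK_FLOWDIR}/<exp_id>
        os.path.join(workdir, exp_id),                         # ${EWOK_WORKDIR}/<exp_id>
        os.path.join(local_exp_dir, exp_id),                   # <local experiments dir>/<exp_id>
        os.path.join(local_exp_dir, model, exp_type, exp_id),  # <local experiments dir>/<model>/<type>/<exp_id>
    ]

# Review the paths before deleting anything:
for p in cleanup_paths("exp123", "/flow", "/work", "/home/me/experiments", "gfs", "hofx"):
    print(p)
```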

Where are the logs?

While the experiment is running, right-click on the task in the ecFlow UI, click Output, and pick the file to view (the path to that file is also shown). In some cases (e.g. variational experiments) the stdout/stderr logs can be found under ${EWOK_WORKDIR}/<exp_id>/<date>/. After the experiment has completed, the finishExperiment task will have cleaned up many of these logs. In most cases the yamls, jobs, and logs for the latest cycle can still be found in ${EWOK_FLOWDIR}/<exp_id>. To prevent this cleanup of the ewok dir, suspend the finishExperiment task via the GUI or the command line after starting the experiment.

How to run the task with e.g. OOPS_DEBUG on?

In the ecFlow UI, go to the task, right-click, and select Edit. That brings up a window where you can tick the pre-process box before editing; then edit the script and set the environment variable (or anything else you want to change). Then submit the edited script (top right).

How to check whether the ecflow_server is running?

Run the command:

ps -ux | grep ecflow

Note, the server may run on a different node, e.g. orion-login-4.

What is the best way to find out which experiment yaml was used for a particular experiment?

Follow the instructions available in the JEDI Documentation.

How to run 2 experiments with different versions of ewok on the same machine at the same time?

Once an experiment is running in ecFlow it no longer depends on your ewok repo. So in principle you can change branches in ewok and start another experiment. That works if you use your own ewok, not the default one that is installed on Orion, for example. You can also change ${JEDI_SRC} and ${JEDI_BUILD} to point to another set of executables/yamls between submitting experiments.

Debugging new experiments: an experiment failed on some task, and an experiment yaml file needed to be updated. Do I need to create a new experiment, or is there a way to restart the failed task?

You can edit the yaml files in ${EWOK_FLOWDIR}/<exp_id> (and/or rebuild the executable in ${JEDI_BUILD} if you are debugging). Then, in the ecFlow UI, right-click on the task and choose Rerun. You can also select Edit in the right-click menu, which brings up the script for that task; tick the pre-process box (upper right) and you can edit the script before submitting it. Once you are done debugging, don't forget to copy the changes back to GitHub; the edits in ${EWOK_FLOWDIR} are local to that experiment. Note that the pre-processing is done by ewok when create_experiment is executed, so changes made in the repo only take effect when you create a new experiment.

When do I need to run pip install -e in ewok? Do changes in suites and/or tasks require that?

If you use "pip install -e" you only need to do it once; it will always use the current version (so you can edit, change branches, etc.), and changes in suites and/or tasks do not require reinstalling.

How to rerun all failed tasks in the family?

Right-click on family, choose “Requeue aborted”.

ecflow shows that a task is running, but in fact it is not. How to resolve it?

If you can check the logfile and are sure it completed correctly, right-click and Set complete in the UI. If you know it failed, or cannot figure it out, right-click and Set aborted, then rerun.

How to limit the number of tasks submitted at a time for a particular family?

ecflow_client --alter add limit maxtasks 200 /YOUR_EXPERIMENT_ID
ecflow_client --alter add inlimit maxtasks /YOUR_EXPERIMENT_ID/an
# NOTE: Use delete instead of add to remove the limit.

How to make sure the logs in ${EWOK_FLOWDIR} aren’t removed after the experiment is finished?

Suspend finishExperiment task before the experiment is finished.

How to make sure the logs in ${EWOK_WORKDIR} aren’t removed after the experiment is finished?

Suspend the endCycle task before the cycle finishes. The cleanup during the endCycle task automatically removes cycle directories from the previous two cycles or older.
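The retention rule can be sketched as follows; this illustrates the policy (keep the two most recent cycle directories, delete anything older), not the actual ewok cleanup code:

```python
def cycles_to_remove(cycle_dirs, keep=2):
    """Return the cycle directories older than the `keep` most recent ones.

    Cycle directory names in YYYYMMDDTHHMMZ form sort chronologically
    as plain strings, so a lexicographic sort is enough here.
    """
    ordered = sorted(cycle_dirs)
    return ordered[:-keep] if len(ordered) > keep else []

print(cycles_to_remove(["20200624T0000Z", "20200624T0600Z",
                        "20200624T1200Z", "20200624T1800Z"]))
# → ['20200624T0000Z', '20200624T0600Z']
```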

cylc Guide

Setting up Cylc 

Cylc has many Python dependencies, which means it clashes with our ewok-env. A workaround is therefore to install it inside its own virtual environment on your local machine. Instructions are also included for using Discover's cylc installation, which works with spack-stack 1.9.x modules and GMAO's SWELL application.

Cylc Version 8:

Note: You can set up cylc-flow and cylc-uiserver in separate virtual environments. If you install cylc-flow in your JEDI-Skylab virtual environment, you can then install cylc-uiserver in a separate virtual env and use that environment to kick off the GUI. Follow the steps in "Adding cylc to your workflow venv" and then "Setting up venv for cylc GUI".

Adding cylc to your workflow venv:

If you want to run cylc with JEDI-Skylab workflows, you can force install cylc-flow into your workflow's virtual environment. 

  1. Activate your Skylab virtual environment, if you haven't already. 
    source $JEDI_ROOT/venv/bin/activate
  2. Force install cylc-flow. Note, cylc is installed in spack-stack but it has some compatibility issues, so it is easiest at this point to re-install it in your venv.
    pip install cylc-flow --force-reinstall
  3. (Optional, as needed) rsync is required for the workflow. If "which rsync" does not return the application (e.g. on a brand-new OrbStack machine), install it:
    sudo su
    apt install -y rsync
    exit
  4. Check cylc location and test with skylab/experiments/workflow-engine-test.yaml: 
    which cylc
    create_experiment.py skylab/experiments/workflow-engine-test.yaml

Setting up venv for cylc GUI (ex: mac):

  1. The UI needs Python 3.9, therefore run:
    brew install python@3.9
  2. Update PYTHONPATH:
    module purge
    unset PYTHONPATH
    unset PYTHONHOME
  3. Create a venv without spack-stack:
    python3.9 -m venv --system-site-packages cylc-venv
  4. Activate venv:
    source cylc-venv/bin/activate
  5. Install cylc:
    pip install cylc-flow
    pip install cylc-uiserver
    pip install cylc-rose metomi-rose
  6. Install optional extras:
    pip install 'cylc-flow[tutorial]'
    pip install 'cylc-uiserver[hub]'
  7. Graphviz:
    brew install graphviz
  8. To test the GUI:
    cylc gui

Setting up cylc localhost configuration:

Since cylc ignores PYTHONPATH, in order to run Skylab with the correct virtual environment you need to add a global.cylc file that runs an init-script before runtime to activate the JEDI venv. This file goes in ~/.cylc/flow/global.cylc. The install block is optional for now, but it sets your cylc work directory and run directory to mimic ecFlow. Note, these will automatically put a cylc-run directory under the parent EWOK_WORKDIR and EWOK_FLOWDIR directories.

vi ~/.cylc/flow/global.cylc

~/.cylc/flow/global.cylc
[install]
    [[symlink dirs]]  
        [[[localhost]]]
            work = ${EWOK_WORKDIR}
            run = ${EWOK_FLOWDIR}

[platforms]
    [[localhost]]
        hosts = localhost
        job runner = background
        global init-script = source ${JEDI_ROOT}/venv/bin/activate

Setting up cylc HPC configuration:

Similar to the localhost setup, you will need to add or update ~/.cylc/flow/global.cylc. The install block is optional for now, but it sets your cylc work directory and run directory to mimic ecflow. Note, these will automatically put a cylc-run directory under the parent EWOK_WORKDIR and EWOK_FLOWDIR directories. Example of global.cylc file for HPCs that use slurm/sbatch for jobs:

vi ~/.cylc/flow/global.cylc

~/.cylc/flow/global.cylc
[install]
    [[symlink dirs]]  
        [[[localhost]]]
            work = ${EWOK_WORKDIR}
            run = ${EWOK_FLOWDIR}

[platforms]
    [[localhost]]
        hosts = localhost
        job runner = background
        global init-script = source ${JEDI_ROOT}/venv/bin/activate

    [[compute]]
        hosts = localhost
        job runner = slurm
        install target = localhost
        global init-script = """
            source ${JEDI_ROOT}/venv/bin/activate
            export SLURM_EXPORT_ENV=ALL
            export HDF5_USE_FILE_LOCKING=FALSE
            ulimit -s unlimited || true
            ulimit -v unlimited || true
            """
Hercules Note:

On Hercules, it appears that the aws package is not found when only running source ${JEDI_ROOT}/venv/bin/activate. It is therefore best to source your setup.sh script instead of just the virtual environment: replace global init-script = source ${JEDI_ROOT}/venv/bin/activate with global init-script = source ${JEDI_ROOT}/setup.sh (or wherever you keep setup.sh). Then comment out all of the ecflow lines in setup.sh.

Discover via spack-stack:

  1. Load spack-stack modules
    #!/usr/bin/env bash
    
    # Initialize modules
    source $MODULESHOME/init/bash
    
    # Load python dependencies
    echo "Using SLES15 modules"
    module use /discover/swdev/jcsda/spack-stack/scu17/modulefiles
    module use /gpfsm/dswdev/jcsda/spack-stack/scu17/spack-stack-1.9.0/envs/ue-intel-2021.10.0/install/modulefiles/Core
    module load stack-intel/2021.10.0
    module load stack-intel-oneapi-mpi/2021.10.0
    module load stack-python/3.11.7
    module load py-pip/23.1.2

  2. Load cylc module and test
    # Load the cylc module
    
    module use -a /discover/nobackup/projects/gmao/advda/swell/dev/modulefiles/core/
    module load cylc/sles15_8.4.0
    
    # Run cylc command
    cylc "$@"

  3. You might need to create a file called $HOME/bin/cylc and make it executable in order to run locally: chmod +x $HOME/bin/cylc
    1. Note, this was not necessary when setting up and running on discover-mil with spack-stack 1.9.0 intel.
  4. Add the ~/.cylc/flow/global.cylc example shown above for HPCs that use slurm/sbatch for job submission.

Cylc TUI (Terminal User Interface)

The TUI is Cylc's built-in terminal interface. It is useful on HPCs where you cannot easily forward ports for the browser-based GUI, or when you just want a quick view of your workflow.

Launch: `cylc tui <workflow_name>`

If you are unsure of the workflow name, run `cylc scan` first to list all running workflows.

Navigation:

- Up/Down arrow keys move between tasks, families, and cycle points

- Enter expands or collapses a family or cycle point

- q quits the TUI

Task status symbols in the TUI:

- ○ waiting (task has unmet prerequisites)

- ◑ preparing / submitted (task is being prepared or has been submitted to the job runner)

- ● running (task is currently executing)

- ✓ succeeded (task completed successfully)

- ✗ failed (task exited with a non-zero return code)

- ⊘ submit-failed (job submission itself failed, e.g. slurm rejected it)

- ♦ held (task is paused and will not run until released)

Key operations from the TUI (select a task and press Enter to see the menu):

- Hold: pause a task so it will not run even when its prerequisites are met

- Release: un-hold a previously held task

- Trigger: force a task to run immediately, regardless of prerequisites (this is how you rerun a failed task)

- Kill: terminate a running task

- Log: view the job stdout (job.out) or stderr (job.err) directly in the terminal

Tips:

- You can also hold or release the entire workflow from the top-level entry in the TUI.

- The TUI refreshes automatically. You do not need to manually refresh like in ecFlow UI.

- To follow a specific cycle, expand the cycle point node with Enter and navigate into it.

Cylc Directories

Similar to ecFlow, Cylc uses $EWOK_WORKDIR and $EWOK_FLOWDIR. The key difference is that Cylc also creates a ~/cylc-run/ directory that contains symlinks back to these locations (when configured in global.cylc).

Key paths for a Cylc experiment:

- Flow definitions: ${EWOK_FLOWDIR}/<workflow_name>/ -- the generated flow.cylc file, task scripts, and YAML configuration

- Cylc run directory: ${EWOK_FLOWDIR}/cylc-run/<workflow_name>/run1/ -- created by cylc install; symlinked from ~/cylc-run/<workflow_name>/run1/ via the run symlink dir in global.cylc

- Experiment work directory: ${EWOK_WORKDIR}/<experiment_name>/ -- runtime output data (forecasts, analyses, observations, feedback files)

- Cylc task work: ${EWOK_WORKDIR}/cylc-run/<workflow_name>/run1/work/ -- per-cycle task working directories; symlinked from the run directory via the work symlink dir in global.cylc. Typically not used in our Skylab workflow.

- Logs: ~/cylc-run/<workflow_name>/run1/log/ -- scheduler logs, install log, and per-task job logs

- Job logs per task: ~/cylc-run/<workflow_name>/run1/log/job/<cycle_point>/<task_name>/<submit_number>/job.out and job.err, or access them from the TUI and GUI. Note, cylc's <cycle_point> is formatted as YYYYMMDDTHHMMZ (ex: 20200624T1800Z)

- Stats: ${EWOK_WORKDIR}/<experiment_name>/stats_<expid>.txt -- start/end timestamps, resources, and task IDs written by the EWOK task wrapper

Note: <workflow_name> refers to the full workflow name such as "cylc_<expid>". <experiment_name> refers to the expid generated by R2D2 or manually set.

Tip: The flow.cylc file in ${EWOK_FLOWDIR}/<workflow_name>/ is the pre-install source. After "cylc install" runs, the installed copy lives under ~/cylc-run/. If you need to edit and reinstall, edit the source copy and run "cylc reinstall <workflow_name>" followed by "cylc reload <workflow_name>".

Cylc Troubleshooting and FAQ

This section covers common issues you may encounter when running Cylc workflows with EWOK and how to resolve them.

In the section below, the following are used as placeholders:

  • <workflow_name> – refers to the full workflow name such as "cylc_<expid>". 
  • <experiment_name> – refers to the expid generated by R2D2 or manually set.
  • <cycle_point> – refers to the cycle date; for cylc this is formatted as YYYYMMDDTHHMMZ (ex: 20200624T1800Z)
  • <task_name> – refers to the name of your task. Note that cylc's task names have the families and cycles appended to the end due to its naming convention requirements (ex: plotStatistics_an)
  • <submit_number> – refers to the number of runs starting at “01” for your first run of the task. The <submit_number> increments each time a task is triggered.
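If you need to construct a <cycle_point> programmatically (for example to build a cylc trigger or cat-log command), a quick sketch using Python's strftime:

```python
from datetime import datetime

def to_cycle_point(dt):
    """Format a datetime as a cylc-style cycle point, e.g. 20200624T1800Z."""
    return dt.strftime("%Y%m%dT%H%MZ")

print(to_cycle_point(datetime(2020, 6, 24, 18, 0)))  # → 20200624T1800Z
```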

"cylc install" fails with "workflow already installed":

If you previously installed the workflow and need to start fresh, clean it first:  cylc clean <workflow_name> 

Then re-run create_experiment.py  or manually run cylc install  again. Note: cylc clean  removes the run directory and all associated logs. If you need the logs, copy them out first.

Task is stuck in "waiting" and never runs:

Check that the upstream tasks it depends on have completed successfully. You can inspect prerequisites with: cylc show <workflow_name>//<cycle_point>/<task_name> 

This will list all prerequisites and whether they are satisfied. Also check if the task or workflow is held:  cylc scan --states=held 

If the task is held, release it: cylc release <workflow_name>//<cycle_point>/<task_name> 

Task failed (shows ✗ in TUI or red in GUI):

First, check the job logs: cylc cat-log <workflow_name>//<cycle_point>/<task_name>

This shows stdout (job.out). To see stderr: cylc cat-log -f e <workflow_name>//<cycle_point>/<task_name>

You can also browse the log files directly at:

~/cylc-run/<workflow_name>/run1/log/job/<cycle_point>/<task_name>/<submit_number>/job.out

~/cylc-run/<workflow_name>/run1/log/job/<cycle_point>/<task_name>/<submit_number>/job.err

Submit-failed (job runner rejected the task):

This usually means slurm/PBS rejected the job submission. Common causes:

- Incorrect account or partition in global.cylc

- Requested resources exceed what is available on the HPC

- The queue or partition is down

Check the task job log for the slurm/PBS error, then verify the [platforms] section in ~/.cylc/flow/global.cylc and the directive header file being used.

global.cylc not being picked up:

Confirm the file exists at ~/.cylc/flow/global.cylc. You can dump the resolved global configuration with: cylc config 

This will show the merged configuration from all sources. Verify your platform, init-script, and symlink dir settings appear correctly.

Symlink directories not created in expected locations:

Verify the [install][[symlink dirs]] section in ~/.cylc/flow/global.cylc and that $EWOK_WORKDIR and $EWOK_FLOWDIR environment variables are set and exported before running create_experiment.py. If you changed global.cylc after a previous install, you will need to "cylc clean" and reinstall.

Workflow will not start after using the --no-submit flag:

The --no-submit flag (or -ns) only generates the suite without submitting. To start the workflow manually: cylc play <workflow_name> 

If the workflow was previously created with --suspend, you can also play it from the TUI or GUI.

How to stop a running workflow:

Graceful stop (lets active tasks finish): cylc stop <workflow_name> 

Immediate stop (kills active tasks): cylc stop --now --now <workflow_name> 

How to remove/clean a Cylc workflow:

cylc clean <workflow_name>

This removes ~/cylc-run/<workflow_name>/ and its symlink targets. To also clean up experiment data, remove ${EWOK_WORKDIR}/<experiment_name>/ and ${EWOK_FLOWDIR}/<workflow_name>/ manually.

How to check if a Cylc scheduler is running:

cylc scan 

This lists all workflows currently managed by a scheduler on the current host.

How to run 2 experiments at the same time with Cylc:

Each experiment gets its own workflow name (cylc_<expid>), so multiple Cylc workflows can run simultaneously without conflict. Make sure you have enough resources and that your global.cylc queue limits accommodate the combined task count.

Debugging Cylc Experiments Without Rerunning

These techniques let you investigate failures, inspect state, and make fixes without creating a brand new experiment.

Inspecting job logs after a failure:

View stdout: cylc cat-log <workflow_name>//<cycle_point>/<task_name> 

View stderr: cylc cat-log -f e <workflow_name>//<cycle_point>/<task_name> 

Or go directly to the log files on disk:

~/cylc-run/<workflow_name>/run1/log/job/<cycle_point>/<task_name>/<submit_number>/job.out  and

~/cylc-run/<workflow_name>/run1/log/job/<cycle_point>/<task_name>/<submit_number>/job.err 

Tip: The <submit_number> increments each time a task is triggered. If you have retriggered a task multiple times, look at the highest numbered directory for the latest attempt. The "NN" symlink always points to the latest submission.
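Picking the latest attempt programmatically can be sketched like this (equivalent to following the NN symlink; the demo directory layout is made up):

```python
import os
import tempfile

def latest_submit(task_log_dir):
    """Return the highest-numbered submit directory (e.g. '03'),
    skipping the 'NN' symlink cylc keeps alongside them."""
    submits = [d for d in os.listdir(task_log_dir) if d.isdigit()]
    return max(submits, key=int) if submits else None

# Demo on a throwaway directory structure:
with tempfile.TemporaryDirectory() as d:
    for sub in ("01", "02", "03", "NN"):
        os.mkdir(os.path.join(d, sub))
    print(latest_submit(d))  # → 03
```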

Viewing task prerequisites and state:

cylc show <workflow_name>//<cycle_point>/<task_name>

This shows the task's current state, all prerequisites (and whether each is satisfied), and all outputs.

Viewing the resolved workflow configuration:

To see the full flow.cylc with all Jinja2 variables expanded: cylc config <workflow_name> 

You can also inspect the source flow.cylc directly at ${EWOK_FLOWDIR}/<workflow_name>/flow.cylc

Editing runtime files in place (without re-creating the experiment):

Just like with ecFlow, you can edit the scripts and YAML configuration that EWOK generated in ${EWOK_FLOWDIR}/<workflow_name>/  and then re-trigger the task. This avoids having to recreate the entire experiment.

  1.  Edit the file you need to change, for example the task YAML: ${EWOK_FLOWDIR}/<workflow_name>/<task_name>.yaml Or the runtime script: ${EWOK_FLOWDIR}/<workflow_name>/<script_name>
  2. Trigger the task to rerun with your changes. This can be done from the GUI or TUI, or manually with: cylc trigger <workflow_name>//<cycle_point>/<task_name> If the GUI or TUI are not showing the task, the logs for this task can be found in: $EWOK_FLOWDIR/cylc-run/<workflow_name>/run1/log/job/<cycle_point>/<task_name>/<submit_number>
  3. Once you are done debugging, copy your changes back to the repository. The edits in $EWOK_FLOWDIR are local to that experiment only.

Setting debug environment variables without editing files:

Cylc provides cylc broadcast to inject environment variables into a running workflow on the fly. For example, to enable OOPS_DEBUG for a specific task and cycle:

cylc broadcast <workflow_name> -p <cycle_point> -n <task_name> -s "[environment]OOPS_DEBUG=1" 

This sets the variable for the next run of that task without editing any files. You can also broadcast to all tasks or all cycles: cylc broadcast <workflow_name> -s "[environment]OOPS_DEBUG=1"

To see active broadcasts: cylc broadcast <workflow_name> --display

To clear a broadcast: cylc broadcast <workflow_name> --clear 

Rerunning a single failed task:

From the CLI: cylc trigger <workflow_name>//<cycle_point>/<task_name> 

From the TUI: navigate to the task, press Enter to open the menu, and select Trigger.

This reruns the task within the existing workflow. It does not recreate the experiment or re-process any other tasks.

Rerunning all failed tasks:

cylc trigger <workflow_name>//* 

This will re-trigger all tasks that are in a failed state. You can also target a specific cycle with cylc trigger <workflow_name>//<cycle_point>/*

How to make sure logs are not removed after the experiment finishes:

Hold the finishExperiment task before the experiment completes. 

From the CLI: cylc hold <workflow_name>//<cycle_point>/finishExperiment

Or from the TUI, navigate to finishExperiment and select “Hold”. The finishExperiment task cleans up runtime directories. By holding it, all logs and working files remain accessible for debugging.

To preserve per-cycle working directories, also hold the endCycle task: cylc hold <workflow_name>//<cycle_point>/endCycle_<family_name> 

Our cleanup removes cycle directories that are from the previous two cycles or older during the endCycle task.

Checking task timing and stats:

EWOK's task wrapper (ewoktask.sh) records start and end timestamps for every task in ${EWOK_WORKDIR}/<experiment_name>/stats_<expid>.txt. Each line contains start_time, end_time, nodes, tasks, threads, cycle_date, task_id, and category. This is useful for identifying slow tasks or comparing timing across cycles without needing to parse individual job logs.
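A stats line can be parsed to compute per-task wall time. The field order below comes from the description above; the whitespace separator, the timestamp format, and the example line are assumptions to check against a real stats_<expid>.txt:

```python
from datetime import datetime

FIELDS = ["start_time", "end_time", "nodes", "tasks", "threads",
          "cycle_date", "task_id", "category"]

def parse_stats_line(line):
    """Split one stats line into named fields (whitespace separation assumed)."""
    return dict(zip(FIELDS, line.split()))

def duration_seconds(rec, fmt="%Y-%m-%dT%H:%M:%S"):
    """Task wall time in seconds; the ISO-like timestamp format is an assumption."""
    start = datetime.strptime(rec["start_time"], fmt)
    end = datetime.strptime(rec["end_time"], fmt)
    return (end - start).total_seconds()

# Hypothetical example line for illustration:
rec = parse_stats_line(
    "2020-06-24T18:00:00 2020-06-24T18:05:30 1 4 8 20200624T1800Z getObservations obs")
print(duration_seconds(rec))  # → 330.0
```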
