Description
What is wrong?
Doing a rocotocheck of gdas_cleanup and enkfgdas_cleanup, I see:
dependencies
  AND is not satisfied
    enkfgdas_earc_vrfy of cycle 202208151200 is SUCCEEDED
    SOME is satisfied
      enkfgdas_earc_tars_00 of cycle 202208151200 is SUCCEEDED
      enkfgdas_earc_tars_01 of cycle 202208151200 is SUCCEEDED
      enkfgdas_earc_tars_02 of cycle 202208151200 is SUCCEEDED
      enkfgdas_earc_tars_03 of cycle 202208151200 is SUCCEEDED
      enkfgdas_earc_tars_04 of cycle 202208151200 is SUCCEEDED
      enkfgdas_earc_tars_05 of cycle 202208151200 is SUCCEEDED
      enkfgdas_earc_tars_06 of cycle 202208151200 is SUCCEEDED
      enkfgdas_earc_tars_07 of cycle 202208151200 is SUCCEEDED
      enkfgdas_earc_tars_08 of cycle 202208151200 is SUCCEEDED
    OR is not satisfied
      gfs_fcst_seg0 of cycle 202208151800 is not SUCCEEDED
      NOT is not satisfied
        cycle 202208151800 exists
and
dependencies
  AND is satisfied
    SOME is satisfied
      gdas_atmos_prod_f000 of cycle 202208160000 is SUCCEEDED
      gdas_atmos_prod_f001 of cycle 202208160000 is SUCCEEDED
      gdas_atmos_prod_f002 of cycle 202208160000 is SUCCEEDED
      gdas_atmos_prod_f003 of cycle 202208160000 is SUCCEEDED
      gdas_atmos_prod_f004 of cycle 202208160000 is SUCCEEDED
      gdas_atmos_prod_f005 of cycle 202208160000 is SUCCEEDED
      gdas_atmos_prod_f006 of cycle 202208160000 is SUCCEEDED
      gdas_atmos_prod_f007 of cycle 202208160000 is SUCCEEDED
      gdas_atmos_prod_f008 of cycle 202208160000 is SUCCEEDED
      gdas_atmos_prod_f009 of cycle 202208160000 is SUCCEEDED
    /gpfs/f6/gfs-cpu/world-shared/role.glopara/RETRO_GFSv17/comroot/retrov17_01_stream1a/gdas.20220816/00//model/atmos/history/gdas.t00z.atm.f009.nc is available
    /gpfs/f6/drsa-precip3/world-shared/role.glopara/dump/gdas.20220816/06/atmos/gdas.t06z.updated.status.tm00.bufr_d is available
    SOME is satisfied
      gdas_fcst_seg0 of cycle 202208160000 is SUCCEEDED
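The clause blocking enkfgdas_cleanup is the OR at the end of the first block: it is only satisfied once gfs_fcst_seg0 of the following cycle has succeeded, or once the following cycle does not exist at all. As a rough reconstruction (my own sketch using standard rocoto dependency tags; the actual layout in the generated xml may differ), that clause would look something like:

  <or>
    <!-- next cycle's gfs forecast segment has finished... -->
    <taskdep task="gfs_fcst_seg0" cycle_offset="06:00:00"/>
    <!-- ...or there is no next cycle at all -->
    <not>
      <cycleexistdep cycle_offset="06:00:00"/>
    </not>
  </or>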
However, in the xml:
<cycledef group="gdas_half">202208150600 202208150600 06:00:00</cycledef>
<cycledef group="gdas">202208151200 202210151800 06:00:00</cycledef>
<cycledef group="gfs">202208300000 202210151800 06:00:00</cycledef>
<cycledef group="gfs_seq">202208300600 202210151800 06:00:00</cycledef>
we see that the first gfs forecast isn't until 202208300600.
What should have happened?
Ideally, if a gfs forecast is not yet requested for a cycle, it would not be a dependency of the cleanup jobs.
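One way this could look in the generated xml, as a rough sketch only (assuming the dependency is assembled per cycle by the workflow generator; the threshold and task names below are illustrative), is to drop the gfs clause entirely for cycles that fall before the first gfs cycle, so the cleanup dependency reduces to the archive tasks:

  <dependency>
    <and>
      <taskdep task="enkfgdas_earc_vrfy"/>
      <some threshold="1.0">
        <taskdep task="enkfgdas_earc_tars_00"/>
        <!-- ...remaining enkfgdas_earc_tars members... -->
      </some>
      <!-- no gfs_fcst_seg0 / cycleexistdep clause while the cycle predates the gfs cycledef -->
    </and>
  </dependency>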
What machines are impacted?
All or N/A
What global-workflow hash are you using?
develop hash 22926af
Steps to reproduce
This particular instance is using https://github.com/NOAA-EMC/global-workflow/blob/develop/dev/ci/cases/gfsv17/retrov17_stream1a.yaml
Additional information
No response
Do you have a proposed solution?
An alternative solution would be to put the spin-up jobs in a separate xml, or we can just boot these jobs manually.
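For the separate-xml option, a minimal illustration (the group name and exact window below are hypothetical; the idea is just to confine the gdas-only spin-up period to its own cycledef in its own xml) could be:

  <cycledef group="gdas_spinup">202208151200 202208291800 06:00:00</cycledef>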