Test PR for Log Warnings #2924
base: develop
Conversation
…d build env to remove gnu from stack (was ufs-community#2842) (ufs-community#2867)
* UFSWM - update ufs_noaacloud.intel.lua module file
* UFSWM - replace icplocn2atm with use_oceanuv in scripts and tests
* CMEPS - update CCPP metadata and type defs for use_oceanuv
* FV3 -
  * ccpp-physics - replace instances of icplocn2atm with use_oceanuv
  * atmos_cubed_sphere - replace instances of icplocn2atm with use_oceanuv
* NOAHMP - replace icplocn2atm with use_oceanuv
…ther-model into feature/log-warning
…ther-model into feature/log-warning
@DeniseWorthen I've updated this so that only warning/failing tests are reported. At the bottom, I have the number of tests passing on each platform, but that could easily be inverted to show how many are warning/failing for runtime/memory on each platform. I could also report percentages or a decimal value (0 to 1) if preferred. In theory, I could add two rows, one with warning counts and one with failing counts. There are lots of options, so I'd like to hear what you think would be most useful. I can stick to your original idea if that's what you prefer, but I wanted to propose options! Current output here.

I also added a column that shows the number of platforms on which a test is passing; a sketch of this aggregation follows below. A row of mostly red would also be cause for concern, since it suggests an issue with the specific test rather than with a particular platform.

For Jong's plots, I believe they are only for Ursa, and it would be a lot of plots if we did one for each machine. Should we just use Ursa as a reference machine for the plots? Or do you think it would be useful to have plots for every test on every machine?
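For context, a minimal sketch of the aggregation being discussed, assuming the regression-test logs have already been parsed into a per-test, per-platform status table. The names (`results`, `Status`) and the sample tests and platforms are illustrative assumptions, not the workflow's actual code:

```python
from enum import Enum

class Status(Enum):
    PASS = "pass"
    WARN = "warn"
    FAIL = "fail"

# Hypothetical layout: results[test][platform] -> Status
results: dict[str, dict[str, Status]] = {
    "control_c48": {"hera": Status.PASS, "ursa": Status.WARN},
    "cpld_control": {"hera": Status.FAIL, "ursa": Status.PASS},
}

platforms = sorted({p for row in results.values() for p in row})

# Bottom row: count (or percentage) of passing tests per platform.
# Inverting this to warning/failing counts is a one-line change.
for platform in platforms:
    statuses = [row[platform] for row in results.values() if platform in row]
    n_pass = sum(s is Status.PASS for s in statuses)
    print(f"{platform}: {n_pass}/{len(statuses)} passing "
          f"({n_pass / len(statuses):.0%})")

# Extra column: number of platforms on which each test passes; a row of
# mostly red (a low count here) points at the test, not a platform.
for test, row in results.items():
    n = sum(s is Status.PASS for s in row.values())
    print(f"{test}: passing on {n} platform(s)")
```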
Commit Queue Requirements:
test_changes.list indicates which tests, if any, are changed by this PR. Commit test_changes.list, even if it is empty.

Description:
This PR is currently being used to test a GitHub Actions workflow that will hopefully resolve Issue #2527. Currently, the scorecard can be viewed by clicking on "Regression Resource Check / write-results (pull_request)" at the bottom of the PR once it has passed. Then, on the left-hand side of the page that opens, click Summary. Scroll down, and click on "Runtime Results Summary" and/or "Memory Results Summary." See here, for example.
The scorecard currently:
* uses the develop branch to calculate the mean and standard deviation for runtime and memory per test
* compares this PR's results against develop.
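As a rough illustration of the statistical check described above, here is a sketch only: the data values, the 2-sigma/3-sigma thresholds, and names such as `baseline_times` and `classify` are assumptions for illustration, not the workflow's actual logic.

```python
import statistics

def classify(value: float, baseline: list[float]) -> str:
    """Compare a PR's measurement against the develop-branch history."""
    mean = statistics.mean(baseline)
    std = statistics.stdev(baseline)
    # Thresholds are illustrative; the real workflow may differ.
    if abs(value - mean) > 3 * std:
        return "fail"
    if abs(value - mean) > 2 * std:
        return "warn"
    return "pass"

# Runtimes in seconds, as if gathered from develop-branch logs (made up).
baseline_times = [112.0, 108.5, 110.2, 111.7, 109.9]
print(classify(118.0, baseline_times))  # prints "fail": >3 sigma from the mean
```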
In progress:
Commit Message:
Priority:
Git Tracking
UFSWM:
Sub component Pull Requests:
UFSWM Blocking Dependencies:
Documentation:
Changes
Regression Test Changes (Please commit test_changes.list):
Input data Changes:
Library Changes/Upgrades:
Testing Log: