Release script improvement, added tracking of release state to resume from step that failed #280
## Walkthrough

The pull request adds state tracking and resume functionality to the release script, enabling it to recover from failures and continue from the last incomplete step without manual intervention. Documentation updates describe these new smart state tracking features with examples.
## Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant Script as publish-release.cjs
    participant StateFile as .release-state.json
    participant Build as Build/Test Pipeline
    participant Git as Git Operations
    participant NPM as NPM Registry
    participant MCP as MCP Server
    User->>Script: Run script (normal or after failure)
    Script->>StateFile: Load existing state
    alt State exists (resume)
        Script->>Script: Display resume info (last step, time, completed steps)
        StateFile-->>Script: Previous state loaded
    end
    loop For each release step
        Script->>Script: Check if step completed
        alt Step not completed
            Script->>Build: Execute step (bump/build/test/git/npm/mcp)
            Build-->>Script: Step success
            Script->>StateFile: Mark step complete
        else Step already completed
            Script->>Script: Skip step (resume mode)
        end
    end
    alt All steps complete
        Script->>StateFile: Clear state (success)
        Script->>User: Print success summary with links
    else Failure occurs
        Script->>StateFile: Save current state
        Script->>User: Display error + resume instruction
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 2
🧹 Nitpick comments (5)
scripts/publish-release.cjs (5)
**21-23: State helpers are clear and cohesive; consider improving behavior on corrupted state files**

The `STATE_FILE` constant and the `loadState`/`saveState`/`clearState` helpers form a clean, easy-to-reason-about persistence layer, and the use of JSON with pretty printing looks good.

One small improvement: on a JSON parse failure in `loadState`, you currently just warn and return `null`, leaving the corrupted file in place. That will cause the warning to repeat every run until something (or the user via `--clear-state`) deletes the file. Consider either deleting/renaming the bad state file automatically or extending the warning to explicitly suggest `--clear-state` so recovery is self-service and obvious.

Also applies to: 55-88
**221-225: Good resume UX; minor edge case where "resume" is effectively a fresh start**

The `isResume` handling and the resume banner (start time, last step, completed steps, target version) are well done and give clear context when rerunning.

Note, though, that on a first-run failure before any `markStepComplete` call (for example, if `npm run bump` fails), no state has actually been written yet, so the next run will be a fresh start even though the docs frame everything as "just rerun to resume." This is mostly harmless but worth being aware of; see also the catch-block comment below about persisting state on failure to make that guarantee true.

Also applies to: 234-247, 254-271
287-307: Per‑step state checks and skip logic are solid; be careful when combining skip flags with resumeThe pattern
const shouldSkipX = options.skipX || isStepComplete(state, '<step>')plus the tri‑state messaging (“already completed ✓” vs “skipped (manual override)”) reads well and makes the behavior explicit.One subtle corner: if you run once with a skip flag (e.g.
--skip-bumpso only build/tests run and markbuildcomplete) and then rerun without that flag, the script will bump the version but still skip the build/tests step because state saysbuildis complete, even though that build was for the pre‑bump version. That’s an advanced edge case, but it might be worth either documenting that mixing skip flags with resume can cause this, or encoding version into thebuildstep state in the future so mismatched builds can be detected.Also applies to: 310-327, 331-341, 345-390
432-483: MCP publish step is thorough; consider aligning error reporting with the actual failing stepThe MCP step nicely checks for
mcp-publisher, provides explicit remediation (install, login, validation hints), and only marks themcpstep complete after a successful publish plus optional verification.However, when
mcp-publisher publishthrows, you rethrow and the outer catch logsRelease failed at step: ${state.lastStep || 'startup'}, which represents the last completed step (likelynpm), not the MCP step that actually failed. That can be misleading when reading logs. Tracking acurrentStepstring (e.g.'mcp') and logging that instead would make the final error summary line match what actually failed and also better match the examples in the docs.
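A minimal sketch of the `currentStep` idea; `runStep` and `failureLabel` are hypothetical helpers introduced for illustration, not functions in the script:

```javascript
// Set currentStep before a step runs so the catch block can report the step
// that actually failed, not the last one that completed.
function runStep(state, name, fn) {
  state.currentStep = name; // what is running right now
  fn();                     // may throw
  state.lastStep = name;    // only reached on success
}

// Prefer the in-flight step when reporting a failure.
function failureLabel(state) {
  return state.currentStep || state.lastStep || 'startup';
}
```

With this, a failure inside the MCP step reports `mcp` even though `lastStep` is still `npm`.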
**486-488: Success path and state clearing look good; ensure the state guarantee holds on failures too**

On full success you call `clearState()` and then print a clear summary (NPM, MCP Registry, GitHub tag, next steps), which is exactly what you want after a clean release, and it also keeps `.release-state.json` from lingering.

To fully align with the documented promise that state is saved on failure and you can always "just rerun to resume," it would be good to also explicitly persist `state` in the error path (see next comment) so the behavior is symmetric: state is written on every failure and deleted on every full success.

Also applies to: 490-504
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)

- `PUBLISH.md` (3 hunks)
- `scripts/publish-release.cjs` (12 hunks)
🔇 Additional comments (3)
scripts/publish-release.cjs (2)
**111-125: CLI parsing and help text look consistent and user-friendly**

`parseArgs` and `showHelp` are aligned: every option you parse (`--clear-state`, `--mcp-only`, skip flags, `--dry-run`, etc.) is documented, and the unknown-option path now correctly references `publish-release.cjs`.

The explicit `State Management` section and `--clear-state` example in `showHelp` match the new behavior and should make the resume semantics easy to discover.

Also applies to: 127-176, 181-210
**393-428: NPM publish flow and resume behavior look correct**

The NPM step correctly:

- Checks `npm whoami` with `ignoreError` and gives a clear remediation path.
- Avoids calling `markStepComplete` in dry-run mode.
- Marks `npm` complete only after a successful `npm publish` and then performs a verification check.

This should make reruns after login or transient publish issues behave as expected, resuming at the NPM step without redoing earlier steps.
PUBLISH.md (1)

**7-24: Top-level automated release description and `--clear-state` example are consistent with the script**

The updated intro clearly communicates automatic state tracking and the new `--clear-state` option, and the example `node scripts/publish-release.cjs --clear-state` matches the script's CLI.

This gives readers an accurate high-level mental model of the new capabilities.
> ### ✨ Smart State Tracking
>
> The script automatically tracks completed steps and **resumes from failures**:
>
> 1. **Automatic Resume**: If any step fails, just run the script again - it will skip completed steps and continue from where it failed
> 2. **No Manual Flags**: No need to remember which `--skip-*` flags to use
> 3. **Clear State**: Use `--clear-state` to reset and start from the beginning
> 4. **Transparent**: Shows which steps were already completed when resuming
>
> **Example workflow:**
> ```bash
> # Start release - tests fail
> npm run release
> # ❌ Step 2/6 failed: Tests failed
>
> # Fix the tests, then just run again
> npm run release
> # ✓ Step 1/6: Version bump already completed
> # ✓ Step 2/6: Running tests... (continues from here)
> ```
**“Smart State Tracking” example output doesn’t quite match current script logging**
The “Smart State Tracking” bullets line up well with the implementation (automatic resume, no need for manual skip flags in normal use, clear‑state option, transparency on resumed steps).
The sample workflow, though, shows output like # ❌ Step 2/6 failed: Tests failed and # ✓ Step 2/6: Running tests... (continues from here), while the actual script logs a generic Release failed at step: <lastStep> followed by the underlying error message, and the step labels are Step 2/6: Running tests (includes build)... with a success line rather than the exact strings in the example.
Consider either:
- Updating the script’s logging to emit messages closer to the documented example (e.g. per‑step failure lines and step names), or
- Adjusting the example output here to mirror the current script output (including the “Release failed at step: …” summary and the resume banner).
Either direction keeps the docs honest and avoids confusion when someone compares real runs to this example.
Also applies to: 47-56
🤖 Prompt for AI Agents
In PUBLISH.md around lines 26-45 and 47-56, the example output for “Smart State
Tracking” does not match the script’s real logging; update the documentation
examples to mirror the actual script output (or vice versa) — specifically
replace the shown per-step emoji lines with the script’s real messages such as
the "Release failed at step: <lastStep>" summary, the actual step label formats
like "Step 2/6: Running tests (includes build)..." and the resume banner text;
ensure both example blocks consistently reflect the current log wording and
formatting so readers see output that matches a real run.
```js
} catch (error) {
  console.log('');
  printError('Release failed!');
  printError('Release failed at step: ' + (state.lastStep || 'startup'));
  printError(error.message);
  console.log('');
  printInfo('State has been saved. Simply run the script again to resume from where it failed.');
  printInfo('Use --clear-state to start over from the beginning.');
  console.log('');
  process.exit(1);
```
**“State has been saved” message can be incorrect; persist state in the catch block**

In the catch block you log:

```text
Release failed at step: ${state.lastStep || 'startup'}
State has been saved. Simply run the script again to resume from where it failed.
```

But if the failure happens before any `markStepComplete` call (e.g. during the version bump command on a fresh run), no state file has ever been written and the message is inaccurate: there is nothing new to resume from.
A simple fix is to ensure you write the current in-memory state before exiting, e.g.:

```diff
 } catch (error) {
   console.log('');
-  printError('Release failed at step: ' + (state.lastStep || 'startup'));
-  printError(error.message);
-  console.log('');
-  printInfo('State has been saved. Simply run the script again to resume from where it failed.');
-  printInfo('Use --clear-state to start over from the beginning.');
-  console.log('');
+  printError('Release failed at step: ' + (state.lastStep || 'startup'));
+  printError(error.message);
+  console.log('');
+  if (state) {
+    // Persist latest known state so resume works even after early failures
+    saveState(state);
+  }
+  printInfo('State has been saved. Simply run the script again to resume from where it failed.');
+  printInfo('Use --clear-state to start over from the beginning.');
+  console.log('');
   process.exit(1);
 }
```

This makes the log message truthful in all cases and guarantees a state file exists after any failure inside the try.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In scripts/publish-release.cjs around lines 513 to 521, the catch block logs
"State has been saved" even when no state file was written; before logging and
exiting, persist the current in-memory state to disk so the message is truthful.
Call the existing state-write helper (e.g. writeState/saveState/writeStateFile)
with the current state before printing the info lines, wrap that call in its own
small try/catch so failures to persist do not mask the original error (log a
secondary warning if save fails), and only then print the "State has been saved"
guidance and exit with code 1.
User description
… from step that failed
CodeAnt-AI Description
Add state tracking and automatic resume to release script
What Changed
- `--clear-state` option for the automated release workflow

Impact
- ✅ Fewer repeated release runs
- ✅ Clearer guidance for release recovery
- ✅ Shorter failure recovery time
Summary by CodeRabbit
New Features
- `--clear-state` option to reset release state when needed.

Documentation