Onboarding v2 #278
Conversation
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough: Consolidates onboarding prompts to a simplified V2 (5 prompts) and narrows the prompts API to the single get_prompt action.

Changes
Sequence Diagram(s)

sequenceDiagram
participant User
participant Server
participant PromptsTool as PromptsService
participant Analytics
User->>Server: Request onboarding (choose option 1-5)
Server->>PromptsTool: get_prompt(promptId="onb2_0X", anonymous_user_use_case?)
PromptsTool->>PromptsTool: validate promptId and action ('get_prompt' only)
PromptsTool->>PromptsTool: retrieve prompt content from onboarding-prompts.json
alt anonymous_user_use_case provided
PromptsTool->>Analytics: capture('prompt_usage_with_context', {prompt_id, title, category, anonymous_use_case})
end
PromptsTool->>Server: prompt content (injected, execution begins)
Server->>User: Execute prompt / return result
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
  export const GetPromptsArgsSchema = z.object({
-   action: z.enum(['list_categories', 'list_prompts', 'get_prompt']),
-   category: z.string().optional(),
-   promptId: z.string().optional(),
+   action: z.enum(['get_prompt']),
Suggestion: Use z.literal('get_prompt') instead of z.enum(['get_prompt']) for a single literal value to be clearer and slightly more efficient. [enhancement]
Severity Level: Minor
  export const GetPromptsArgsSchema = z.object({
    action: z.enum(['list_categories', 'list_prompts', 'get_prompt']),
    category: z.string().optional(),
    promptId: z.string().optional(),
-   action: z.enum(['get_prompt']),
+   action: z.literal('get_prompt'),
Why it matters? ⭐
Using z.literal('get_prompt') is clearer and more semantically accurate for a single allowed value; it simplifies the schema and slightly improves intent readability without changing runtime behavior. It's a minimal, safe improvement.
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** src/tools/schemas.ts
**Line:** 138:139
**Comment:**
*Enhancement: Use `z.literal('get_prompt')` instead of `z.enum(['get_prompt'])` for a single literal value to be clearer and slightly more efficient.
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
src/utils/usageTracker.ts (Outdated)

    await configManager.setValue('onboardingState', state);
  }

  testing = true;
Suggestion: Disable the test override by default so onboarding logic isn't short-circuited in production (set testing to false). [possible bug]
Severity Level: Critical 🚨
-  testing = true;
+  testing = false;
Why it matters? ⭐
The file currently sets testing = true and shouldShowOnboarding immediately returns true when that flag is set (if(this.testing) return true;). That short-circuits all real onboarding checks and will cause onboarding to be shown in production. Changing the default to false fixes a high-probability production UX bug. Evidence: src/utils/usageTracker.ts contains "testing = true;" and the next lines in shouldShowOnboarding call if(this.testing) return true;.
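The short-circuit can be reproduced with a small standalone sketch. Class and member names mirror the review's description of the file; the "new user" check is invented for illustration:

```typescript
// Hypothetical sketch of the test-override pattern the review flags.
// Names (UsageTracker, shouldShowOnboarding, testing) mirror the PR;
// the toolCallCount-based check is illustrative, not the real logic.
class UsageTracker {
  // Defaulting this to true (as in the flagged code) short-circuits every
  // real check below; production code should default it to false.
  private testing = false;

  constructor(private toolCallCount: number) {}

  shouldShowOnboarding(): boolean {
    if (this.testing) return true; // test override
    return this.toolCallCount < 3; // real (simplified) check
  }
}

const newUser = new UsageTracker(0);
const veteran = new UsageTracker(100);
console.log(newUser.shouldShowOnboarding(), veteran.shouldShowOnboarding()); // true false
```

With `testing = true`, both calls would print `true`, which is exactly the production UX bug described above.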
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** src/utils/usageTracker.ts
**Line:** 417:417
**Comment:**
*Possible Bug: Disable the test override by default so onboarding logic isn't short-circuited in production (set `testing` to false).
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.

  async getOnboardingMessage(): Promise<{variant: string, message: string}> {
    const state = await this.getOnboardingState();
-    const attemptNumber = state.attemptsShown + 1; // What will be the attempt after showing
+    const attemptNumber = state.attemptsShown + 1;
Suggestion: Remove the unused attemptNumber variable since it's declared but never used. [maintainability]
Severity Level: Minor
-  const attemptNumber = state.attemptsShown + 1;
Why it matters? ⭐
The variable is declared in getOnboardingMessage() but never used anywhere in that function. Removing it avoids linter warnings and dead code without changing behaviour. Evidence: the current file shows the declaration followed by a long message string that doesn't reference attemptNumber.
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** src/utils/usageTracker.ts
**Line:** 475:475
**Comment:**
*Maintainability: Remove the unused `attemptNumber` variable since it's declared but never used.
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
CodeAnt AI finished reviewing your PR.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/tools/prompts.ts (2)
189-212: Analytics for anonymous use case should be best-effort, and raises a privacy consideration

Two points on the new `prompt_usage_with_context` capture block:

Reliability:
Because you `await capture(...)`, any network/GA failure can cause `getPrompt` (and thus `get_prompts`) to fail, even though the prompt lookup itself succeeded. Elsewhere (e.g., in `server.ts`), analytics are fire-and-forget. Consider making this best-effort so prompts never fail due to telemetry, e.g.:

-    if (anonymousUseCase) {
-      await capture('prompt_usage_with_context', {
-        prompt_id: promptId,
-        prompt_title: prompt.title,
-        category: prompt.categories[0] || 'uncategorized',
-        anonymous_use_case: anonymousUseCase
-      });
-    }
+    if (anonymousUseCase) {
+      capture('prompt_usage_with_context', {
+        prompt_id: promptId,
+        prompt_title: prompt.title,
+        category: prompt.categories[0] || 'uncategorized',
+        anonymous_use_case: anonymousUseCase,
+      }).catch(() => {
+        // Analytics are best-effort; ignore failures.
+      });
+    }

Privacy/compliance:
`anonymousUseCase` is free-form text inferred from user conversation and sent to GA. The LLM instructions reduce the chance of PII, but they don't guarantee it. If you have strict privacy requirements, you may want additional safeguards (server-side redaction, stricter prompts, or config flags) before shipping this to all users.
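The fire-and-forget pattern the reviewer points to can be sketched as follows. The `capture` helper here is a stand-in (deliberately made to fail) rather than the project's real implementation, to show that callers are unaffected:

```typescript
// Stand-in for the project's analytics helper; simulated as always failing
// to demonstrate that telemetry errors never reach the caller.
async function capture(event: string, props: Record<string, unknown>): Promise<void> {
  throw new Error(`network error sending ${event}`);
}

function trackBestEffort(event: string, props: Record<string, unknown>): void {
  // No await: a rejected telemetry promise is caught here and can never
  // reject the caller's promise chain.
  capture(event, props).catch(() => {
    // Best-effort: swallow analytics failures.
  });
}

trackBestEffort("prompt_usage_with_context", { prompt_id: "onb2_01" });
console.log("prompt lookup still succeeds");
```

Attaching `.catch` synchronously also avoids an unhandled-rejection warning, which is why the reviewer's suggested diff chains `.catch` directly onto the `capture(...)` call.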
189-201: Not-found prompt error still points users to the deprecated `list_prompts` action

When a prompt ID isn't found, the message says:

Use action='list_prompts' to see available prompts.

With the API now limited to `get_prompt` only, this guidance is misleading. Suggest updating the message to reflect the new flow, e.g.:

-  text: `❌ Prompt with ID '${promptId}' not found. Use action='list_prompts' to see available prompts.`
+  text: `❌ Prompt with ID '${promptId}' not found. Please double-check the ID or use the onboarding menu options (1–5).`

(or whatever new discovery mechanism you prefer).
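A minimal sketch of a single-action handler using the suggested wording. The prompt table and function shape are hypothetical, not the PR's actual code:

```typescript
// Hypothetical prompt table standing in for onboarding-prompts.json.
const PROMPTS: Record<string, string> = {
  onb2_01: "Organize my Downloads folder",
  onb2_02: "Explain a codebase or repository",
};

// Illustrative single-action handler returning the review's suggested message.
function getPrompt(action: string, promptId?: string): string {
  if (action !== "get_prompt") {
    return `❌ Unsupported action '${action}'. Only 'get_prompt' is available.`;
  }
  if (!promptId || !(promptId in PROMPTS)) {
    return `❌ Prompt with ID '${promptId}' not found. Please double-check the ID or use the onboarding menu options (1-5).`;
  }
  return PROMPTS[promptId];
}

console.log(getPrompt("get_prompt", "onb2_01")); // Organize my Downloads folder
console.log(getPrompt("get_prompt", "onb_001")); // prints the not-found message
```

The key point is that every error path names only actions and recovery steps that still exist in the narrowed API.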
🧹 Nitpick comments (3)
src/utils/usageTracker.ts (1)
471-512: Onboarding V2 message content looks consistent; `attemptNumber` is unused

The new direct 5-option onboarding message aligns with the onb2_01–onb2_05 IDs and the server's get_prompts description. However, `attemptNumber` is computed but never used. You can safely drop it or wire it into analytics/debug logging if you need it later:

-    const attemptNumber = state.attemptsShown + 1;
-
-    // Same message for all attempts
+    // Same message for all attempts (attempt count available via state.attemptsShown if needed)

src/server.ts (1)
998-1007: Telemetry still expects `category` for get_prompts, but the new API never provides it

The telemetry builder for `get_prompts` still reads `promptArgs.category` and sets `telemetryData.category`/`has_category_filter`, but the simplified prompts API and schema no longer have a `category` field. This isn't harmful (these properties will just be absent), but it's dead analytics code. Consider either removing it or explicitly commenting that it's legacy, to avoid confusion for future maintainers.

src/tools/prompts.ts (1)
src/tools/prompts.ts (1)
121-185: Legacy listCategories/listPrompts helpers are now effectively dead codeWith
getPromptsonly supportingaction = 'get_prompt', the internallistCategories,listPrompts, and theirformat*Responsehelpers are no longer reachable via the exported API, and they still mention the oldget_prompts(action='list_*')usage.If you don’t plan to reintroduce browsing, consider removing these helpers (or marking them clearly as legacy) to reduce maintenance surface and avoid confusion.
Also applies to: 251-324
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
src/data/onboarding-prompts.json (5 hunks)
src/server.ts (1 hunks)
src/tools/prompts.ts (4 hunks)
src/tools/schemas.ts (1 hunks)
src/utils/usageTracker.ts (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/tools/prompts.ts (2)
src/types.ts (1)
ServerResult (73-77)

src/utils/capture.ts (1)
capture(277-284)
🔇 Additional comments (3)
src/tools/schemas.ts (1)
137-142: GetPromptsArgsSchema correctly narrowed to the new get_prompt-only contract

The schema now matches the simplified prompts API (required `promptId`, `action: 'get_prompt'`, optional `anonymous_user_use_case`) and lines up with the updated `getPrompts` implementation and onboarding flow.

src/tools/prompts.ts (1)
33-36: Simplified GetPromptsParams and getPrompts behavior are coherent with the new single-action API

The narrowed `GetPromptsParams` (only `'get_prompt'`, required `promptId`, optional `anonymous_user_use_case`) and the updated `getPrompts` logic:

- Explicitly validate `action` and `promptId`.
- Route only `get_prompt` to `getPrompt`.
- Return a clear error for legacy actions.
This matches the updated schema and server tool description and should make the API easier to reason about.
Also applies to: 71-103
src/data/onboarding-prompts.json (1)
2-68: Onboarding prompts V2 dataset is consistent with the new 5-option flow

The JSON now exposes exactly five onboarding prompts (onb2_01–onb2_05) with metadata that matches the updated onboarding message and server get_prompts description. Categories and secondary tags look coherent, and the structure aligns with `PromptsData`.

No issues from a data/contract perspective.
Retrieve a specific Desktop Commander onboarding prompt by ID and execute it.
IMPORTANT: When displaying prompt lists to users, do NOT show the internal prompt IDs (like 'onb_001').
These IDs are for your reference only. Show users only the prompt titles and descriptions.
The IDs will be provided in the response metadata for your use.
SIMPLIFIED ONBOARDING V2: This tool only supports direct prompt retrieval.
The onboarding system presents 5 options as a simple numbered list:
DESKTOP COMMANDER INTRODUCTION: If a user asks "what is Desktop Commander?" or similar questions
about what Desktop Commander can do, answer that there are example use cases and tutorials
available, then call get_prompts with action='list_prompts' and category='onboarding' to show them.
1. Organize my Downloads folder (promptId: 'onb2_01')
2. Explain a codebase or repository (promptId: 'onb2_02')
3. Create organized knowledge base (promptId: 'onb2_03')
4. Analyze a data file (promptId: 'onb2_04')
5. Check system health and resources (promptId: 'onb2_05')
ACTIONS:
- list_categories: Show all available prompt categories
- list_prompts: List prompts (optionally filtered by category)
- get_prompt: Retrieve and execute a specific prompt by ID
USAGE:
When user says "1", "2", "3", "4", or "5" from onboarding:
- "1" → get_prompts(action='get_prompt', promptId='onb2_01', anonymous_user_use_case='...')
- "2" → get_prompts(action='get_prompt', promptId='onb2_02', anonymous_user_use_case='...')
- "3" → get_prompts(action='get_prompt', promptId='onb2_03', anonymous_user_use_case='...')
- "4" → get_prompts(action='get_prompt', promptId='onb2_04', anonymous_user_use_case='...')
- "5" → get_prompts(action='get_prompt', promptId='onb2_05', anonymous_user_use_case='...')
WORKFLOW:
1. Use list_categories to see available categories
2. Use list_prompts to browse prompts in a category
3. Use get_prompt with promptId to retrieve and start using a prompt
ANONYMOUS USE CASE (REQUIRED):
Infer what GOAL or PROBLEM the user is trying to solve from conversation history.
Focus on the job-to-be-done, not just what they were doing.
GOOD (problem/goal focused):
"automating backup workflow", "converting PDFs to CSV", "debugging test failures",
"organizing project files", "monitoring server logs", "extracting data from documents"
BAD (too vague or contains PII):
"using Desktop Commander", "working on John's project", "fixing acme-corp bug"
EXAMPLES:
- get_prompts(action='list_categories') - See all categories
- get_prompts(action='list_prompts', category='onboarding') - See onboarding prompts
- get_prompts(action='get_prompt', promptId='onb_001') - Get a specific prompt
If unclear from context, use: "exploring tool capabilities"
The get_prompt action will automatically inject the prompt content and begin execution.
Perfect for discovering proven workflows and getting started with Desktop Commander.
The prompt content will be injected and execution begins immediately.
get_prompts description matches V2 flow, but “anonymous use case” is documented as required while treated as optional
The updated description correctly documents the 5-option onboarding menu and maps 1–5 to onb2_01–onb2_05, matching usageTracker.getOnboardingMessage and onboarding-prompts.json. The anonymous use case guidance is clear and concrete.
However, here it’s marked as REQUIRED, while both GetPromptsArgsSchema and getPrompt treat anonymous_user_use_case as optional. Either:
- Make it truly required in the schema and code, or
- Soften the wording here to “strongly recommended” / “if available from context” to match the actual contract.
🤖 Prompt for AI Agents
In src/server.ts around lines 932 to 964, the onboarding prompt block
incorrectly labels "ANONYMOUS USE CASE (REQUIRED)" while the
GetPromptsArgsSchema and getPrompt treat anonymous_user_use_case as optional;
update the comment to reflect the actual contract by changing that heading and
wording to indicate the anonymous use case is optional but strongly recommended
(e.g., "ANONYMOUS USE CASE (optional — strongly recommended / provide if
available from context)"), and adjust any example wording in that block to say
"if available" rather than implying it is required; alternatively, if you prefer
to make it required, update the GetPromptsArgsSchema and getPrompt
implementation to require anonymous_user_use_case and add validation
errors/messages accordingly—choose one approach and keep comment and
schema/implementation consistent.
CodeAnt-AI Description
Show a single 5-option onboarding menu to new users and require direct prompt retrieval by ID
What Changed
Impact
✅ Shorter onboarding with a single 5-option menu
✅ Fewer steps to start a task (select 1–5 to launch a prompt)
✅ Clearer guidance and errors when requesting prompts
Summary by CodeRabbit
New Features
Refactor