
Conversation


@wonderwhy-er wonderwhy-er commented Nov 14, 2025

CodeAnt-AI Description

Show a single 5-option onboarding menu to new users and require direct prompt retrieval by ID

What Changed

  • New users now receive a mandatory onboarding footer containing an exact 5-item menu (1–Organize Downloads, 2–Explain codebase, 3–Create knowledge base, 4–Analyze data file, 5–Check system health). Agents must answer the user's question and then include that menu; users reply with 1–5 to start a task.
  • The prompts tool was simplified to only accept direct prompt retrieval: calls must use get_prompt with a promptId (and an optional short anonymous use-case). Any other actions now return a deprecation/error message with guidance.
  • Onboarding prompts were consolidated and renamed (dataset bumped to v2.0.0); selecting 1–5 maps to specific prompt IDs (onb2_01 → onb2_05) and the chosen prompt is injected and executed immediately.
  • The system will accept an optional anonymous use-case string when launching a prompt to provide more targeted results; onboarding is shown immediately for first-time users and repeats after a short delay (~2 minutes).
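The call shape described above can be sketched as follows; the helper name `buildOnboardingCall` is illustrative and not part of the codebase, but the argument names match the simplified V2 contract:

```typescript
// Sketch of the simplified V2 contract: one action, a required promptId,
// and an optional anonymous use-case string.
type GetPromptsArgs = {
  action: 'get_prompt';              // the only action accepted in V2
  promptId: string;                  // 'onb2_01' through 'onb2_05'
  anonymous_user_use_case?: string;  // short, PII-free goal description
};

function buildOnboardingCall(option: 1 | 2 | 3 | 4 | 5, useCase?: string): GetPromptsArgs {
  return {
    action: 'get_prompt',
    promptId: `onb2_0${option}`,
    ...(useCase ? { anonymous_user_use_case: useCase } : {}),
  };
}
```

A user reply of "3" would map to `buildOnboardingCall(3, 'organizing project files')`, which produces the `get_prompt` call for `onb2_03`.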

Impact

✅ Shorter onboarding with a single 5-option menu
✅ Fewer steps to start a task (select 1–5 to launch a prompt)
✅ Clearer guidance and errors when requesting prompts

💡 Usage Guide

Checking Your Pull Request

Every time you make a pull request, our system automatically looks through it. We check for security issues, mistakes in how you're setting up your infrastructure, and common code problems. We do this to make sure your changes are solid and won't cause any trouble later.

Talking to CodeAnt AI

Got a question or need a hand with something in your pull request? You can easily get in touch with CodeAnt AI right here. Just type the following in a comment on your pull request, and replace "Your question here" with whatever you want to ask:

@codeant-ai ask: Your question here

This lets you have a chat with CodeAnt AI about your pull request, making it easier to understand and improve your code.

Example

@codeant-ai ask: Can you suggest a safer alternative to storing this secret?

Preserve Org Learnings with CodeAnt

You can record team preferences so CodeAnt AI applies them in future reviews. Reply directly to the specific CodeAnt AI suggestion (in the same thread) and replace "Your feedback here" with your input:

@codeant-ai: Your feedback here

This helps CodeAnt AI learn and adapt to your team's coding style and standards.

Example

@codeant-ai: Do not flag unused imports.

Retrigger review

Ask CodeAnt AI to review the PR again by typing:

@codeant-ai: review

Check Your Repository Health

To analyze the health of your code repository, visit our dashboard at https://app.codeant.ai. This tool helps you identify potential issues and areas for improvement in your codebase, ensuring your repository maintains high standards of code health.

Summary by CodeRabbit

  • New Features

    • Simplified onboarding now shows a direct 5-option numbered menu.
    • New onboarding prompts: "Explain codebase or repository" and "Analyze my data file"; several prompts rewritten and icons/labels updated.
  • Refactor

    • Streamlined prompt retrieval and onboarding messaging for immediate execution.
    • Enhanced analytics to capture onboarding usage and anonymous use-case context.


codeant-ai bot commented Nov 14, 2025

CodeAnt AI is reviewing your PR.


Thanks for using CodeAnt! 🎉

We're free for open-source projects. If you're enjoying it, help us grow by sharing.



coderabbitai bot commented Nov 14, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Consolidates onboarding prompts to a simplified V2 (5 prompts), narrows the prompts API to only get_prompt with anonymous use-case analytics, updates the get_prompts tool description to a direct 5-option menu, and replaces the onboarding message with a single static 5-option variant.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Onboarding prompts data**<br>`src/data/onboarding-prompts.json` | Top-level metadata bumped to v2.0.0 and description updated; prompt set reduced to 5 entries with IDs renamed from `onb_*` to `onb2_*`; several prompts removed; remaining prompts updated (titles, descriptions, prompt text, icons, secondaryTag). |
| **Prompts API & schema**<br>`src/tools/prompts.ts`, `src/tools/schemas.ts` | Removed list_categories/list_prompts actions; action now only 'get_prompt'; promptId required; added optional anonymous_user_use_case/anonymousUseCase plumbing; getPrompt signature updated to accept anonymous use-case and capture analytics; legacy actions return deprecation error. |
| **Server tool description**<br>`src/server.ts` | get_prompts tool description rewritten from multi-step browsing workflow to a direct onboarding flow with explicit USAGE mapping for five promptIds (onb2_01–onb2_05) and an ANONYMOUS USE CASE section; examples/workflow removed. |
| **Usage tracking / onboarding message**<br>`src/utils/usageTracker.ts` | Onboarding message simplified to a single direct_5option_v2 variant containing the five-option menu; per-attempt branching removed; message construction consolidated. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant User
  participant Server
  participant PromptsTool as PromptsService
  participant Analytics

  User->>Server: Request onboarding (choose option 1-5)
  Server->>PromptsTool: get_prompt(promptId="onb2_0X", anonymous_user_use_case?)
  PromptsTool->>PromptsTool: validate promptId and action ('get_prompt' only)
  PromptsTool->>PromptsTool: retrieve prompt content from onboarding-prompts.json
  alt anonymous_user_use_case provided
    PromptsTool->>Analytics: capture('prompt_usage_with_context', {prompt_id, title, category, anonymous_use_case})
  end
  PromptsTool->>Server: prompt content (injected, execution begins)
  Server->>User: Execute prompt / return result
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Verify prompt ID/name/content migration and that removed prompts are not referenced elsewhere.
  • Confirm schema/interface changes align with callers and that promptId requirement is enforced.
  • Validate analytics capture payload and that anonymous use-case flows through correctly.
  • Check server tool description changes don't affect tool routing or tests.

Possibly related PRs

Suggested reviewers

  • serg33v

Poem

🐰 Five hops, five prompts, a tidy new trail,
I nudged the menu and winked at the mail,
Anonymous whispers get counted with care,
New IDs, new icons — fresh breeze in the air! ✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title 'Onboarding v2' directly corresponds to the main objective and describes the primary change: implementing a new mandatory onboarding flow (v2) with a simplified 5-item menu. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%. |
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch onboarding_v2

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4de4755 and bf13278.

📒 Files selected for processing (2)
  • src/server.ts (1 hunks)
  • src/utils/usageTracker.ts (1 hunks)

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@codeant-ai codeant-ai bot added the size:L This PR changes 100-499 lines, ignoring generated files label Nov 14, 2025
Comment on lines 138 to +139

```diff
 export const GetPromptsArgsSchema = z.object({
-  action: z.enum(['list_categories', 'list_prompts', 'get_prompt']),
-  category: z.string().optional(),
-  promptId: z.string().optional(),
+  action: z.enum(['get_prompt']),
```

Suggestion: Use z.literal('get_prompt') instead of z.enum(['get_prompt']) for a single literal value to be clearer and slightly more efficient. [enhancement]

Severity Level: Minor ⚠️

Suggested change

```diff
-  action: z.enum(['get_prompt']),
+  action: z.literal('get_prompt'),
```
Why it matters? ⭐

Using z.literal('get_prompt') is clearer and more semantically accurate for a single allowed value; it simplifies the schema and slightly improves intent readability without changing runtime behavior. It's a minimal, safe improvement.
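The equivalence can be illustrated with plain-TypeScript stand-ins (these are hypothetical helpers, not the actual zod API — zod's `z.enum` and `z.literal` do the real validation):

```typescript
// Stand-ins for the two zod forms: both accept exactly the single value
// 'get_prompt', but the literal check states that intent directly.
const isActionViaEnum = (v: unknown): boolean =>
  ['get_prompt'].includes(v as string); // mirrors z.enum(['get_prompt'])

const isActionViaLiteral = (v: unknown): boolean =>
  v === 'get_prompt';                   // mirrors z.literal('get_prompt')
```

Runtime behavior is identical; the literal form is simply the more direct expression of a single allowed value.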

Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** src/tools/schemas.ts
**Line:** 138:139
**Comment:**
	*Enhancement: Use `z.literal('get_prompt')` instead of `z.enum(['get_prompt'])` for a single literal value to be clearer and slightly more efficient.

Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.

```typescript
  await configManager.setValue('onboardingState', state);
}

testing = true;
```

Suggestion: Disable the test override by default so onboarding logic isn't short-circuited in production (set testing to false). [possible bug]

Severity Level: Critical 🚨

Suggested change

```diff
-testing = true;
+testing = false;
```
Why it matters? ⭐

The file currently sets testing = true and shouldShowOnboarding immediately returns true when that flag is set (if(this.testing) return true;). That short-circuits all real onboarding checks and will cause onboarding to be shown in production. Changing the default to false fixes a high-probability production UX bug. Evidence: src/utils/usageTracker.ts contains "testing = true;" and the next lines in shouldShowOnboarding call if(this.testing) return true;.
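The short-circuit is easy to see in a minimal sketch; the class and method names here are illustrative, not the real usageTracker implementation:

```typescript
// When the test override is on, every real onboarding check is bypassed.
class OnboardingGate {
  constructor(private testing: boolean) {}

  shouldShowOnboarding(alreadyCompleted: boolean): boolean {
    if (this.testing) return true; // test override short-circuits all checks
    return !alreadyCompleted;      // real logic only runs when testing=false
  }
}
```

With `testing = true`, even a user who has completed onboarding is shown it again on every call, which is the production bug flagged above.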

Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** src/utils/usageTracker.ts
**Line:** 417:417
**Comment:**
	*Possible Bug: Disable the test override by default so onboarding logic isn't short-circuited in production (set `testing` to false).

Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.

```diff
 async getOnboardingMessage(): Promise<{variant: string, message: string}> {
   const state = await this.getOnboardingState();
-  const attemptNumber = state.attemptsShown + 1; // What will be the attempt after showing
+  const attemptNumber = state.attemptsShown + 1;
```

Suggestion: Remove the unused attemptNumber variable since it's declared but never used. [maintainability]

Severity Level: Minor ⚠️

Suggested change

```diff
-const attemptNumber = state.attemptsShown + 1;
```
Why it matters? ⭐

The variable is declared in getOnboardingMessage() but never used anywhere in that function. Removing it avoids linter warnings and dead code without changing behaviour. Evidence: the current file shows the declaration followed by a long message string that doesn't reference attemptNumber.

Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** src/utils/usageTracker.ts
**Line:** 475:475
**Comment:**
	*Maintainability: Remove the unused `attemptNumber` variable since it's declared but never used.

Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.


codeant-ai bot commented Nov 14, 2025

CodeAnt AI finished reviewing your PR.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/tools/prompts.ts (2)

189-212: Analytics for anonymous use case should be best-effort and raises a privacy consideration

Two points on the new prompt_usage_with_context capture block:

  1. Reliability:
    Because you await capture(...), any network/GA failure can cause getPrompt (and thus get_prompts) to fail, even though the prompt lookup itself succeeded. Elsewhere (e.g., in server.ts), analytics are fire-and-forget.

    Consider making this best-effort so prompts never fail due to telemetry, e.g.:

```diff
-  if (anonymousUseCase) {
-    await capture('prompt_usage_with_context', {
-      prompt_id: promptId,
-      prompt_title: prompt.title,
-      category: prompt.categories[0] || 'uncategorized',
-      anonymous_use_case: anonymousUseCase
-    });
-  }
+  if (anonymousUseCase) {
+    capture('prompt_usage_with_context', {
+      prompt_id: promptId,
+      prompt_title: prompt.title,
+      category: prompt.categories[0] || 'uncategorized',
+      anonymous_use_case: anonymousUseCase,
+    }).catch(() => {
+      // Analytics are best-effort; ignore failures.
+    });
+  }
```
  2. Privacy/compliance:
    anonymousUseCase is free-form text inferred from user conversation and sent to GA. The LLM instructions reduce the chance of PII, but they don’t guarantee it. If you have strict privacy requirements, you may want additional safeguards (server-side redaction, stricter prompts, or config flags) before shipping this to all users.
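The fire-and-forget pattern suggested in point 1 can be sketched as follows; the `capture` stub simulates an outage and is not the real `src/utils/capture.ts` implementation:

```typescript
// Telemetry rejections are swallowed so the prompt lookup always succeeds.
async function capture(event: string, props: Record<string, string>): Promise<void> {
  throw new Error(`simulated analytics outage for ${event}`);
}

function getPromptSketch(promptId: string, anonymousUseCase?: string): string {
  if (anonymousUseCase) {
    // Not awaited: a rejection is absorbed here rather than propagated to the caller.
    capture('prompt_usage_with_context', { prompt_id: promptId }).catch(() => {});
  }
  return `content of ${promptId}`; // the lookup result, unaffected by telemetry
}
```

Even though `capture` always rejects in this sketch, the caller still gets the prompt content, which is the behavior the review asks for.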


189-201: Not-found prompt error still points users to the deprecated list_prompts action

When a prompt ID isn’t found, the message says:

Use action='list_prompts' to see available prompts.

With the API now limited to get_prompt only, this guidance is misleading.

Suggest updating the message to reflect the new flow, e.g.:

```diff
-        text: `❌ Prompt with ID '${promptId}' not found. Use action='list_prompts' to see available prompts.`
+        text: `❌ Prompt with ID '${promptId}' not found. Please double-check the ID or use the onboarding menu options (1–5).`
```

(or whatever new discovery mechanism you prefer).

🧹 Nitpick comments (3)
src/utils/usageTracker.ts (1)

471-512: Onboarding V2 message content looks consistent; attemptNumber is unused

The new direct 5-option onboarding message aligns with the onb2_01–onb2_05 IDs and the server’s get_prompts description. However, attemptNumber is computed but never used.

You can safely drop it or wire it into analytics/debug logging if you need it later:

```diff
-    const attemptNumber = state.attemptsShown + 1;
-
-    // Same message for all attempts
+    // Same message for all attempts (attempt count available via state.attemptsShown if needed)
```
src/server.ts (1)

998-1007: Telemetry still expects category for get_prompts, but the new API never provides it

The telemetry builder for get_prompts still reads promptArgs.category and sets telemetryData.category / has_category_filter, but the simplified prompts API and schema no longer have a category field.

This isn’t harmful (these properties will just be absent), but it’s dead analytics code. Consider either removing it or explicitly commenting that it’s legacy, to avoid confusion for future maintainers.

src/tools/prompts.ts (1)

121-185: Legacy listCategories/listPrompts helpers are now effectively dead code

With getPrompts only supporting action = 'get_prompt', the internal listCategories, listPrompts, and their format*Response helpers are no longer reachable via the exported API, and they still mention the old get_prompts(action='list_*') usage.

If you don’t plan to reintroduce browsing, consider removing these helpers (or marking them clearly as legacy) to reduce maintenance surface and avoid confusion.

Also applies to: 251-324

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3b4c089 and 4de4755.

📒 Files selected for processing (5)
  • src/data/onboarding-prompts.json (5 hunks)
  • src/server.ts (1 hunks)
  • src/tools/prompts.ts (4 hunks)
  • src/tools/schemas.ts (1 hunks)
  • src/utils/usageTracker.ts (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/tools/prompts.ts (2)
src/types.ts (1)
  • ServerResult (73-77)
src/utils/capture.ts (1)
  • capture (277-284)
🔇 Additional comments (3)
src/tools/schemas.ts (1)

137-142: GetPromptsArgsSchema correctly narrowed to the new get_prompt-only contract

The schema now matches the simplified prompts API (required promptId, action: 'get_prompt', optional anonymous_user_use_case) and lines up with the updated getPrompts implementation and onboarding flow.

src/tools/prompts.ts (1)

33-36: Simplified GetPromptsParams and getPrompts behavior are coherent with the new single-action API

The narrowed GetPromptsParams (only 'get_prompt', required promptId, optional anonymous_user_use_case) and the updated getPrompts logic:

  • Explicitly validate action and promptId.
  • Route only get_prompt to getPrompt.
  • Return a clear error for legacy actions.

This matches the updated schema and server tool description and should make the API easier to reason about.

Also applies to: 71-103

src/data/onboarding-prompts.json (1)

2-68: Onboarding prompts V2 dataset is consistent with the new 5-option flow

The JSON now exposes exactly five onboarding prompts (onb2_01–onb2_05) with metadata that matches the updated onboarding message and server get_prompts description. Categories and secondary tags look coherent, and the structure aligns with PromptsData.

No issues from a data/contract perspective.

Comment on lines +932 to +964
```text
Retrieve a specific Desktop Commander onboarding prompt by ID and execute it.
IMPORTANT: When displaying prompt lists to users, do NOT show the internal prompt IDs (like 'onb_001').
These IDs are for your reference only. Show users only the prompt titles and descriptions.
The IDs will be provided in the response metadata for your use.
SIMPLIFIED ONBOARDING V2: This tool only supports direct prompt retrieval.
The onboarding system presents 5 options as a simple numbered list:
DESKTOP COMMANDER INTRODUCTION: If a user asks "what is Desktop Commander?" or similar questions
about what Desktop Commander can do, answer that there are example use cases and tutorials
available, then call get_prompts with action='list_prompts' and category='onboarding' to show them.
1. Organize my Downloads folder (promptId: 'onb2_01')
2. Explain a codebase or repository (promptId: 'onb2_02')
3. Create organized knowledge base (promptId: 'onb2_03')
4. Analyze a data file (promptId: 'onb2_04')
5. Check system health and resources (promptId: 'onb2_05')
ACTIONS:
- list_categories: Show all available prompt categories
- list_prompts: List prompts (optionally filtered by category)
- get_prompt: Retrieve and execute a specific prompt by ID
USAGE:
When user says "1", "2", "3", "4", or "5" from onboarding:
- "1" → get_prompts(action='get_prompt', promptId='onb2_01', anonymous_user_use_case='...')
- "2" → get_prompts(action='get_prompt', promptId='onb2_02', anonymous_user_use_case='...')
- "3" → get_prompts(action='get_prompt', promptId='onb2_03', anonymous_user_use_case='...')
- "4" → get_prompts(action='get_prompt', promptId='onb2_04', anonymous_user_use_case='...')
- "5" → get_prompts(action='get_prompt', promptId='onb2_05', anonymous_user_use_case='...')
WORKFLOW:
1. Use list_categories to see available categories
2. Use list_prompts to browse prompts in a category
3. Use get_prompt with promptId to retrieve and start using a prompt
ANONYMOUS USE CASE (REQUIRED):
Infer what GOAL or PROBLEM the user is trying to solve from conversation history.
Focus on the job-to-be-done, not just what they were doing.
GOOD (problem/goal focused):
"automating backup workflow", "converting PDFs to CSV", "debugging test failures",
"organizing project files", "monitoring server logs", "extracting data from documents"
BAD (too vague or contains PII):
"using Desktop Commander", "working on John's project", "fixing acme-corp bug"
EXAMPLES:
- get_prompts(action='list_categories') - See all categories
- get_prompts(action='list_prompts', category='onboarding') - See onboarding prompts
- get_prompts(action='get_prompt', promptId='onb_001') - Get a specific prompt
If unclear from context, use: "exploring tool capabilities"
The get_prompt action will automatically inject the prompt content and begin execution.
Perfect for discovering proven workflows and getting started with Desktop Commander.
The prompt content will be injected and execution begins immediately.
```

⚠️ Potential issue | 🟡 Minor

get_prompts description matches V2 flow, but “anonymous use case” is documented as required while treated as optional

The updated description correctly documents the 5-option onboarding menu and maps 1–5 to onb2_01–onb2_05, matching usageTracker.getOnboardingMessage and onboarding-prompts.json. The anonymous use case guidance is clear and concrete.

However, here it’s marked as REQUIRED, while both GetPromptsArgsSchema and getPrompt treat anonymous_user_use_case as optional. Either:

  • Make it truly required in the schema and code, or
  • Soften the wording here to “strongly recommended” / “if available from context” to match the actual contract.
🤖 Prompt for AI Agents
In src/server.ts around lines 932 to 964, the onboarding prompt block
incorrectly labels "ANONYMOUS USE CASE (REQUIRED)" while the
GetPromptsArgsSchema and getPrompt treat anonymous_user_use_case as optional;
update the comment to reflect the actual contract by changing that heading and
wording to indicate the anonymous use case is optional but strongly recommended
(e.g., "ANONYMOUS USE CASE (optional — strongly recommended / provide if
available from context)"), and adjust any example wording in that block to say
"if available" rather than implying it is required; alternatively, if you prefer
to make it required, update the GetPromptsArgsSchema and getPrompt
implementation to require anonymous_user_use_case and add validation
errors/messages accordingly—choose one approach and keep comment and
schema/implementation consistent.

@wonderwhy-er wonderwhy-er merged commit c4fc187 into main Nov 14, 2025
2 checks passed
