Add Prompt Optimizer Feature to Reduce Token Usage #5703

@blackgirlbytes

Description

This request comes from a Discord conversation in which two users asked for a prompt optimizer similar to Cursor's.

Problem

Users with detailed, long prompts (especially when defining app requirements) can quickly exhaust their token budget, particularly when using token-intensive models like Claude Sonnet 4. While Goose has an auto-compact feature that kicks in at 80% context window usage, optimizing prompts before they're sent would help reduce token consumption from the start.

Proposed Solution

Add a built-in prompt optimizer feature to Goose that can rephrase long requirements and descriptions into more concise versions while preserving the essential information and intent. This would be similar to features found in tools like Trae/Cursor.

Use Case

When a user provides a lengthy initial prompt with detailed requirements, the optimizer could:

  • Condense verbose descriptions while maintaining clarity
  • Remove redundant information
  • Restructure for token efficiency
  • Preserve all critical technical details and requirements

Benefits

  • Reduce token consumption, especially with expensive models
  • Extend conversation length before hitting context limits
  • Complement the existing auto-compact feature at 80% usage
  • Help users get more value from their token budget

Additional Context

Potential Implementation Ideas

  • Optional pre-processing step for user prompts
  • Integration with Claude or other LLMs for prompt refinement
  • User toggle to enable/disable optimization
  • Preview optimized prompt before sending
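The ideas above could be sketched as an optional pre-processing pass with a user toggle. The sketch below is a minimal heuristic placeholder (the `optimize_prompt` name and the whitespace/duplicate-sentence heuristics are illustrative assumptions, not existing Goose code); a real implementation would likely delegate the rewrite to an LLM and show the result for preview before sending.

```python
import re

def optimize_prompt(prompt: str, enabled: bool = True) -> str:
    """Heuristic prompt-optimizer sketch: collapse whitespace runs and
    drop exact-duplicate sentences while preserving order.
    A production version would instead ask an LLM to condense the
    prompt while preserving technical details."""
    if not enabled:  # user toggle: pass the prompt through untouched
        return prompt
    # Normalize runs of whitespace down to single spaces.
    text = re.sub(r"\s+", " ", prompt).strip()
    # Split into sentences and drop exact duplicates (case-insensitive).
    sentences = re.split(r"(?<=[.!?])\s+", text)
    seen, kept = set(), []
    for sentence in sentences:
        key = sentence.lower()
        if key not in seen:
            seen.add(key)
            kept.append(sentence)
    return " ".join(kept)

verbose = (
    "Build a todo app. The app must use SQLite.   "
    "Build a todo app. It needs user login."
)
optimized = optimize_prompt(verbose)
print(optimized)  # duplicate sentence and extra whitespace removed
```

Showing `optimized` to the user before sending would cover the "preview optimized prompt" idea, and the `enabled` flag covers the opt-in toggle.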

Metadata

Labels

  • enhancement (New feature or request)
  • help wanted (Great issue for non-Block contributors)
  • p2 (Priority 2 - Medium)
