Description
This request is based on a conversation I had in Discord where two users were hoping for some kind of prompt optimizer similar to Cursor's.
Problem
Users with detailed, long prompts (especially when defining app requirements) can quickly exhaust their token budget, particularly when using token-intensive models like Claude Sonnet 4. While Goose has an auto-compact feature that kicks in at 80% context window usage, optimizing prompts before they're sent would help reduce token consumption from the start.
Proposed Solution
Add a built-in prompt optimizer feature to Goose that can rephrase long requirements and descriptions into more concise versions while preserving the essential information and intent. This would be similar to features found in tools like Trae/Cursor.
Use Case
When a user provides a lengthy initial prompt with detailed requirements, the optimizer could:
- Condense verbose descriptions while maintaining clarity
- Remove redundant information
- Restructure for token efficiency
- Preserve all critical technical details and requirements
Benefits
- Reduce token consumption, especially with expensive models
- Extend conversation length before hitting context limits
- Complement the existing auto-compact feature at 80% usage
- Help users get more value from their token budget
Additional Context
- Current workaround: Use lead/worker multi-model setup to delegate work to cheaper models
- Related docs:
Potential Implementation Ideas
- Optional pre-processing step for user prompts
- Integration with Claude or other LLMs for prompt refinement
- User toggle to enable/disable optimization
- Preview optimized prompt before sending
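The ideas above could be combined into a small pre-processing hook. The sketch below is purely illustrative and not Goose's actual API: `optimize_prompt`, `OPTIMIZER_INSTRUCTIONS`, and the `complete` callable are all hypothetical names. Passing the completion function in as a parameter lets the optimization run on a cheaper model (mirroring the lead/worker workaround), and short prompts are passed through untouched, since optimizing them would cost more tokens than it saves.

```python
from typing import Callable

# Hypothetical meta-prompt; real wording would need tuning and evaluation.
OPTIMIZER_INSTRUCTIONS = (
    "Rewrite the following prompt to be as concise as possible while "
    "preserving every technical requirement, constraint, and the user's "
    "intent. Return only the rewritten prompt."
)


def optimize_prompt(
    prompt: str,
    complete: Callable[[str], str],
    min_chars: int = 500,
    enabled: bool = True,
) -> str:
    """Optional pre-processing step: condense a long prompt via an LLM.

    `complete` is any text-completion callable (e.g. a cheap worker
    model), so optimization need not spend tokens on the main model.
    """
    # User toggle, and a pass-through for prompts too short to be worth it.
    if not enabled or len(prompt) < min_chars:
        return prompt
    optimized = complete(f"{OPTIMIZER_INSTRUCTIONS}\n\n{prompt}").strip()
    # Fall back to the original if the rewrite is empty or actually longer.
    return optimized if 0 < len(optimized) < len(prompt) else prompt


# Usage with a stand-in completer; a real integration would call a model
# and could show this result to the user as a preview before sending.
fake_complete = lambda p: "Build a REST API for todo items with JWT auth."
long_prompt = "I would like you to please build for me a REST API " * 20
print(optimize_prompt(long_prompt, fake_complete))
```

The length-based fallback also doubles as a safety net: if the model's rewrite is somehow longer than the input, the user's original prompt is sent unchanged.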