1.0.1
🐛 Bug Fixes & Improvements
Enhanced Error Diagnostics
- Added stop reason to error messages when LLM fails to generate tool calls
- If stop reason is
length, increasemax_tokensparameter for the affected agent in provider settings - Improves troubleshooting and configuration optimization
DuckDuckGo Search Stability
- Migrated to new DuckDuckGo API with HTML response parsing
- Added comprehensive test coverage with real-world search scenarios
- Significantly improved reliability and result quality
Provider Guardrails Bypass
- Added explicit authorization framework to all agent prompts
- Prevents blocking by OpenAI, Anthropic, and Google Gemini content filters
- Clarified penetration testing context as pre-authorized activity
OpenAI Configuration Updates
- Temporarily switched from
gpt-5too4-minifor primary agent and assistant due to OpenAI prompt evaluation instability - Increased
max_tokenslimits across multiple agents for better output capacity - Recommendation: Enable Human-in-the-loop mode (
ASK_USER=truein.env) when using OpenAI provider for improved stability
Additional Improvements
- Enhanced message formatting in vector store communications with document match scores
- Improved clarity in generator and refiner prompts for user task interpretation
- Added customer interaction protocol for AskUser tool
Full Changelog: v1.0.0...v1.0.1