1.1.0

@asdek released this 17 Jan 23:30
c6f84e6

🔧 Bug Fixes & Improvements

LiteLLM Passthrough Support

  • Fixed Gemini provider compatibility issues preventing proper LiteLLM integration
  • All providers now support LiteLLM passthrough mode with standardized endpoints:
    • OpenAI: http://litellm:4000/openai/v1
    • Anthropic: http://litellm:4000/anthropic/v1
    • Gemini: http://litellm:4000/gemini
  • Tested and verified with LiteLLM v1.80.11-stable.1
  • Enhanced Gemini provider with custom HTTP transport for API key injection and URL rewriting
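
To illustrate the transport approach in the last bullet, here is a minimal Go sketch (not PentAGI's actual implementation) of an http.RoundTripper that injects a Gemini-style x-goog-api-key header and rewrites request URLs onto a passthrough base such as http://litellm:4000/gemini. The rewriteTransport and newPassthroughClient names are invented for this example.

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// rewriteTransport is an illustrative http.RoundTripper (not PentAGI's actual
// code) that injects an API key header and redirects every request to a
// passthrough base URL such as http://litellm:4000/gemini.
type rewriteTransport struct {
	base   http.RoundTripper // underlying transport doing the real work
	target *url.URL          // passthrough base, e.g. http://litellm:4000/gemini
	apiKey string            // key injected into each outgoing request
}

func (t *rewriteTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	clone := req.Clone(req.Context())
	clone.URL.Scheme = t.target.Scheme
	clone.URL.Host = t.target.Host
	clone.URL.Path = t.target.Path + req.URL.Path // keep the original route under the passthrough prefix
	clone.Host = t.target.Host
	clone.Header.Set("x-goog-api-key", t.apiKey) // Gemini accepts the key via this header
	return t.base.RoundTrip(clone)
}

// newPassthroughClient builds an *http.Client whose requests are rewritten to
// the given passthrough base URL with the API key attached.
func newPassthroughClient(baseURL, apiKey string) (*http.Client, error) {
	u, err := url.Parse(baseURL)
	if err != nil {
		return nil, err
	}
	return &http.Client{Transport: &rewriteTransport{
		base:   http.DefaultTransport,
		target: u,
		apiKey: apiKey,
	}}, nil
}

func main() {
	client, err := newPassthroughClient("http://litellm:4000/gemini", "sk-example")
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", client.Transport)
}
```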

Windows File Path Compatibility

  • Changed file mounting scheme in PentAGI container to resolve Windows path format issues
  • Migrated from host path mapping to fixed container paths for better cross-platform compatibility
  • Updated volume mounts:
    • PENTAGI_LLM_SERVER_CONFIG_PATH → /opt/pentagi/conf/custom.provider.yml
    • PENTAGI_OLLAMA_SERVER_CONFIG_PATH → /opt/pentagi/conf/ollama.provider.yml
    • PENTAGI_DOCKER_CERT_PATH → /opt/pentagi/docker/ssl
  • Migration: Installer v1.0.0 automatically migrates old settings to new schema
  • Users can now specify absolute paths in their host filesystem through installer forms

Ollama Single Model Configuration

  • Added OLLAMA_SERVER_MODEL environment variable to select a single model for all agents (see the example after this list)
  • Eliminates need to create custom provider configuration files for simple setups
  • Additional fine-tuning options:
    • OLLAMA_SERVER_PULL_MODELS_ENABLED - Control automatic model downloads (default: false)
    • OLLAMA_SERVER_LOAD_MODELS_ENABLED - Query available models on startup (default: false)
    • OLLAMA_SERVER_PULL_MODELS_TIMEOUT - Timeout for model pulls in seconds (default: 600)
  • Default model: llama3.1:8b-instruct-q8_0
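
As a rough sketch of how these variables might be consumed (illustrative only: ollamaSettings, loadOllamaSettings, and the helper functions are invented names, and PentAGI's actual configuration handling may differ), the single-model setup boils down to reading a handful of environment variables with the documented defaults:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"time"
)

// ollamaSettings mirrors the documented environment variables; the struct and
// helper names are invented for this example and are not PentAGI identifiers.
type ollamaSettings struct {
	Model       string        // OLLAMA_SERVER_MODEL, used for all agents
	PullModels  bool          // OLLAMA_SERVER_PULL_MODELS_ENABLED
	LoadModels  bool          // OLLAMA_SERVER_LOAD_MODELS_ENABLED
	PullTimeout time.Duration // OLLAMA_SERVER_PULL_MODELS_TIMEOUT (seconds)
}

func getenvBool(key string, def bool) bool {
	if v, err := strconv.ParseBool(os.Getenv(key)); err == nil {
		return v
	}
	return def
}

func getenvSeconds(key string, def int) time.Duration {
	if v, err := strconv.Atoi(os.Getenv(key)); err == nil && v > 0 {
		return time.Duration(v) * time.Second
	}
	return time.Duration(def) * time.Second
}

func loadOllamaSettings() ollamaSettings {
	model := os.Getenv("OLLAMA_SERVER_MODEL")
	if model == "" {
		model = "llama3.1:8b-instruct-q8_0" // documented default model
	}
	return ollamaSettings{
		Model:       model,
		PullModels:  getenvBool("OLLAMA_SERVER_PULL_MODELS_ENABLED", false),
		LoadModels:  getenvBool("OLLAMA_SERVER_LOAD_MODELS_ENABLED", false),
		PullTimeout: getenvSeconds("OLLAMA_SERVER_PULL_MODELS_TIMEOUT", 600),
	}
}

func main() {
	fmt.Printf("%+v\n", loadOllamaSettings())
}
```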

Installer v1.0.0

  • Bumped installer version to 1.0.0 with comprehensive stability improvements
  • Full Windows support with all configuration scenarios matching Linux and macOS
  • Disabled ANSI formatting in Docker Compose commands on Windows for cleaner console output
  • Automatic settings migration from old path variables to new schema:
    • DOCKER_CERT_PATH → PENTAGI_DOCKER_CERT_PATH
    • LLM_SERVER_CONFIG_PATH → PENTAGI_LLM_SERVER_CONFIG_PATH
    • OLLAMA_SERVER_CONFIG_PATH → PENTAGI_OLLAMA_SERVER_CONFIG_PATH
  • Enhanced error handling and validation throughout installation process
  • Recommended: download the latest installer, run "Apply changes" to migrate to the new file mounting scheme, then open the Maintenance tab and run "Update PentAGI" to download the new version that supports these options

Enhanced Terminal Command Handling

  • Improved agents' understanding of empty results from synchronous terminal commands
  • Enhanced background command processing with asynchronous result capture
  • Introduced quick check timeout for background command execution
  • Clearer feedback on command failures and silent successes
  • More accurate success messages reflecting actual command execution outcomes
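
A simplified Go sketch of the quick-check idea (runBackground and its behavior are invented for illustration, not PentAGI's actual API): start the command, wait briefly for an early exit, and otherwise report that it is still running so its results can be captured asynchronously later.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runBackground starts a command, waits up to quickCheck for it to finish, and
// reports an early result (including an early failure) or confirms that the
// command is still running. Names here are illustrative, not PentAGI's API.
func runBackground(name string, args []string, quickCheck time.Duration) (string, error) {
	cmd := exec.Command(name, args...)
	if err := cmd.Start(); err != nil {
		return "", fmt.Errorf("failed to start %q: %w", name, err)
	}

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case err := <-done:
		if err != nil {
			return "", fmt.Errorf("command exited early with error: %w", err)
		}
		// A silent success: the command finished within the quick check window.
		return "command completed within the quick check window", nil
	case <-time.After(quickCheck):
		// Still running: results will be captured asynchronously later.
		return fmt.Sprintf("command is running in background (pid %d)", cmd.Process.Pid), nil
	}
}

func main() {
	msg, err := runBackground("sleep", []string{"5"}, 500*time.Millisecond)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(msg)
}
```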

Additional Improvements

  • Fixed nil pointer dereference in Graphiti client methods
  • Enhanced error handling for invalid server and proxy URLs during provider initialization (see the sketch after this list)
  • Improved Docker Compose command handling on Windows systems
  • Added comprehensive tests for Gemini API key injection and URL rewriting
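
For the URL validation bullet above, a minimal sketch of the kind of defensive check involved (validateServerURL is an invented name; the real checks in PentAGI may differ):

```go
package main

import (
	"fmt"
	"net/url"
)

// validateServerURL is an illustrative guard of the kind applied when a
// provider is initialized with a server or proxy URL: reject values that are
// empty, unparsable, or missing a scheme/host instead of failing later at
// request time.
func validateServerURL(raw string) (*url.URL, error) {
	if raw == "" {
		return nil, fmt.Errorf("server URL is empty")
	}
	u, err := url.Parse(raw)
	if err != nil {
		return nil, fmt.Errorf("invalid server URL %q: %w", raw, err)
	}
	if u.Scheme != "http" && u.Scheme != "https" {
		return nil, fmt.Errorf("server URL %q must use http or https", raw)
	}
	if u.Host == "" {
		return nil, fmt.Errorf("server URL %q has no host", raw)
	}
	return u, nil
}

func main() {
	for _, raw := range []string{"http://litellm:4000/openai/v1", "litellm:4000", ""} {
		if _, err := validateServerURL(raw); err != nil {
			fmt.Println("rejected:", err)
			continue
		}
		fmt.Println("accepted:", raw)
	}
}
```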

Full Changelog: v1.0.1...v1.1.0