🎉 v0.1.3 Release
🐳 Docker Support for Advanced Codecs
- Cross-platform H.265/HEVC encoding - No more codec dependency nightmares!
- Automated Docker container management for non-MP4 codecs
- Works seamlessly on Windows (WSL), macOS, and Linux
- Handles all FFmpeg operations in an isolated environment
🤖 Multi-LLM Provider Support
- Added Google Gemini support - use `provider='google'` in MemvidChat
- Added Anthropic Claude support - use `provider='anthropic'` in MemvidChat
- New modular `LLMClient` class for easy provider management
- Consistent interface across all LLM providers
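The modular `LLMClient` above can be pictured as a thin dispatcher over provider backends. The sketch below is purely illustrative: the provider names match the release notes, but the class internals and the `chat` method are assumptions, not the actual memvid API.

```python
# Illustrative sketch of a provider-agnostic LLM client dispatcher.
# Provider names mirror the release notes; the internals are hypothetical.

class LLMClient:
    """Route chat requests to a provider-specific backend."""

    SUPPORTED = {"openai", "google", "anthropic"}

    def __init__(self, provider: str = "openai"):
        if provider not in self.SUPPORTED:
            raise ValueError(f"Unknown provider: {provider!r}")
        self.provider = provider

    def chat(self, message: str) -> str:
        # A real backend would call the provider SDK here; this stub
        # echoes the message so the example stays self-contained.
        return f"[{self.provider}] {message}"

client = LLMClient(provider="google")
print(client.chat("hello"))
```

The value of this shape is that callers depend on one interface and switch providers with a single string, which is exactly the "consistent interface" goal listed above.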
⚙️ Enhanced Configuration System
- Centralized configuration management via `config.py`
- Per-codec configuration profiles for optimal compression
- Flexible FFmpeg parameter customization
- Support for different video container formats (MP4, MKV, AVI)
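Per-codec profiles like those described above can be represented as a simple mapping from codec name to encoder settings. This is a hypothetical sketch: the profile keys and values are illustrative, not the actual contents of `config.py`.

```python
# Hypothetical per-codec profiles; the fields (crf, preset, container)
# mirror options mentioned in these notes, but the values are illustrative.
CODEC_PROFILES = {
    "h264": {"crf": 23, "preset": "medium", "container": "mp4"},
    "h265": {"crf": 28, "preset": "medium", "container": "mkv"},
    "mp4v": {"quality": 80, "container": "avi"},
}

def get_profile(codec: str) -> dict:
    """Return a copy of the profile so callers can override fields
    without mutating the shared defaults."""
    return dict(CODEC_PROFILES[codec])

profile = get_profile("h265")
profile["crf"] = 26  # per-call override, defaults stay untouched
```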
✨ New Examples
`codec_comparison.py`
Compare different video codecs side-by-side:
- Test H.264, H.265, and MP4V compression ratios
- Benchmark encoding/decoding performance
- Find the optimal codec for your use case
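The core of a side-by-side comparison like `codec_comparison.py` is ranking codecs by compression ratio. The helper below is a minimal sketch; the file sizes are made-up illustrative numbers, not benchmark results.

```python
# Compare codecs by compression ratio; sizes here are illustrative only.

def compression_ratio(original_bytes: int, encoded_bytes: int) -> float:
    """Ratio > 1 means the encoded file is smaller than the original."""
    return original_bytes / encoded_bytes

original = 50_000_000  # hypothetical raw size in bytes
encoded = {"h264": 12_500_000, "h265": 7_800_000, "mp4v": 21_000_000}

# Pick the codec with the highest ratio (smallest output).
best = max(encoded, key=lambda c: compression_ratio(original, encoded[c]))
```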
`file_chat.py`
Enhanced document processing and chat:
- Process entire directories or specific files
- Configurable chunking parameters
- Support for PDF, EPUB, HTML, and text files
- Load and chat with existing memories
- Graceful FAISS index fallback for small datasets
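The "configurable chunking parameters" above boil down to a chunk size and an overlap, so adjacent chunks share context instead of cutting sentences mid-thought. This is a generic sketch of that technique; the parameter names are assumptions, not memvid's actual signature.

```python
# Minimal sketch of fixed-size chunking with overlap; parameter names
# (chunk_size, overlap) are assumptions, not the actual memvid API.

def chunk_text(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
    """Split text into chunks of at most chunk_size characters, where each
    chunk repeats the last `overlap` characters of the previous one."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("abcdefghij", chunk_size=4, overlap=2)
```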
🔧 Improvements
Better Error Handling
- Improved error messages for missing dependencies
- Better handling of codec-specific issues
Configuration Flexibility
- Customizable chunk sizes and overlap
- Per-codec video parameters (CRF, preset, profile)
- Configurable frame rates and sizes
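Parameters like these are ultimately handed to FFmpeg on the command line. The sketch below assembles an argument list from a params dict; `-c:v`, `-crf`, `-preset`, and `-r` are real FFmpeg flags, while the dict shape and file names are illustrative assumptions.

```python
# Sketch: turn per-codec parameters into an FFmpeg argument list.
# The flags are real FFmpeg options; the params dict is hypothetical.

def ffmpeg_args(src: str, dst: str, params: dict) -> list[str]:
    return [
        "ffmpeg", "-y", "-i", src,
        "-c:v", params["codec"],            # e.g. libx264, libx265
        "-crf", str(params["crf"]),         # quality (lower = better)
        "-preset", params["preset"],        # speed/compression trade-off
        "-r", str(params.get("fps", 30)),   # frame rate, default 30
        dst,
    ]

cmd = ffmpeg_args("frames.mp4", "memory.mkv",
                  {"codec": "libx265", "crf": 28, "preset": "slow"})
```

Building the list explicitly (rather than a shell string) avoids quoting bugs and makes the command easy to run via `subprocess.run(cmd)`.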
Package Structure
- Moved LLM providers to optional dependencies: `pip install memvid[llm]`
- Added EPUB support as optional: `pip install memvid[epub]`
- Core dependencies remain minimal
📦 Installation
```bash
# Basic installation
pip install memvid==0.1.3
```
API Keys
Set your API keys as environment variables:
```bash
export OPENAI_API_KEY="sk-..."
export GOOGLE_API_KEY="AIza..."
export ANTHROPIC_API_KEY="sk-ant-..."
```
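At runtime you can check which providers are actually configured before choosing one. The environment variable names below come from the exports above; the helper itself is an illustrative sketch, not part of memvid.

```python
import os

# Map providers to the env vars listed above; the helper is illustrative.
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "google": "GOOGLE_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
}

def available_providers() -> list[str]:
    """Return the providers whose API key is set (and non-empty)."""
    return [p for p, var in PROVIDER_ENV_VARS.items() if os.environ.get(var)]
```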
🙏 Acknowledgments
Special thanks to our contributors who made this release possible with Docker support, codec testing, and multi-LLM integration!