Releases · nomic-ai/gpt4all
v2.6.2
What's Changed
- Fix crash when deserializing chats with saved context from 2.5.x and earlier (#1859)
- New light mode and dark mode UI themes (#1876)
- Update to latest llama.cpp after merge of Nomic's Vulkan PR (#1819, #1883)
  - Much faster prompt processing on Linux and Windows thanks to re-enabled GPU support in the Vulkan backend
- Support offloading only some of the model's layers to the GPU if you have limited VRAM (#1890; see the sketch after this list)
- Support Maxwell and Pascal Nvidia GPUs (#1895)
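As a rough illustration of the new partial offload (a sketch, not the documented API surface): assuming the Python bindings of this release expose the layer count as an `ngl` constructor argument alongside `device`, and using an illustrative model filename, partial offload would look like this:

```python
# Sketch: offload only part of the model to the GPU when VRAM is limited.
# `ngl` (number of GPU layers) and the model filename are assumptions here,
# not guaranteed API; the remaining layers run on the CPU.
from gpt4all import GPT4All

model = GPT4All(
    "mistral-7b-instruct-v0.1.Q4_0.gguf",  # illustrative model file
    device="gpu",  # request the Vulkan GPU backend
    ngl=20,        # offload 20 layers; the rest stay on the CPU
)
print(model.generate("Why is the sky blue?", max_tokens=64))
```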
Fixes
- Don't show "retrieving localdocs" if there are no collections (#1874)
- Fix potential crash when loading fails due to insufficient VRAM (6db5307, Issue #1870)
- Fix VRAM leak when switching models (Issue #1840)
- Support Nomic Embed as a LocalDocs embedding model via Atlas (d14b95f)
New Contributors
- @realKarthikNair made their first contribution in #1871
Full Changelog: v2.6.1...v2.6.2
v2.6.1
What's Changed
- Update to the November 23rd version of llama.cpp (#1706)
- Fix AVX support by removing direct linking to AVX2 libs (#1750)
- Implement a configurable context length (#1749; see the sketch after this list)
- Update server.cpp to return valid created timestamps by @CalAlaera (#1763)
- Fix issue that caused v2.6.0 release to fail to load models
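A minimal sketch of the configurable context length, assuming the Python bindings surface it as an `n_ctx` constructor argument (mirroring the underlying llama.cpp setting); the model filename is illustrative:

```python
# Sketch: open a model with a 4096-token context window instead of the
# default. `n_ctx` here is an assumption mirroring llama.cpp's parameter.
from gpt4all import GPT4All

model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf", n_ctx=4096)
with model.chat_session():
    print(model.generate("Summarize our discussion so far.", max_tokens=128))
```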
New Contributors
- @moritz-t-w made their first contribution in #1697
- @CalAlaera made their first contribution in #1763
- @gerstrong made their first contribution in #1756
- @ThiloteE made their first contribution in #1793
Full Changelog: v2.5.4...v2.6.1
v2.5.4
What's Changed
- Fixed the gpt4all_api server after the GGUF changes, by @dpsalvatierra in #1659 (see the request sketch after this list)
- Add Orca 2 7B and 13B to the models list (#1672)
- Retry on network errors when downloading models in the chat UI (#1671)
- Fix a bug that caused the system prompt to be ignored in new chat sessions until the chat was cleared
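For context, a minimal request against the repaired API server; this sketch assumes the server is running locally on its default port 4891 with an OpenAI-compatible /v1/completions endpoint, and the model name is illustrative:

```python
# Sketch: query the local gpt4all_api server. Port, endpoint path, and model
# name are assumptions based on the server's OpenAI-compatible interface.
import requests

resp = requests.post(
    "http://localhost:4891/v1/completions",
    json={
        "model": "mistral-7b-instruct-v0.1.Q4_0.gguf",
        "prompt": "Hello, world!",
        "max_tokens": 32,
    },
)
body = resp.json()
print(body["choices"][0]["text"])
print(body["created"])  # Unix timestamp of the completion
```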
New Contributors
- @dpsalvatierra made their first contribution in #1659
Full Changelog: v2.5.3...v2.5.4
v2.5.3
What's Changed
- LocalDocs now uses text embeddings to query documents for more accurate retrieval (#1648; see the retrieval sketch after this list)
- Fix GUI hang with LocalDocs (#1658)
- TypeScript bindings: Vulkan and GGUF support, by @jacoobes in #1390
- Bindings: improve quality of error messages (#1625)
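To make the embedding-based retrieval concrete, here is an illustrative sketch of the general technique (embed the query and each document chunk, then rank chunks by cosine similarity); it assumes the bindings' Embed4All class and is not the actual LocalDocs implementation:

```python
# Sketch: embedding-based document retrieval in the style of LocalDocs.
# Embed4All usage is an assumption about the Python bindings' API.
import math
from gpt4all import Embed4All

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

embedder = Embed4All()
chunks = ["GPT4All runs language models locally.", "Vulkan enables GPU offload."]
chunk_vecs = [embedder.embed(c) for c in chunks]

query_vec = embedder.embed("How does GPU acceleration work?")
best_chunk, _ = max(zip(chunks, chunk_vecs), key=lambda cv: cosine(query_vec, cv[1]))
print(best_chunk)  # the most relevant chunk for the query
```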
New Contributors
- @aj-gameon made their first contribution in #1607
Full Changelog: v2.5.2...v2.5.3
v2.5.2
What's Changed
- backend: support the GGUFv3 file format (#1582; see the version-check sketch after this list)
- Important fixes for AMD GPUs
- Don't start recalculating context immediately for saved chats
- UI fixes for chat name generation
- UI fixes for leading whitespace in chat generation
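Since GGUFv3 support is a file-format change, a quick way to check what you have: per the GGUF specification, a file starts with the 4-byte magic b"GGUF" followed by a little-endian uint32 version. A small sketch (the filename is illustrative):

```python
# Sketch: read a GGUF file's header to report its format version.
import struct

def gguf_version(path: str) -> int:
    with open(path, "rb") as f:
        if f.read(4) != b"GGUF":
            raise ValueError(f"{path} is not a GGUF file")
        (version,) = struct.unpack("<I", f.read(4))  # little-endian uint32
        return version

print(gguf_version("mistral-7b-instruct-v0.1.Q4_0.gguf"))  # e.g. 3 for GGUFv3
```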
Full Changelog: v2.5.1...v2.5.2
v2.5.0-pre1
Pre-release 1 of version 2.5.0 is now available, with offline installers. It includes:
- Support for the GGUF file format only; older model file formats will no longer run
- A completely new set of models, including Mistral and Wizard v1.2 (see the generation sketch after this list)
- Restored support for the Falcon model (now GPU accelerated)
- Restored support for MPT
- Based on the latest llama.cpp as of late September
- Speed improvements and bugfixes to GPU support
- Improved GUI for CPU/GPU fallback
- Numerous other improvements
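As a starting point with the new GGUF-only model set, a minimal generation sketch using the Python bindings; the model filename is illustrative (the bindings download it on first use if it is not already present):

```python
# Sketch: load one of the new GGUF models and generate a short reply.
from gpt4all import GPT4All

model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")  # illustrative filename
with model.chat_session():
    print(model.generate("Name three uses of local LLMs.", max_tokens=64))
```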