All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
This release completely refactors the repository's directory structure for a more seamless and intuitive developer journey. It also adds support for deploying the latest accelerated embedding and reranking models across the cloud, data center, and workstation using NVIDIA NeMo Retriever NIM microservices.
- End-to-end RAG example enhancements
  - Single-command deployment for all the examples using Docker Compose.
  - All end-to-end RAG examples are now self-contained, with documentation, code, and deployment assets residing in a dedicated example-specific directory.
  - Segregated examples into basic and advanced RAG, each with a dedicated README.
  - Added reranker model support to the multi-turn RAG example.
  - Added a dedicated prompt configuration file for every example.
  - Removed Python dev packages from containers to enhance security.
  - Updated to the latest version of langchain-nvidia-ai-endpoints.
- Speech support in the RAG Playground
  - Added support to access Riva speech models from the NVIDIA API Catalog.
  - Speech support in the RAG Playground is opt-in.
- Documentation enhancements
  - Added more comprehensive how-to guides for the end-to-end RAG examples.
  - Added example-specific architecture diagrams in each example directory.
- Added a new industry-specific top-level directory.
- Added notebooks showcasing new use cases
  - Basic LangChain-based RAG pipeline using the latest NVIDIA API Catalog connectors.
  - Basic LlamaIndex-based RAG pipeline using the latest NVIDIA API Catalog connectors.
  - NeMo Guardrails with basic LangChain RAG.
  - NeMo Guardrails-based RAG using NVIDIA NIM microservices.
  - NeMo Evaluator using Llama 3.1 8B Instruct.
  - Agentic RAG pipeline with NeMo Retriever and NIM for LLMs.
- Added a new `community` (formerly `experimental`) example: a simple web interface for interacting with different selectable NIM endpoints. The interface supports designing a system prompt to call the LLM.
- Major restructuring and reorganization of the assets within the repository
  - The top-level `experimental` directory has been renamed to `community`.
  - The top-level `RetrievalAugmentedGeneration` directory has been renamed to just `RAG`.
  - The Docker Compose files inside the top-level `deploy` directory have been migrated to example-specific directories under `RAG/examples`. The vector database and on-prem NIM microservices deployment files are under `RAG/examples/local_deploy`.
  - The top-level `models` directory has been renamed to `finetuning`.
  - The top-level `notebooks` directory has been moved under `RAG/notebooks` and organized by framework.
  - The top-level `tools` directory has been migrated to `RAG/tools`.
  - The top-level `integrations` directory has been moved into `RAG/src`.
  - `RetrievalAugmentedGeneration/common` now resides under `RAG/src/chain_server`.
  - `RetrievalAugmentedGeneration/frontend` now resides under `RAG/src/rag_playground/default`.
  - The `5 mins RAG No GPU` example, previously under the top-level `examples` directory, is now under `community`.
- GitHub Pages based documentation has been replaced with markdown-based documentation.
- The top-level `examples` directory has been removed.
- The following notebooks were removed
This release switches all examples to use cloud-hosted, GPU-accelerated LLM and embedding models from the NVIDIA API Catalog by default. It also deprecates support for deploying on-prem models using the NeMo Inference Framework container and adds support for deploying accelerated generative AI models across the cloud, data center, and workstation using the latest NVIDIA NIM for LLMs.
- Added model auto-download and caching support for `nemo-retriever-embedding-microservice` and `nemo-retriever-reranking-microservice`. Updated steps to deploy the services can be found here.
- Multimodal RAG example enhancements
  - Moved to the PDF Plumber library for parsing text and images.
  - Added `pgvector` vector DB support.
  - Added support to ingest files with the .pptx extension.
  - Improved accuracy of image parsing by using tesseract-ocr.
- Added a new notebook showcasing a RAG use case using accelerated, NIM-based on-prem deployed models.
- Added a new experimental example showcasing how to create a developer-focused RAG chatbot using RAPIDS cuDF source code and API documentation.
- Added a new experimental example demonstrating how NVIDIA Morpheus, NIM microservices, and RAG pipelines can be integrated to create LLM-based agent pipelines.
- All examples now use Llama 3 models from the NVIDIA API Catalog by default. A summary of the updated examples and the models they use is available here.
- Switched the default embedding model of all examples to the Snowflake arctic-embed-l model.
- Added more verbose logs and support to configure the log level for the chain server using the `LOG_LEVEL` environment variable (see the sketch after this list).
- Bumped the versions of the `langchain-nvidia-ai-endpoints` and `sentence-transformers` packages and the `milvus` containers.
- Updated base containers to use the Ubuntu 22.04 image `nvcr.io/nvidia/base/ubuntu:22.04_20240212`.
- Added `llama-index-readers-file` as a dependency to avoid runtime package installation within the chain server.
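A minimal sketch of honoring such a variable with standard Python logging; the chain server's internal handling may differ:

```python
import logging
import os

# Read the desired level from the environment; default to INFO.
level_name = os.environ.get("LOG_LEVEL", "INFO").upper()
logging.basicConfig(level=getattr(logging, level_name, logging.INFO))
logging.getLogger(__name__).debug("Visible only when LOG_LEVEL=DEBUG")
```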
- Deprecated support for on-prem LLM model deployment using the NeMo Inference Framework container. Developers can use NVIDIA NIM for LLMs to deploy TensorRT-optimized models on-prem and plug them into the existing examples.
- Deprecated Kubernetes operator support.
- The `nvolveqa_40k` embedding model was deprecated from the NVIDIA API Catalog. Updated all notebooks and experimental artifacts to use the NVIDIA `embed-qa-4` model instead.
- Removed notebooks numbered 00-04, which used on-prem LLM model deployment with the deprecated NeMo Inference Framework container.
- Ability to switch between API Catalog models and on-prem models using NIM-LLM.
- New API endpoint: `/health` provides a health check for the chain server (see the sketch after this list).
- Containerized evaluation application for RAG pipeline accuracy measurement.
- Observability support for LangChain-based examples.
- New Notebooks
  - Added a Chat with NVIDIA financial data notebook.
  - Added a notebook showcasing LangGraph agent handling.
  - A simple RAG example template showcasing how to build an example from scratch.
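A minimal sketch of probing the health endpoint, assuming the chain server is reachable locally; the port is an assumption for illustration and depends on your compose configuration:

```python
import requests

# Port 8081 is illustrative; check your compose file for the actual value.
resp = requests.get("http://localhost:8081/health", timeout=5)
print(resp.status_code, resp.json())
```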
- Renamed the `csv_rag` example to `structured_data_rag`.
- Model engine name updates
  - The `nv-ai-foundation` and `nv-api-catalog` LLM engines have been renamed to `nvidia-ai-endpoints`.
  - The `nv-ai-foundation` embedding engine has been renamed to `nvidia-ai-endpoints`.
- Embedding model updates
  - The `developer_rag` example uses the UAE-Large-V1 embedding model.
  - API Catalog examples use `ai-embed-qa-4` instead of `nvolveqa_40k` as the embedding model.
- Ingested data now persists across multiple sessions.
- Updated langchain-nvidia-ai-endpoints to version 0.0.11, enabling support for models like Llama 3.
- File extension-based validation that throws an error for unsupported files (see the sketch after this list).
- The default output token length in the UI has been increased from 250 to 1024 for more comprehensive responses.
- Stricter chain-server API validation support to enhance API security
- Updated versions of llama-index and pymilvus.
- Updated the pgvector container to `pgvector/pgvector:pg16`.
- LLM model updates
  - The multi-turn chatbot now uses the `ai-mixtral-8x7b-instruct` model for response generation.
  - Structured data RAG now uses `ai-llama3-70b` for response and code generation.
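An illustrative sketch of extension-based validation; the accepted extension set and the error type used by the chain server are assumptions here:

```python
from pathlib import Path

SUPPORTED_EXTENSIONS = {".pdf", ".txt", ".md"}  # assumed set, for illustration

def validate_upload(filename: str) -> None:
    """Raise an error for files with unsupported extensions."""
    if Path(filename).suffix.lower() not in SUPPORTED_EXTENSIONS:
        raise ValueError(f"Unsupported file type: {filename}")

validate_upload("notes.txt")    # passes silently
# validate_upload("image.png")  # would raise ValueError
```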
This release adds new dedicated RAG examples showcasing state-of-the-art use cases, switches to the latest API Catalog endpoints from NVIDIA, and refactors the API interface of the chain server. It also improves the developer experience by adding GitHub Pages based documentation and by streamlining the example deployment flow using dedicated compose files.
- GitHub Pages based documentation.
- New examples showcasing
- Support for delete and list APIs in the chain-server component (see the sketch after this list).
- Streamlined RAG example deployment
  - Dedicated new Docker Compose files for every example.
  - Dedicated Docker Compose files for launching vector DB solutions.
- New configurations to control the top-k and confidence score of the retrieval pipeline.
- Added a notebook which covers how to train SLMs with various techniques using NeMo Framework.
- Added more experimental examples showcasing new use cases.
- New dedicated notebook showcasing a RAG pipeline using web pages.
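A hypothetical sketch of the list/delete document flow; the `/documents` path, port, and `filename` parameter are assumptions for illustration, not the exact chain-server schema:

```python
import requests

base = "http://localhost:8081"  # illustrative chain-server address

# List ingested documents.
print(requests.get(f"{base}/documents", timeout=10).json())

# Delete a previously ingested file by name.
requests.delete(f"{base}/documents", params={"filename": "report.pdf"}, timeout=10)
```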
- Switched from NVIDIA AI Foundation to NVIDIA API Catalog endpoints for accessing cloud-hosted LLM models.
- Refactored the API schema of the chain-server component to support runtime configuration of LLM parameters such as temperature, max tokens, and chat history (see the sketch after this list).
- Renamed the `llm-playground` service in compose files to `rag-playground`.
- Switched base containers for all components from pytorch to ubuntu, and optimized container build time as well as container size.
- Deprecated YAML-based configuration to avoid confusion; all configuration is now environment variable based.
- Removed the requirement to hardcode `NVIDIA_API_KEY` in the `compose.env` file.
- Upgraded all Python dependencies for the chain-server and rag-playground services.
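A hypothetical request sketch showing per-request generation parameters; the endpoint path, port, and field names are assumptions for illustration, not the exact chain-server schema:

```python
import requests

payload = {
    "messages": [{"role": "user", "content": "What is RAG?"}],
    "temperature": 0.3,  # generation parameters supplied at request time
    "max_tokens": 512,
}
resp = requests.post("http://localhost:8081/generate", json=payload, timeout=60)
print(resp.text)
```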
- Fixed a bug causing hallucinated answers when the retriever fails to return any documents.
- Fixed some accuracy issues for all the examples.
- New dedicated notebooks showcasing usage of cloud-based NVIDIA AI Playground models using LangChain connectors, as well as local model deployment using Hugging Face.
- Upgraded milvus container version to enable GPU-accelerated vector search.
- Added support to interact with models behind NeMo Inference Microservices using the new model engines `nemo-embed` and `nemo-infer`.
- Added support to provide an example-specific collection name for vector databases using an environment variable named `COLLECTION_NAME`.
- Added `faiss` as a generic vector database solution behind `utils.py` (a combined sketch covering this and `COLLECTION_NAME` follows this list).
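A minimal sketch combining both additions, assuming `langchain-community` and `faiss-cpu` are installed; the default collection name and the embedding model are illustrative:

```python
import os

from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

# Example-specific collection name, with an illustrative default.
collection = os.environ.get("COLLECTION_NAME", "canonical_rag")

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
store = FAISS.from_texts(["Example document text."], embeddings)
store.save_local(f"./faiss_index/{collection}")  # one index per collection
```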
- Upgraded and changed base containers for all components to pytorch `23.12-py3`.
- Added a LangChain-specific vector database connector in `utils.py`.
- Changed speech support to use a single channel for Riva ASR and TTS.
- Changed the `get_llm` utility in `utils.py` to return a LangChain wrapper instead of LlamaIndex wrappers (see the sketch after this list).
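A minimal sketch of the LangChain-style wrapper usage this change enables, assuming the `langchain-nvidia-ai-endpoints` package is installed and `NVIDIA_API_KEY` is set in the environment; the model name is illustrative:

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA

# Model name is illustrative; any catalog chat model id works here.
llm = ChatNVIDIA(model="meta/llama3-8b-instruct", temperature=0.2)
print(llm.invoke("Say hello in one short sentence.").content)
```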
- Fixed a bug causing empty ratings in the evaluation notebook.
- Fixed the document search implementation of the query decomposition example.
- New dedicated example showcasing NVIDIA AI Playground based models using LangChain connectors.
- New example demonstrating query decomposition.
- Support for using PG Vector as a vector database in the developer RAG canonical example (see the sketch after this list).
- Support for a speech-in, speech-out interface in the sample frontend leveraging Riva Skills.
- New tool showcasing RAG observability support.
- Support for on-prem deployment of TRT-LLM based Nemotron models.
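A minimal sketch of using pgvector through LangChain, assuming `langchain-community` is installed and a pgvector-enabled Postgres is reachable at the connection string below; the collection name, credentials, and embedding model are illustrative:

```python
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import PGVector

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
store = PGVector.from_texts(
    texts=["RAG keeps answers grounded in your own documents."],
    embedding=embeddings,
    collection_name="developer_rag",  # illustrative collection name
    connection_string="postgresql+psycopg2://user:password@localhost:5432/api",
)
print(store.similarity_search("What keeps answers grounded?", k=1))
```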
- Upgraded LangChain and LlamaIndex dependencies for all containers.
- Restructured README files for better intuitiveness.
- Added provision to plug in multiple examples using a common base class (see the sketch after this list).
- Changed the `minio` service's port from `9000` to `9010` in Docker-based deployment.
- Moved the `evaluation` directory from the top level to under `tools` and created a dedicated compose file.
- Added an experimental directory for plugging in experimental features.
- Modified notebooks to use TRT-LLM and NVIDIA AI Foundation based connectors from LangChain.
- Changed the `ai-playground` model engine name to `nv-ai-foundation` in configurations.
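An illustrative sketch of the plug-in pattern a common base class enables; the class and method names are assumptions, not the repository's actual API:

```python
from abc import ABC, abstractmethod

class BaseExample(ABC):
    """Contract every pluggable example implements."""

    @abstractmethod
    def ingest_docs(self, filepath: str) -> None:
        """Ingest documents into the example's vector store."""

    @abstractmethod
    def rag_chain(self, query: str) -> str:
        """Answer a query using retrieval-augmented generation."""

class CanonicalRAG(BaseExample):
    def ingest_docs(self, filepath: str) -> None:
        print(f"ingesting {filepath}")

    def rag_chain(self, query: str) -> str:
        return f"grounded answer to: {query}"

print(CanonicalRAG().rag_chain("What changed in this release?"))
```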
- Support for using NVIDIA AI Playground based LLM models.
- Support for using NVIDIA AI Playground based embedding models.
- Support for deploying and using quantized LLM models.
- Support for Kubernetes deployment using Helm charts.
- Support for evaluating RAG pipelines.
- Repository restructuring to allow better open-source contributions.
- Upgraded dependencies for the chain server container.
- Upgraded NeMo Inference Framework container version; no separate sign-up needed for access.
- Main README now provides more details.
- Documentation improvements.
- Better error handling and reporting mechanism for corner cases
- Renamed the `triton-inference-server` container to `llm-inference-server`.
- Fixed issue #13: pipeline not able to answer questions unrelated to the knowledge base.
- Fixed issue #12: type checking while uploading PDF files.