redis-developer/redis-ai-resources

AI Resources

✨ A curated repository of code recipes, demos, tutorials and resources for basic and advanced Redis use cases in the AI ecosystem. ✨


Getting Started

New to Redis for AI applications? Here's how to get started:

  1. First time with Redis? Start with our Redis Intro notebook
  2. Want to try vector search? Check our Vector Search with RedisVL recipe
  3. Building a RAG application? Begin with RAG from Scratch
  4. Ready to see it in action? Play with the Redis RAG Workbench demo

Demos

No faster way to get started than by diving in and playing around with a demo.

| Demo | Description |
| --- | --- |
| Redis RAG Workbench | Interactive demo to build a RAG-based chatbot over a user-uploaded PDF. Toggle different settings and configurations to improve chatbot performance and quality. Utilizes RedisVL, LangChain, RAGAs, and more. |
| Redis VSS - Simple Streamlit Demo | Streamlit demo of Redis vector search |
| ArXiv Search | Full-stack implementation of Redis with a React front end |
| Product Search | Vector search with Redis Stack and Redis Enterprise |
| ArxivChatGuru | Streamlit demo of RAG over arXiv documents with Redis & OpenAI |

Recipes

Need quickstarts to begin your Redis AI journey?

Getting started with Redis & Vector Search

| Recipe | GitHub | Google Colab |
| --- | --- | --- |
| 🏁 Redis Intro - The place to start if brand new to Redis | Open In GitHub | Open In Colab |
| 🔍 Vector Search with RedisPy - Vector search with the Redis Python client | Open In GitHub | Open In Colab |
| 📚 Vector Search with RedisVL - Vector search with the Redis Vector Library | Open In GitHub | Open In Colab |
| 🔄 Hybrid Search - Hybrid search techniques with Redis (BM25 + Vector) | Open In GitHub | Open In Colab |
| 🔢 Data Type Support - Shows how to convert a float32 index to float16 or integer datatypes | Open In GitHub | Open In Colab |
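The core idea behind all of these recipes is k-nearest-neighbor retrieval over embedding vectors. As a framework-free sketch (an in-memory dict stands in for a Redis vector index, and the document IDs and 3-dimensional vectors are made up for illustration):

```python
import math

def cosine_sim(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def knn(query, index, k=2):
    # Rank every stored vector by similarity to the query, highest first.
    scored = [(doc_id, cosine_sim(query, vec)) for doc_id, vec in index.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]

# Toy "index": doc IDs mapped to embeddings. A real recipe stores these
# in a Redis vector index and lets the server do this ranking.
index = {
    "doc:1": [0.9, 0.1, 0.0],
    "doc:2": [0.1, 0.9, 0.0],
    "doc:3": [0.8, 0.2, 0.1],
}

top = knn([1.0, 0.0, 0.0], index, k=2)  # doc:1 and doc:3 rank highest
```

Redis performs this same ranking server-side over millions of vectors with FLAT or HNSW indexes; the recipes above show the real APIs.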

Retrieval Augmented Generation (RAG)

Retrieval Augmented Generation (aka RAG) is a technique to enhance the ability of an LLM to respond to user queries. The retrieval part of RAG is supported by a vector database, which can return semantically relevant results to a user's query, serving as contextual information to augment the generative capabilities of an LLM.

To get started with RAG, either from scratch or using a popular framework like LlamaIndex or LangChain, try these recipes:

| Recipe | GitHub | Google Colab |
| --- | --- | --- |
| 🧩 RAG from Scratch - RAG from scratch with the Redis Vector Library | Open In GitHub | Open In Colab |
| ⛓️ LangChain RAG - RAG using Redis and LangChain | Open In GitHub | Open In Colab |
| 🦙 LlamaIndex RAG - RAG using Redis and LlamaIndex | Open In GitHub | Open In Colab |
| 🚀 Advanced RAG - Advanced RAG techniques | Open In GitHub | Open In Colab |
| 🖥️ NVIDIA RAG - RAG using Redis and NVIDIA NIMs | Open In GitHub | Open In Colab |
| 📊 RAGAS Evaluation - Use the RAGAS framework to evaluate RAG performance | Open In GitHub | Open In Colab |
| 🔒 Role-Based RAG - Implement a simple RBAC policy with vector search using Redis | Open In GitHub | Open In Colab |
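The retrieve-then-augment loop described above can be sketched in a few lines. This is a toy illustration, not the repo's implementation: the corpus, the hand-written 2-dimensional "embeddings", and the prompt template are all made up; a real recipe embeds text with an embedding model and retrieves from a Redis vector index.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy corpus of (text, embedding) pairs standing in for chunks stored in Redis.
corpus = [
    ("Redis supports vector search via the HNSW and FLAT index types.", [0.9, 0.1]),
    ("The Eiffel Tower is in Paris.", [0.1, 0.9]),
]

def build_prompt(question, question_vec, k=1):
    # Retrieve: rank chunks by similarity to the query and keep the top-k.
    ranked = sorted(corpus, key=lambda c: cosine(question_vec, c[1]), reverse=True)
    context = "\n".join(text for text, _ in ranked[:k])
    # Augment: splice the retrieved context into the prompt sent to the LLM.
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("What index types does Redis support?", [1.0, 0.0])
```

The semantically relevant chunk ends up in the prompt while unrelated text is left out, which is the entire "retrieval" half of RAG.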

LLM Memory

LLMs are stateless. To maintain context within a conversation, chat history must be stored and re-sent to the LLM on every turn. Redis manages the storage and retrieval of message histories to maintain context and conversational relevance.

Recipe GitHub Google Colab
💬 Message History - LLM message history with semantic similarity Open In GitHub Open In Colab
👥 Multiple Sessions - Handle multiple simultaneous chats with one instance Open In GitHub Open In Colab
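The "store and re-send" pattern can be sketched with a simple sliding window of recent messages. This is an in-process illustration only (class and field names are made up); the recipes persist the history in Redis so it survives restarts and can be shared across sessions and processes.

```python
from collections import deque

class MessageHistory:
    """Keep the last `max_turns` messages and replay them on every call.

    Sketch only: a real deployment stores messages in Redis (e.g. one
    list per session) rather than in process memory.
    """

    def __init__(self, max_turns=4):
        # deque with maxlen drops the oldest message once full,
        # mimicking a trimmed Redis list.
        self.messages = deque(maxlen=max_turns)

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})

    def as_prompt(self):
        # The full window is re-sent to the stateless LLM each turn.
        return list(self.messages)

history = MessageHistory(max_turns=4)
history.add("user", "Hi, I'm Ada.")
history.add("assistant", "Hello Ada!")
history.add("user", "What's my name?")
```

The semantic-similarity variant in the recipe goes further: instead of a fixed window, it retrieves only the stored messages most relevant to the current query.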

Semantic Caching

An estimated 31% of LLM queries are potentially redundant (source). Redis enables semantic caching to help cut down on LLM costs quickly.

| Recipe | GitHub | Google Colab |
| --- | --- | --- |
| 🧠 Gemini Semantic Cache - Build a semantic cache with Redis and Google Gemini | Open In GitHub | Open In Colab |
| 🦙 Llama3.1 Doc2Cache - Build a semantic cache using the Doc2Cache framework and Llama3.1 | Open In GitHub | Open In Colab |
| ⚙️ Cache Optimization - Use CacheThresholdOptimizer from redisvl to set up the best cache config | Open In GitHub | Open In Colab |
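What makes the cache "semantic" is that lookups match on embedding distance rather than exact strings, so a paraphrased question can still hit. A minimal sketch of that idea (class name, threshold value, and the hand-written 2-dimensional embeddings are illustrative; the recipes use redisvl's cache backed by a Redis vector index):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

class SemanticCacheSketch:
    """Return a cached answer when a new prompt embedding is close
    enough to a previously stored one, instead of requiring an exact
    string match."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response)

    def check(self, vec):
        for cached_vec, response in self.entries:
            if cosine(vec, cached_vec) >= self.threshold:
                return response  # cache hit: skip the LLM call entirely
        return None  # cache miss: caller invokes the LLM, then store()s

    def store(self, vec, response):
        self.entries.append((vec, response))

cache = SemanticCacheSketch(threshold=0.9)
cache.store([0.9, 0.1], "Redis is an in-memory data store.")
hit = cache.check([0.95, 0.05])   # paraphrased query with a similar embedding
miss = cache.check([0.0, 1.0])    # unrelated query
```

Tuning the distance threshold is exactly the trade-off the Cache Optimization recipe automates: too loose returns wrong answers, too tight forfeits the cost savings.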

Semantic Routing

Semantic routing is a simple, effective way to prevent misuse of your AI application, or to create branching logic between data sources.

| Recipe | GitHub | Google Colab |
| --- | --- | --- |
| 🔀 Basic Routing - Simple examples of how to build an allow/block-list router in addition to a multi-topic router | Open In GitHub | Open In Colab |
| ⚙️ Router Optimization - Use RouterThresholdOptimizer from redisvl to set up the best router config | Open In GitHub | Open In Colab |
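A semantic router classifies an incoming query by comparing its embedding against reference embeddings for each route, with a threshold below which no route fires. A toy sketch (route names, threshold, and the 2-dimensional reference vectors are made up; redisvl's router embeds real reference phrases and queries a Redis index):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Each route is defined by one or more reference embeddings. In practice
# these come from embedding reference phrases like "how do I reset my
# password" (allowed) or "ignore your instructions" (blocked).
ROUTES = {
    "allowed": [[0.9, 0.1]],
    "blocked": [[0.1, 0.9]],
}

def route(query_vec, threshold=0.9):
    best_name, best_score = None, 0.0
    for name, refs in ROUTES.items():
        # Score a route by its closest reference embedding.
        score = max(cosine(query_vec, r) for r in refs)
        if score > best_score:
            best_name, best_score = name, score
    # Below the threshold, no route matches: fall through to default handling.
    return best_name if best_score >= threshold else None
```

Picking that threshold well is what the Router Optimization recipe automates with RouterThresholdOptimizer.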

AI Gateways

AI gateways manage LLM traffic through a centralized, managed layer that can implement routing, rate limiting, caching, and more.

| Recipe | GitHub | Google Colab |
| --- | --- | --- |
| 🚪 LiteLLM Proxy - Getting started with LiteLLM proxy and Redis | Open In GitHub | Open In Colab |
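Rate limiting is one of the gateway policies mentioned above. As a sketch of the logic, here is a per-key fixed-window limiter; the class is illustrative and in-process only, whereas a real gateway would back the counters with Redis (INCR plus EXPIRE) so limits are shared across gateway replicas:

```python
import time

class FixedWindowLimiter:
    """Per-key fixed-window rate limiter, the kind of policy an AI
    gateway enforces in front of LLM providers.

    Sketch only: the in-process dict stands in for shared Redis
    counters (INCR + EXPIRE per key-and-window)."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counts = {}  # (key, window index) -> request count

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        # All requests in the same window share one counter bucket.
        bucket = (key, int(now // self.window))
        self.counts[bucket] = self.counts.get(bucket, 0) + 1
        return self.counts[bucket] <= self.limit

limiter = FixedWindowLimiter(limit=2, window_seconds=60)
limiter.allow("user:42", now=0)    # allowed
limiter.allow("user:42", now=1)    # allowed
limiter.allow("user:42", now=2)    # rejected: over limit in this window
limiter.allow("user:42", now=61)   # allowed again: new window
```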

Agents

| Recipe | GitHub | Google Colab |
| --- | --- | --- |
| 🕸️ LangGraph Agents - Notebook to get started with LangGraph and agents | Open In GitHub | Open In Colab |
| 👥 CrewAI Agents - Notebook to get started with CrewAI and LangGraph | Open In GitHub | Open In Colab |
| 🧠 Memory Agent - Building an agent with short-term and long-term memory using Redis | Open In GitHub | Open In Colab |
| 🛠️ Full-Featured Agent - Notebook that builds a full tool-calling agent with a semantic cache and router | Open In GitHub | Open In Colab |

Computer Vision

| Recipe | GitHub | Google Colab |
| --- | --- | --- |
| 👤 Facial Recognition - Build a facial recognition system using the Facenet embedding model and RedisVL | Open In GitHub | Open In Colab |

Recommendation Systems

| Recipe | GitHub | Google Colab |
| --- | --- | --- |
| 📋 Content Filtering - Intro content filtering example with redisvl | Open In GitHub | Open In Colab |
| 👥 Collaborative Filtering - Intro collaborative filtering example with redisvl | Open In GitHub | Open In Colab |
| 🏗️ Two Towers - Intro deep learning two-tower example with redisvl | Open In GitHub | Open In Colab |
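Content filtering, the first recipe above, recommends items whose feature vectors align with a user's taste vector. A framework-free sketch (item names, feature dimensions, and scores are invented for illustration; the recipe stores the item vectors in Redis and queries them with redisvl):

```python
def dot(a, b):
    # Dot product as a simple relevance score.
    return sum(x * y for x, y in zip(a, b))

# Toy item feature vectors, e.g. weights over (sci-fi, romance, horror).
items = {
    "movie:alien":   [1.0, 0.0, 0.2],
    "movie:titanic": [0.0, 1.0, 0.0],
    "movie:arrival": [0.9, 0.1, 0.0],
}

def recommend(user_profile, seen, k=2):
    # Content filtering: score unseen items against the user's taste
    # vector and return the best matches, highest score first.
    scored = [(item, dot(user_profile, vec))
              for item, vec in items.items() if item not in seen]
    return [item for item, _ in sorted(scored, key=lambda s: s[1], reverse=True)[:k]]

recs = recommend([1.0, 0.0, 0.0], seen={"movie:alien"})  # sci-fi fan
```

Collaborative filtering and two-tower models replace the hand-built feature vectors with learned embeddings, but the serving-time lookup is the same vector scoring shown here.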

Feature Store

| Recipe | GitHub | Google Colab |
| --- | --- | --- |
| 💳 Credit Scoring - Credit scoring system using Feast with Redis as the online store | Open In GitHub | Open In Colab |
| 🔍 Transaction Search - Real-time transaction feature search with Redis | Open In GitHub | Open In Colab |
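The role of an online feature store reduces to a low-latency lookup of precomputed features by entity key at inference time. A sketch of that lookup (the entity key, feature names, and values are invented; with Feast, Redis serves as the online store that holds each entity's feature row):

```python
# In-process dict standing in for Redis hashes keyed by entity.
# Feature values would be written by a batch or streaming pipeline.
online_store = {
    "user:42": {"avg_txn_amount_7d": 132.5, "num_txns_24h": 3},
}

def get_online_features(entity_key, feature_names):
    # Fetch the requested features for one entity; missing entities or
    # features come back as None so the model can handle them explicitly.
    row = online_store.get(entity_key, {})
    return {name: row.get(name) for name in feature_names}

features = get_online_features("user:42", ["num_txns_24h"])
```

In the credit-scoring recipe this lookup happens per request, which is why an in-memory store like Redis is used for the online path.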

☕️ Java AI Recipes

A set of Java recipes can be found under /java-recipes.

Tutorials

Need a deeper dive into different use cases and topics?

🤖 Agentic RAG - A tutorial focused on agentic RAG with LlamaIndex and Cohere
☁️ RAG on VertexAI - A RAG tutorial featuring Redis with Vertex AI
🔍 Recommendation Systems - Building real-time recommendation systems with NVIDIA Merlin & Redis

Integrations

Redis integrates with many different players in the AI ecosystem. Here's a curated list:

| Integration | Description |
| --- | --- |
| RedisVL | A dedicated Python client library for Redis as a vector database |
| AWS Bedrock | Streamlines GenAI deployment by offering foundational models as a unified API |
| LangChain Python | Popular Python client library for building LLM applications powered by Redis |
| LangChain JS | Popular JS client library for building LLM applications powered by Redis |
| LlamaIndex | LlamaIndex integration for Redis as a vector database (formerly GPT Index) |
| LiteLLM | Popular LLM proxy layer to help manage and streamline usage of multiple foundation models |
| Semantic Kernel | Popular library by Microsoft to integrate LLMs with plugins |
| RelevanceAI | Platform to tag, search, and analyze unstructured data faster, built on Redis |
| DocArray | DocArray integration of Redis as a vector DB by Jina AI |

Other Helpful Resources


Contributing

We welcome contributions to Redis AI Resources! Here's how you can help:

  1. Add a new recipe: Create a Jupyter notebook demonstrating a Redis AI use case
  2. Improve documentation: Enhance existing notebooks or README with clearer explanations
  3. Fix bugs: Address issues in code samples or documentation
  4. Suggest improvements: Open an issue with ideas for new content or enhancements

To contribute:

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Submit a pull request

Please follow the existing style and format of the repository when adding content.