This repository contains two examples demonstrating how to integrate a Gaia Node (providing an OpenAI-compatible API for local LLMs) with Redis to enhance your AI applications.
- Gaia Node: You must have a Gaia Node running locally or accessible over the network. Note down its API URL (e.g., `http://localhost:8080/v1`), the model name it serves (e.g., `llama3b`), and generate an API key.
- Redis: You need a Redis server running. Standard Redis Open Source is sufficient for these examples.
- Python Environment: Ensure you have Python installed. It's recommended to use a virtual environment.
- Clone or Download: Obtain the example files (`basic.py`, `intermediate.py`) and the `requirements.txt` file.
- Create a `.env` File: In the directory with your scripts, create a file named `.env` and add your Gaia Node credentials (a quick loading sketch follows these setup steps):

      GAIA_NODE_URL=http://your-gaia-node-address:port/v1   # e.g., http://localhost:8080/v1
      GAIA_API_KEY=your_actual_gaia_api_key
      GAIA_MODEL_NAME=your_model_name                        # e.g., llama3b
      # REDIS_URL=redis://localhost:6379                     # Optional, defaults to local Redis
- Install Dependencies: Install the required Python packages:

      pip install -r requirements.txt

- Run an Example: Execute either Streamlit application:

      streamlit run basic.py
      streamlit run intermediate.py
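Once the `.env` file is in place, you can sanity-check that the values are picked up and that your Gaia Node answers. This is a minimal sketch, assuming `python-dotenv` and `langchain-openai` are available (install them separately if they are not already in `requirements.txt`):

```python
# Quick connectivity check (illustrative, not part of the repo)
import os

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

load_dotenv()  # reads GAIA_NODE_URL, GAIA_API_KEY, GAIA_MODEL_NAME from .env

llm = ChatOpenAI(
    base_url=os.getenv("GAIA_NODE_URL"),   # e.g. http://localhost:8080/v1
    api_key=os.getenv("GAIA_API_KEY"),
    model=os.getenv("GAIA_MODEL_NAME"),    # e.g. llama3b
)

print(llm.invoke("Say hello in one short sentence.").content)
```

If this prints a response, the same settings will work for both Streamlit apps.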
This example sets up the fundamental connection between your application, a Gaia Node, and Redis.
- Gaia Node: Sends chat messages to your Gaia Node using the `langchain-openai` library and displays the AI's response.
- Redis: Demonstrates simple storing and retrieving of data (like a message or setting) using Redis as a fast key-value store.
- UI: Provides a chat interface for the Gaia Node and a simple input section in the sidebar to interact with Redis.
- Connecting to your Gaia Node's API.
- Connecting to a Redis instance.
- Performing basic `SET` and `GET` operations in Redis (sketched below).
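For reference, the Redis side of the basic example boils down to a key-value round trip like the sketch below (the key and value here are illustrative, not necessarily the ones `basic.py` uses); pair it with the Gaia client sketched in the setup section for the chat side:

```python
# Sketch: simple SET/GET round trip against Redis (illustrative key and value)
import os

import redis

# REDIS_URL is optional in .env; default to a local Redis instance
r = redis.Redis.from_url(
    os.getenv("REDIS_URL", "redis://localhost:6379"),
    decode_responses=True,  # return str instead of bytes
)

r.set("demo:message", "Hello from Redis!")  # SET: store a value under a key
print(r.get("demo:message"))                # GET: read it back
```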
This example builds on the basic one by adding a powerful feature: caching. It prevents the application from asking the Gaia Node the same question repeatedly by storing previous answers in Redis.
- Gaia Node: The chat functionality remains, but now the application checks Redis before asking the Gaia Node a question.
- Redis (Caching):
- Check: Before calling the Gaia Node, it calculates a unique key for the current question/conversation history and checks if an answer for this key exists in Redis.
- Cache Hit: If found, it displays the answer directly from Redis, skipping the Gaia Node call (much faster!).
- Cache Miss: If not found, it calls the Gaia Node, gets the answer, stores the answer in Redis under the calculated key (for future use), and then displays it (see the cache-aside sketch below).
- UI: The chat works the same, but now it will indicate if a response came from the cache. It also shows how many items are currently cached in Redis.
- Using Redis as a fast cache to store and retrieve LLM responses.
- Improving application speed and reducing load on the Gaia Node by avoiding duplicate requests.
- Generating unique keys for cache entries based on input.
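Put together, the check / hit / miss flow described above follows the classic cache-aside pattern. The sketch below shows one way to do it; the key prefix, the SHA-256 hashing of the conversation, and the one-hour TTL are illustrative assumptions rather than the exact choices in `intermediate.py`:

```python
# Sketch: cache-aside pattern for LLM responses (key scheme and TTL are assumptions)
import hashlib
import json
import os

import redis
from langchain_openai import ChatOpenAI

r = redis.Redis.from_url(os.getenv("REDIS_URL", "redis://localhost:6379"), decode_responses=True)
llm = ChatOpenAI(
    base_url=os.getenv("GAIA_NODE_URL"),
    api_key=os.getenv("GAIA_API_KEY"),
    model=os.getenv("GAIA_MODEL_NAME"),
)

def cached_chat(history, ttl_seconds=3600):
    """history: list of {"role": ..., "content": ...} dicts. Returns (answer, from_cache)."""
    # 1. Derive a deterministic key from the conversation so far.
    digest = hashlib.sha256(json.dumps(history, sort_keys=True).encode()).hexdigest()
    key = f"gaia:cache:{digest}"

    # 2. Cache hit: return the stored answer and skip the Gaia Node call.
    cached = r.get(key)
    if cached is not None:
        return cached, True

    # 3. Cache miss: ask the Gaia Node, store the answer with a TTL, return it.
    answer = llm.invoke(history).content
    r.setex(key, ttl_seconds, answer)
    return answer, False
```

The cached-item count shown in the UI can come from something as simple as counting keys under the chosen prefix, and a TTL keeps stale answers from accumulating indefinitely.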
Redis acts as a high-speed data layer alongside your Gaia Node. The basic example shows simple data storage, while the intermediate example shows how this fast storage can be used strategically (caching) to make your AI application more efficient and responsive.