This project is a collection of useful Retrieval-Augmented Generation (RAG) and other LLM implementations built with LangFlow. The goal is to provide templates for different use cases, making it easier to implement AI solutions without starting from scratch.
- Overview
- Project Structure
- RAG Implementations
- Getting Started
- Prerequisites
- Installation
- Usage
- Contributing
- License
## Overview

This repository contains various AI implementation templates using LangFlow, with a focus on RAG (Retrieval-Augmented Generation) patterns. Each implementation is designed to be a standalone template that can be adapted for specific use cases.
## Project Structure

The repository contains a set of two LangFlow models, each in its own directory.

## RAG Implementations

### Ollama + ChromaDB RAG

This implementation combines Ollama embeddings with a ChromaDB vector store to build RAG applications.
Key components:
- ChromaDBVectorStore-DataLoader.json: Handles data loading and vector storage
- OllamaWithEmbeddings-RAG.json: Implements the RAG pattern using Ollama for embeddings
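The two JSON files correspond to a load stage (embed documents and store the vectors) and a query stage (embed the question, retrieve the closest documents, and feed them to the LLM). Below is a minimal in-memory sketch of that pattern; the toy bag-of-words "embedding" and the plain Python list are stand-ins for Ollama embeddings and the ChromaDB collection, not the actual components.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; the real flow uses Ollama embeddings.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# --- Load stage (ChromaDBVectorStore-DataLoader.json) ---
store = []  # stands in for the ChromaDB collection
for doc in ["LangFlow builds LLM pipelines visually",
            "ChromaDB stores embeddings for retrieval"]:
    store.append((doc, embed(doc)))

# --- Query stage (OllamaWithEmbeddings-RAG.json) ---
def retrieve(question, k=1):
    # Rank stored documents by similarity to the question embedding.
    q = embed(question)
    ranked = sorted(store, key=lambda d: cosine(q, d[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

context = retrieve("Where are embeddings stored?")
# In the real flow, this retrieved context is injected into the Ollama prompt.
```

In the actual flow the same two stages are wired visually in LangFlow, with the vector store persisted by ChromaDB between runs.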
### Currency API Request

Makes a GET request to the CBR (Central Bank of Russia) site to retrieve a list of currency rates in XML. You can then ask questions about currencies and their rates.
Key components:
- CurrencyAPIRequest.json: Handles data loading and Ollama requests
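The CBR publishes daily rates as XML (commonly via its `XML_daily` endpoint) with one `Valute` element per currency, a `Nominal` (number of units quoted), and a `Value` that uses a comma as the decimal separator. A sketch of parsing that format, shown against an inline sample so it runs offline; in the flow itself, the fetched XML is passed to Ollama as context for answering rate questions:

```python
import xml.etree.ElementTree as ET

# Inline sample mimicking the CBR daily-rates XML format; the real flow
# fetches this document from the CBR site over HTTP.
SAMPLE = """<ValCurs Date="02.05.2025" name="Foreign Currency Market">
  <Valute ID="R01235">
    <NumCode>840</NumCode><CharCode>USD</CharCode>
    <Nominal>1</Nominal><Name>US Dollar</Name><Value>92,50</Value>
  </Valute>
  <Valute ID="R01239">
    <NumCode>978</NumCode><CharCode>EUR</CharCode>
    <Nominal>1</Nominal><Name>Euro</Name><Value>99,10</Value>
  </Valute>
</ValCurs>"""

def parse_rates(xml_text):
    """Map ISO currency codes to their per-unit RUB rate."""
    root = ET.fromstring(xml_text)
    rates = {}
    for valute in root.iter("Valute"):
        code = valute.findtext("CharCode")
        nominal = int(valute.findtext("Nominal"))
        # CBR values use a comma decimal separator, e.g. "92,50".
        value = float(valute.findtext("Value").replace(",", "."))
        rates[code] = value / nominal  # rate for one unit of the currency
    return rates

rates = parse_rates(SAMPLE)
```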
## Getting Started

These instructions will help you get started with the templates in this repository.
### Prerequisites

- LangFlow installed
- Ollama service running
- ChromaDB service accessible
- Python 3.8 or higher (if required by your specific implementation)
### Installation

1. Clone this repository:

   ```
   git clone https://github.com/pekt00p/LangFlowProjects.git
   ```

2. Navigate to the desired implementation directory, e.g.:

   ```
   cd OllamaChromaDBRAG
   ```

3. Import the JSON files into your LangFlow instance.
### Usage

- Open LangFlow in your browser
- Import the desired JSON template
- Configure the components as needed for your use case
- Run the flow
*Screenshot: LangFlow interface with the RAG implementation.*

*Screenshot: ChromaDB vector store configuration.*
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
## License

This project is licensed under the MIT License - see the LICENSE file for details.
