This guide walks through how to build a retrieval-augmented generation (RAG) pipeline using LlamaParse (from LlamaIndex) to parse documents and Snowflake Cortex for text splitting, search, and generation.
LlamaParse is a genAI-native document parsing platform, built with LLMs and for LLM use cases. Its main goal is to parse and clean your data, ensuring it is high quality before you pass it to any downstream LLM use case such as RAG or agents.
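To give a feel for the parsing step, here is a minimal sketch using the `llama-parse` Python package. The file name `document.pdf`, the `result_type` choice, and the API-key handling are placeholders; the QuickStart Guide covers configuration in full.

```python
import os
from llama_parse import LlamaParse

# Assumes LLAMA_CLOUD_API_KEY is set in the environment (placeholder setup).
parser = LlamaParse(
    api_key=os.environ["LLAMA_CLOUD_API_KEY"],
    result_type="markdown",  # parsed output as Markdown; "text" is also available
)

# Parse a local PDF; returns a list of Document objects with .text and metadata.
documents = parser.load_data("document.pdf")
print(documents[0].text[:500])
```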
In this guide, we will:
- Parse a PDF with LlamaParse
- Load the parsed data into Snowflake
- Split the text for search
- Create a Cortex Search service
- Retrieve relevant context
- Build a simple RAG pipeline for Q&A on your data (a rough end-to-end sketch follows this list)
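The sketch below strings these steps together into a rough pipeline. It is illustrative rather than definitive: the table and service names (`RAW_DOCS`, `DOC_CHUNKS`, `DOC_SEARCH`), the warehouse, the chunking parameters, and the model choice are all assumptions, and it presumes the `snowflake-snowpark-python` and `snowflake` (Snowflake Python API) packages plus your own connection details. The QuickStart Guide is the authoritative walkthrough.

```python
from snowflake.core import Root
from snowflake.snowpark import Session

# Placeholder connection details; replace with your own account settings.
session = Session.builder.configs({
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "COMPUTE_WH",
    "database": "MY_DB",
    "schema": "MY_SCHEMA",
}).create()

# 1. Split the parsed text (assumed already loaded into RAW_DOCS(TEXT)) into
#    chunks with Cortex's recursive character splitter (512 chars, 64 overlap).
session.sql("""
    CREATE OR REPLACE TABLE DOC_CHUNKS AS
    SELECT c.value::string AS chunk
    FROM RAW_DOCS,
         LATERAL FLATTEN(
             input => SNOWFLAKE.CORTEX.SPLIT_TEXT_RECURSIVE_CHARACTER(TEXT, 'markdown', 512, 64)
         ) c
""").collect()

# 2. Create a Cortex Search service over the chunks.
#    (Initial indexing is asynchronous and may take a moment before queries succeed.)
session.sql("""
    CREATE OR REPLACE CORTEX SEARCH SERVICE DOC_SEARCH
        ON chunk
        WAREHOUSE = COMPUTE_WH
        TARGET_LAG = '1 hour'
        AS (SELECT chunk FROM DOC_CHUNKS)
""").collect()

# 3. Retrieve the most relevant chunks for a question.
question = "What does the document say about quarterly revenue?"
svc = (
    Root(session)
    .databases["MY_DB"]
    .schemas["MY_SCHEMA"]
    .cortex_search_services["DOC_SEARCH"]
)
results = svc.search(query=question, columns=["chunk"], limit=3).results
context = "\n\n".join(r["chunk"] for r in results)

# 4. Generate an answer grounded in the retrieved context.
prompt = f"Answer the question using only this context:\n{context}\n\nQuestion: {question}"
answer = session.sql(
    "SELECT SNOWFLAKE.CORTEX.COMPLETE('mistral-large', ?) AS answer",
    params=[prompt],
).collect()[0]["ANSWER"]
print(answer)
```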
For prerequisites, environment setup, and step-by-step instructions, please refer to the QuickStart Guide.