From f8674b4b2beba927bd2e8f95b86796fc2857035b Mon Sep 17 00:00:00 2001
From: Jael Gu
Date: Fri, 12 Jul 2024 18:23:57 +0800
Subject: [PATCH] Update README

Signed-off-by: Jael Gu
---
 .../apps/rag_search_with_milvus/README.md | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/bootcamp/tutorials/quickstart/apps/rag_search_with_milvus/README.md b/bootcamp/tutorials/quickstart/apps/rag_search_with_milvus/README.md
index f6d89385a..deb41e3c0 100644
--- a/bootcamp/tutorials/quickstart/apps/rag_search_with_milvus/README.md
+++ b/bootcamp/tutorials/quickstart/apps/rag_search_with_milvus/README.md
@@ -29,14 +29,14 @@ $ cd bootcamp/bootcamp/tutorials/quickstart/app/rag_search_with_milvus
 
 ```
 OPENAI_API_KEY=************
-MILVUS_CONNECTION_URI=./milvus_demo.db
+MILVUS_ENDPOINT=./milvus_demo.db
 # MILVUS_TOKEN=************
 COLLECTION_NAME=my_rag_collection
 ```
 
 - `OPENAI_API_KEY`: We will use OpenAI for LLM capabilities. You need to prepare the [OpenAI API Key](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key).
-- `MILVUS_CONNECTION_URI`: The URI to connect Milvus or Zilliz Cloud service. By default, we use a local file "./milvus_demo.db" for convenience, as it automatically utilizes [Milvus Lite](https://milvus.io/docs/milvus_lite.md) to store all data at local.
+- `MILVUS_ENDPOINT`: The URI to connect to the Milvus or Zilliz Cloud service. By default, we use a local file "./milvus_demo.db" for convenience, as it automatically utilizes [Milvus Lite](https://milvus.io/docs/milvus_lite.md) to store all data locally.
 
 > - If you have large scale of data, you can set up a more performant Milvus server on docker or kubernetes. In this setup, please use the server uri, e.g. http://localhost:19530, as your uri.
 >
 > - If you want to use Zilliz Cloud, the fully managed cloud service for Milvus, adjust the uri and token, which correspond to the Public Endpoint and Api key in Zilliz Cloud.
@@ -90,26 +90,27 @@ Run the Streamlit application:
 ```bash
 $ streamlit run app.py
 ```
+
 ### Example Usage
+**Step 1:** Enter your question in the chat and click the 'submit' button.
 <img src="..." alt="Description of Image" ...>
-Step 1: Enter your question in the chat and click on 'submit' button.
+**Step 2:** Response generated by LLM based on the prompt.
 <img src="..." alt="Description of Image" ...>
-Step 2: Response generated by LLM based on the prompt.
+**Step 3:** Top 3 retrieved original quotes from text data are listed on the left along with their distances, indicating how related the quote and the prompt are.
 <img src="..." alt="Description of Image" ...>
-Step 3: Top 3 retrieved original quotes from text data are listed on the left along with their distances, indicating how related the quote and the prompt are.
@@ -125,6 +126,6 @@ $ streamlit run app.py
 
 - **app.py:** The main file to run application by Streamlit, where the user interface is defined and the RAG chatbot is presented.
 - **encoder.py:** The text encoder is able to convert text inputs to text embeddings using a pretrained model.
-- **insert.py:** This script handles the retrieving text data required for the application and inserting text embeddings into data collection.
+- **insert.py:** This script creates the collection and inserts document chunks with embeddings into the collection.
 - **milvus_utils.py:** Functions to interact with the Milvus database, such as creating collection and retrieving search results.
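
As context for the renamed variable: a value like `MILVUS_ENDPOINT` (plus the optional `MILVUS_TOKEN`) is the kind of URI typically passed straight to `pymilvus.MilvusClient`. The sketch below only illustrates that wiring; it assumes the `.env` file is loaded with `python-dotenv` and is not the repository's actual connection code.

```python
import os

from dotenv import load_dotenv      # assumption: the app reads .env via python-dotenv
from pymilvus import MilvusClient

load_dotenv()  # exposes MILVUS_ENDPOINT / MILVUS_TOKEN / COLLECTION_NAME to os.getenv

# A local file path such as "./milvus_demo.db" runs Milvus Lite; an http URI such as
# "http://localhost:19530" or a Zilliz Cloud Public Endpoint reaches a server deployment.
client = MilvusClient(
    uri=os.getenv("MILVUS_ENDPOINT", "./milvus_demo.db"),
    token=os.getenv("MILVUS_TOKEN", ""),  # leave empty for Milvus Lite or unsecured servers
)
```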
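
The updated description of `insert.py` (create the collection, then insert document chunks with their embeddings) corresponds to a flow roughly like the following. This is a minimal sketch, not the script itself: `embed()` and `EMBEDDING_DIM` are placeholders standing in for whatever pretrained encoder `encoder.py` wraps.

```python
import os

from pymilvus import MilvusClient

client = MilvusClient(uri=os.getenv("MILVUS_ENDPOINT", "./milvus_demo.db"))
collection = os.getenv("COLLECTION_NAME", "my_rag_collection")

EMBEDDING_DIM = 768  # placeholder; must match the encoder's output dimension


def embed(text: str) -> list[float]:
    """Placeholder for the project's encoder; returns a dense vector for `text`."""
    raise NotImplementedError


# Recreate the collection, then insert one row per document chunk.
if client.has_collection(collection):
    client.drop_collection(collection)
client.create_collection(collection_name=collection, dimension=EMBEDDING_DIM)

chunks = ["first document chunk ...", "second document chunk ..."]  # placeholder data
rows = [{"id": i, "vector": embed(chunk), "text": chunk} for i, chunk in enumerate(chunks)]
client.insert(collection_name=collection, data=rows)
```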
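
Likewise, the "top 3 retrieved quotes with distances" shown in Step 3 of the Example Usage map onto a standard Milvus similarity search along the lines of the sketch below, reusing the hypothetical `client`, `collection`, and `embed` from above; the project's real query logic lives in `milvus_utils.py`.

```python
# Hypothetical question; reuses `client`, `collection`, and `embed` from the sketches above.
results = client.search(
    collection_name=collection,
    data=[embed("What does the text say about courage?")],
    limit=3,                     # top 3 quotes, as in the Example Usage screenshots
    output_fields=["text"],
)

for hit in results[0]:
    # Each hit reports a distance (how related the quote and the prompt are)
    # and the requested output fields under "entity".
    print(hit["distance"], hit["entity"]["text"])
```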