README.md (+2 -2)
@@ -2,14 +2,14 @@
Please visit http://ai-cookbook.io for the accompanying documentation for this repo.
- This repo provides [learning materials](https://ai-cookbook.io/) and [production-ready code](https://github.com/databricks/genai-cookbook/tree/main/rag_app_sample_code) to build a **high-quality RAG application** using Databricks. The [Mosaic Generative AI Cookbook](https://ai-cookbook.io/) provides:
+ This repo provides [learning materials](https://ai-cookbook.io/) and [production-ready code](https://github.com/databricks/genai-cookbook/tree/v0.2.0/agent_app_sample_code) to build a **high-quality RAG application** using Databricks. The [Mosaic Generative AI Cookbook](https://ai-cookbook.io/) provides:
- A conceptual overview and deep dive into various Generative AI design patterns, such as Prompt Engineering, Agents, RAG, and Fine Tuning
- An overview of Evaluation-Driven development
- The theory of every parameter/knob that impacts quality
- How to root cause quality issues and determine which knobs are relevant to experiment with for your use case
- Best practices for how to experiment with each knob
- The [provided code](https://github.com/databricks/genai-cookbook/tree/main/rag_app_sample_code) is intended for use with the Databricks platform. Specifically:
+ The [provided code](https://github.com/databricks/genai-cookbook/tree/v0.2.0/agent_app_sample_code) is intended for use with the Databricks platform. Specifically:
- [Mosaic AI Agent Framework](https://docs.databricks.com/en/generative-ai/retrieval-augmented-generation.html) which provides a fast developer workflow with enterprise-ready LLMops & governance
- [Mosaic AI Agent Evaluation](https://docs.databricks.com/en/generative-ai/agent-evaluation/index.html) which provides reliable quality measurement using proprietary AI-assisted LLM judges, with quality metrics grounded in human feedback collected through an intuitive web-based chat UI
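For orientation, invoking Agent Evaluation boils down to a single `mlflow.evaluate()` call. The following is a minimal sketch, assuming a Databricks workspace with the `databricks-agents` package and an MLflow version that supports `model_type="databricks-agent"`; the model URI and evaluation rows are placeholders, not cookbook values:

```python
# Hedged sketch: running Mosaic AI Agent Evaluation through mlflow.evaluate().
import mlflow
import pandas as pd

# Placeholder evaluation data with the request/expected_response schema.
eval_df = pd.DataFrame(
    {
        "request": ["How do I create a Vector Search index?"],
        "expected_response": ["Use the Databricks UI or SDK to create an index ..."],
    }
)

with mlflow.start_run():
    results = mlflow.evaluate(
        data=eval_df,
        model="models:/main.default.my_agent/1",  # assumed Unity Catalog model URI
        model_type="databricks-agent",            # routes to Agent Evaluation's LLM judges
    )
    print(results.metrics)                        # aggregate judge metrics
```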
genai_cookbook/10-min-demo/Mosaic-AI-Agents-10-Minute-Demo.ipynb (+4 -1)
@@ -677,7 +677,7 @@
"\n",
"## Browse the code samples\n",
"\n",
- "Open the `./genai-cookbook/rag_app_sample_code` folder that was synced to your Workspace by this notebook. Documentation [here](https://ai-cookbook.io/nbs/6-implement-overview.html).\n",
+ "Open the `./genai-cookbook/agent_app_sample_code` folder that was synced to your Workspace by this notebook. Documentation [here](https://ai-cookbook.io/nbs/6-implement-overview.html).\n",
"\n",
"## Read the [Generative AI Cookbook](https://ai-cookbook.io)!\n",
2. Data from your [requirements](/nbs/5-hands-on-requirements.md#requirements-questions) is available in your [Lakehouse](https://www.databricks.com/blog/2020/01/30/what-is-a-data-lakehouse.html) inside a Unity Catalog [volume](https://docs.databricks.com/en/connect/unity-catalog/volumes.html)<!-- or [Delta Table](https://docs.databricks.com/en/delta/index.html)-->
- You can find all of the sample code referenced throughout this section [here](https://github.com/databricks/genai-cookbook/tree/main/rag_app_sample_code).
+ You can find all of the sample code referenced throughout this section [here](https://github.com/databricks/genai-cookbook/tree/v0.2.0/agent_app_sample_code).
```
**Expected outcome**
@@ -62,20 +62,9 @@ By default, the POC uses the open source models available on [Mosaic AI Foundati
- 1. **Open the POC code folder within [`A_POC_app`](https://github.com/databricks/genai-cookbook/tree/main/rag_app_sample_code/A_POC_app) based on your type of data:**
+ 1. **Open the [`agent_app_sample_code`](https://github.com/databricks/genai-cookbook/tree/v0.2.0/agent_app_sample_code) folder**
- If your data doesn't meet one of the above requirements, you can customize the parsing function (`parser_udf`) within `02_poc_data_pipeline` in the above POC directories to work with your file types.
+ If your data doesn't meet one of the above requirements, you can customize the parsing function (`file_parser`) within `02_data_pipeline` in the above directory to work with your file types.
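A custom parser might look like the sketch below. The cookbook's actual `file_parser` signature is not shown in this diff, so the argument names and return shape here are assumptions; adjust both to match the real contract in `02_data_pipeline`:

```python
# Hypothetical custom parser: raw bytes in, parsed text plus a status out.
from typing import TypedDict

class ParserReturnValue(TypedDict):  # assumed return shape
    doc_content: str
    parser_status: str

def file_parser(raw_doc_contents_bytes: bytes, doc_path: str) -> ParserReturnValue:
    """Parse one raw document into text; swap in your own library per file type."""
    try:
        if doc_path.lower().endswith(".html"):
            from bs4 import BeautifulSoup  # assumes beautifulsoup4 is installed
            text = BeautifulSoup(raw_doc_contents_bytes, "html.parser").get_text()
        else:
            text = raw_doc_contents_bytes.decode("utf-8", errors="ignore")
        return {"doc_content": text, "parser_status": "SUCCESS"}
    except Exception as e:
        return {"doc_content": "", "parser_status": f"ERROR: {e}"}
```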
Inside the POC folder, you will see the following notebooks:
@@ -84,27 +73,23 @@ By default, the POC uses the open source models available on [Mosaic AI Foundati
```
```{tip}
- The notebooks referenced below are relative to the specific POC you've chosen. For example, if you see a reference to `00_config` and you've chosen `pdf_uc_volume`, you'll find the relevant `00_config` notebook at [`A_POC_app/pdf_uc_volume/00_config`](https://github.com/databricks/genai-cookbook/blob/main/rag_app_sample_code/A_POC_app/pdf_uc_volume/00_config.py).
+ The notebooks referenced below are relative to the [`agent_app_sample_code`](https://github.com/databricks/genai-cookbook/tree/v0.2.0/agent_app_sample_code) directory. For example, if you see a reference to `00_global_config`, you'll find that notebook at [`00_global_config`](https://github.com/databricks/genai-cookbook/blob/v0.2.0/agent_app_sample_code/00_global_config.py).
```
<br/>
2. **Optionally, review the default parameters**
- Open the `00_config` Notebook within the POC directory you chose above to view the POC application's default parameters for the data pipeline and RAG chain.
+ Open the `00_global_config` Notebook within the directory to view the POC application's default parameters for the data pipeline and RAG chain.
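As an illustration only, such a global config typically pins Unity Catalog locations and names. `AGENT_NAME` and `EVALUATION_SET_FQN` appear elsewhere in this changeset; the values and remaining names below are assumed placeholders, not the notebook's actual defaults:

```python
# Illustrative global config values -- placeholders, not the cookbook's defaults.
UC_CATALOG = "main"                  # Unity Catalog catalog for the app's assets
UC_SCHEMA = "rag_demo"               # schema holding tables, models, and indexes
AGENT_NAME = "my_poc_agent"          # name used when registering the agent model
EVALUATION_SET_FQN = f"{UC_CATALOG}.{UC_SCHEMA}.evaluation_set"  # eval Delta table
```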
```{note}
**Important:** our recommended default parameters are by no means perfect, nor are they intended to be. Rather, they are a place to start from - the next steps of our workflow guide you through iterating on these parameters.
```
- 3. **Validate the configuration**
-
- Run the `01_validate_config` to check that your configuration is valid and all resources are available. You will see a `rag_chain_config.yaml` file appear in your directory - we will use this in step 4 to deploy the application.
-
- 4. **Run the data pipeline**
+ 3. **Run the data pipeline**
- The POC data pipeline is a Databricks Notebook based on Apache Spark. Open the `02_poc_data_pipeline` Notebook and press Run All to execute the pipeline. The pipeline will:
+ The POC data pipeline is a Databricks Notebook based on Apache Spark. Open the `02_data_pipeline` Notebook and press Run All to execute the pipeline. The pipeline will:
1. Load the raw documents from the UC Volume
2. Parse each document, saving the results to a Delta Table
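A minimal Spark sketch of those two steps follows; the Volume path, table name, and stand-in parser are placeholders, and the cookbook's real pipeline also chunks and indexes the parsed text:

```python
# Runs in a Databricks notebook, where `spark` is provided by the runtime.
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

raw_df = (
    spark.read.format("binaryFile")                # file bytes + path metadata
    .load("/Volumes/main/rag_demo/source_docs/")   # assumed UC Volume path
)

# Stand-in parser; a real pipeline would dispatch on file type.
parse_udf = F.udf(lambda b: b.decode("utf-8", errors="ignore"), StringType())

parsed_df = raw_df.select(
    F.col("path").alias("doc_uri"),
    parse_udf(F.col("content")).alias("doc_content"),
)
parsed_df.write.mode("overwrite").saveAsTable("main.rag_demo.parsed_docs")
```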
@@ -142,7 +127,7 @@ The notebooks referenced below are relative to the specific POC you've chosen. F
The POC Chain uses MLflow code-based logging. To understand more about code-based logging, visit the [docs](https://docs.databricks.com/generative-ai/create-log-agent.html#code-based-vs-serialization-based-logging).
```
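In its simplest form, code-based ("models from code") logging looks roughly like the sketch below. `chain.py` is a hypothetical file that builds the chain and registers it with `mlflow.models.set_model(...)`; assumes a recent MLflow version:

```python
# Hedged sketch of MLflow code-based logging: the chain is logged from a
# source file rather than a serialized in-memory object.
import mlflow

with mlflow.start_run():
    logged_chain = mlflow.pyfunc.log_model(
        python_model="chain.py",    # path to code, not a pickled object
        artifact_path="agent",
    )
    print(logged_chain.model_uri)   # use this URI to register/deploy the chain
```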
- 1. Open the `03_deploy_poc_to_review_app` Notebook
+ 1. Open the `03_agent_proof_of_concept` Notebook
2. Run each cell of the Notebook.
@@ -155,7 +140,7 @@ The notebooks referenced below are relative to the specific POC you've chosen. F
4. Modify the default instructions to be relevant to your use case. These are displayed in the Review App.
```python
- instructions_to_reviewer = f"""## Instructions for Testing the {RAG_APP_NAME}'s Initial Proof of Concept (PoC)
+ instructions_to_reviewer = f"""## Instructions for Testing the {AGENT_NAME}'s Initial Proof of Concept (PoC)
Your inputs are invaluable for the development team. By providing detailed feedback and corrections, you help us fix issues and improve the overall quality of the application. We rely on your expertise to identify any gaps or areas needing enhancement.
@@ -170,7 +155,7 @@ The notebooks referenced below are relative to the specific POC you've chosen. F
- Carefully review each document that the system returns in response to your question.
- Use the thumbs up/down feature to indicate whether the document was relevant to the question asked. A thumbs up signifies relevance, while a thumbs down indicates the document was not useful.
- Thank you for your time and effort in testing {RAG_APP_NAME}. Your contributions are essential to delivering a high-quality product to our end users."""
+ Thank you for your time and effort in testing {AGENT_NAME}. Your contributions are essential to delivering a high-quality product to our end users."""
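Downstream of this notebook, deploying to the Review App and attaching these instructions might look like the following sketch. It assumes the `databricks-agents` package; the model name and version are placeholders, and the attribute names should be verified against your installed version:

```python
# Hedged sketch: deploy the registered agent and set reviewer instructions.
from databricks import agents

uc_model_name = "main.rag_demo.my_poc_agent"   # assumed UC model name
deployment = agents.deploy(uc_model_name, model_version=1)
agents.set_review_instructions(uc_model_name, instructions_to_reviewer)
print(deployment.review_app_url)               # share this URL with stakeholders
```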
genai_cookbook/nbs/5-hands-on-curate-eval-set.md (+3 -3)
@@ -10,9 +10,9 @@
*Time varies based on the quality of the responses provided by your stakeholders. If the responses are messy or contain lots of irrelevant queries, you will need to spend more time filtering and cleaning the data.*
- You can find all of the sample code referenced throughout this section [here](https://github.com/databricks/genai-cookbook/tree/main/rag_app_sample_code).
+ You can find all of the sample code referenced throughout this section [here](https://github.com/databricks/genai-cookbook/tree/v0.2.0/agent_app_sample_code).
```
#### **Overview & expected outcome**
@@ -49,6 +49,6 @@ Databricks recommends that your Evaluation Set contain at least 30 questions to
2. Inspect the Evaluation Set to understand the data that is included. You need to validate that your Evaluation Set contains a representative and challenging set of questions. Adjust the Evaluation Set as required.
- 3. By default, your evaluation set is saved to the Delta Table configured in `EVALUATION_SET_FQN` in the [`00_global_config`](https://github.com/databricks/genai-cookbook/blob/main/rag_app_sample_code/00_global_config.py) Notebook.
+ 3. By default, your evaluation set is saved to the Delta Table configured in `EVALUATION_SET_FQN` in the [`00_global_config`](https://github.com/databricks/genai-cookbook/blob/v0.2.0/agent_app_sample_code/00_global_config.py) Notebook.
> **Next step:** Now that you have an evaluation set, use it to [evaluate the POC app's](./5-hands-on-evaluate-poc.md) quality/cost/latency.
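For orientation, persisting curated rows to that table could look like this sketch. The `request`/`expected_response` column names mirror those used elsewhere in this changeset; the exact schema comes from the cookbook, and `EVALUATION_SET_FQN` is defined in `00_global_config`:

```python
# Hedged sketch: append curated rows to the evaluation set Delta table.
# Runs in a Databricks notebook, where `spark` is provided by the runtime.
import pandas as pd

eval_pdf = pd.DataFrame(
    [{"request": "How do I size my vector index?", "expected_response": "..."}]
)
spark.createDataFrame(eval_pdf).write.mode("append").saveAsTable(EVALUATION_SET_FQN)
```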
- You can find all of the sample code referenced throughout this section [here](https://github.com/databricks/genai-cookbook/tree/main/rag_app_sample_code).
+ You can find all of the sample code referenced throughout this section [here](https://github.com/databricks/genai-cookbook/tree/v0.2.0/agent_app_sample_code).
genai_cookbook/nbs/5-hands-on-improve-quality-step-1-generation.md (+1 -1)
@@ -10,7 +10,7 @@ The following is a step-by-step process to address **generation quality** issues
- 1. Open the [`B_quality_iteration/01_root_cause_quality_issues`](https://github.com/databricks/genai-cookbook/blob/main/rag_app_sample_code/B_quality_iteration/01_root_cause_quality_issues.py) Notebook
+ 1. Open the [`05_evaluate_poc_quality`](https://github.com/databricks/genai-cookbook/blob/v0.2.0/agent_app_sample_code/05_evaluate_poc_quality.py) Notebook
2. Use the queries to load MLflow traces of the records that have generation quality issues.
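One plausible way to pull those traces for inspection is sketched below. The experiment ID is a placeholder, it assumes an MLflow version that provides `mlflow.search_traces`, and the column names are assumptions to verify against your results:

```python
# Hedged sketch: fetch MLflow traces as a pandas DataFrame for inspection.
import mlflow

traces_df = mlflow.search_traces(
    experiment_ids=["<your-experiment-id>"],   # placeholder
    max_results=100,
)
print(traces_df[["request", "response"]].head())  # assumed column names
```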
genai_cookbook/nbs/5-hands-on-improve-quality-step-1-retrieval.md (+1 -1)
@@ -11,7 +11,7 @@ Retrieval quality is arguably the most important component of a RAG application.
Here's a step-by-step process to address **retrieval quality** issues:
- 1. Open the [`B_quality_iteration/01_root_cause_quality_issues`](https://github.com/databricks/genai-cookbook/blob/main/rag_app_sample_code/B_quality_iteration/01_root_cause_quality_issues.py) Notebook
+ 1. Open the [`05_evaluate_poc_quality`](https://github.com/databricks/genai-cookbook/blob/v0.2.0/agent_app_sample_code/05_evaluate_poc_quality.py) Notebook
2. Use the queries to load MLflow traces of the records that have retrieval quality issues.
- You can find all of the sample code referenced throughout this section [here](https://github.com/databricks/genai-cookbook/tree/main/rag_app_sample_code).
+ You can find all of the sample code referenced throughout this section [here](https://github.com/databricks/genai-cookbook/tree/v0.2.0/agent_app_sample_code).
```
#### **Overview**
@@ -31,7 +31,7 @@ Each row in your evaluation set will be tagged as follows:
The approach depends on whether your evaluation set contains the ground-truth responses to your questions, stored in `expected_response`. If you have `expected_response` available, use the first table below. Otherwise, use the second table.
- 1. Open the [`B_quality_iteration/01_root_cause_quality_issues`](https://github.com/databricks/genai-cookbook/blob/main/rag_app_sample_code/B_quality_iteration/01_root_cause_quality_issues.py) Notebook
+ 1. Open the [`05_evaluate_poc_quality`](https://github.com/databricks/genai-cookbook/blob/v0.2.0/agent_app_sample_code/05_evaluate_poc_quality.py) Notebook
2. Run the cells that are relevant to your use case, e.g., depending on whether you have `expected_response`
3. Review the output tables to determine the most frequent root cause in your application
4. For each root cause, follow the steps below to further debug and identify potential fixes:
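As a sketch of step 3, the per-row judge results from an Agent Evaluation run can be aggregated to surface the most frequent root cause. The table and column names below are assumptions based on Agent Evaluation's output format; verify them against your own run:

```python
# Hedged sketch: count judge outcomes across the evaluation set.
# `results` is the return value of the mlflow.evaluate(...) call shown earlier.
per_row = results.tables["eval_results"]               # per-question judge output
rating_col = "response/llm_judged/correctness/rating"  # assumed column name
print(per_row[rating_col].value_counts(dropna=False))  # most frequent outcomes
```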