
Commit 702a8ad

committed: mend

2 parents: d62ff2e + e5ca1ea

File tree

3 files changed: +10 −4 lines changed


Diff for: Readme.md (+3 −3)

@@ -148,8 +148,8 @@ Open [participant_agent/graph.py](./participant_agent/graph.py)
 
 ### Note: instructor will be going through this in detail if you get confused.
 
-- Uncomment lines 19-41
-- Delete line 42 (graph = None) - this is just a placeholder.
+- Uncomment lines 26-47
+- Delete line 48 (graph = None) - this is just a placeholder.
 - Define node 1, the agent, by passing a label `"agent"` and the code to execute at that node `call_tool_model`
 - Define node 2, the tool node, by passing the label `"tools"` and the code to be executed at that node `tool_node`
 - Set the entrypoint for your graph at `"agent"`
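The node/edge wiring described in that hunk can be sketched without LangGraph installed. The following is a toy stand-in (a plain dict of node callables with a run loop); `call_tool_model` and `tool_node` here are hypothetical stubs, not the workshop's actual graph.py:

```python
# Toy sketch of the agent/tools wiring above. The real workshop uses
# LangGraph's StateGraph; a plain dict stands in so the flow is visible.
# `call_tool_model` and `tool_node` are hypothetical stubs.

def call_tool_model(state):
    # The "agent" node: decide whether a tool call is still needed.
    state["trace"].append("agent")
    state["next"] = "end" if state.get("tool_done") else "tools"
    return state

def tool_node(state):
    # The "tools" node: execute the tool, then hand control back to the agent.
    state["trace"].append("tools")
    state["tool_done"] = True
    state["next"] = "agent"
    return state

nodes = {"agent": call_tool_model, "tools": tool_node}  # label -> callable
entrypoint = "agent"                                    # entry point of the graph

def run(state):
    current = entrypoint
    while current != "end":
        state = nodes[current](state)
        current = state["next"]
    return state

result = run({"trace": []})
print(result["trace"])  # ['agent', 'tools', 'agent']
```

The loop makes the conditional edge explicit: the agent node routes to the tool node until the tool has run, then terminates.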
@@ -179,7 +179,7 @@ Ex: `restock formula tool used specifically for calculating the amount of food a
 
 ### Scenario 2 sub-problem: structured output
 
-At this stage, you may that your agent is returning a "correct" answer to the question but not in the **format** the test script expects. The test script expects answers to multiple choice questions to be the single character "A", "B", "C", or "D". This may seem contrived, but often in production scenarios agents will be expected to work with existing deterministic systems that will require specific schemas. For this reason, LangChain supports an LLM call `with_structured_output` so that response can come from a predictable structure.
+At this stage, you may notice that your agent is returning a "correct" answer to the question but not in the **format** the test script expects. The test script expects answers to multiple choice questions to be the single character "A", "B", "C", or "D". This may seem contrived, but often in production scenarios agents will be expected to work with existing deterministic systems that will require specific schemas. For this reason, LangChain supports an LLM call `with_structured_output` so that response can come from a predictable structure.
 
 ### Steps:
 - Open [participant_agent/utils/state.py](participant_agent/utils/state.py) and uncomment the multi_choice_response attribute on the state parameter. To this point our state has only had one attribute called `messages` but we are adding a specific field that we will add structured outputs to.
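The structured-output contract described in that hunk can be illustrated without LangChain: the test script wants exactly one of "A"–"D", so the agent's free-form answer must be coerced into that schema. A minimal pure-Python sketch, where the `MultiChoiceResponse` class and `coerce_answer` helper are hypothetical stand-ins for the schema you would pass to `with_structured_output` (not the workshop's state.py):

```python
import re
from dataclasses import dataclass

# Hypothetical stand-in for a structured-output schema: the response must be
# a single multiple-choice letter rather than free-form prose.

@dataclass
class MultiChoiceResponse:
    multi_choice_response: str  # expected: exactly one of "A", "B", "C", "D"

    def __post_init__(self):
        if self.multi_choice_response not in {"A", "B", "C", "D"}:
            raise ValueError(f"not a valid choice: {self.multi_choice_response!r}")

def coerce_answer(raw: str) -> MultiChoiceResponse:
    # Pull the first standalone A-D letter out of a free-form answer,
    # mimicking what a structured-output call guarantees by construction.
    match = re.search(r"\b([ABCD])\b", raw.upper())
    if match is None:
        raise ValueError(f"no A-D choice found in: {raw!r}")
    return MultiChoiceResponse(match.group(1))

print(coerce_answer("The correct answer is B, because...").multi_choice_response)  # B
```

With a real `with_structured_output` call the model is constrained to emit the schema directly, so no after-the-fact parsing like this is needed.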

Diff for: participant_agent/utils/router.py (+4 −1)

@@ -1,13 +1,16 @@
 import os
 
+from dotenv import load_dotenv
 from redisvl.extensions.router import Route, SemanticRouter
 from redisvl.utils.vectorize import HFTextVectorizer
 
+load_dotenv()
+
 REDIS_URL = os.environ.get("REDIS_URL", "redis://host.docker.internal:6379/0")
 
 # Semantic router
 blocked_references = [
-    "thinks about aliens",
+    "things about aliens",
     "corporate questions about agile",
     "anything about the S&P 500",
 ]
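The `load_dotenv()` added here reads `.env` before `REDIS_URL` is resolved. The `SemanticRouter` itself matches an incoming query against the blocked reference phrases by embedding similarity; the following toy stand-in shows the idea with bag-of-words cosine similarity instead of Redis and the HF vectorizer (the threshold value is made up, purely illustrative):

```python
import math
from collections import Counter

# Toy stand-in for redisvl's SemanticRouter: score a query against the
# blocked reference phrases with bag-of-words cosine similarity instead of
# real embeddings. The 0.5 threshold is an illustrative assumption.

blocked_references = [
    "things about aliens",
    "corporate questions about agile",
    "anything about the S&P 500",
]

def vectorize(text: str) -> Counter:
    # Crude "embedding": term-frequency vector over whitespace tokens.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def route(query: str, threshold: float = 0.5) -> str:
    best = max(cosine(vectorize(query), vectorize(ref)) for ref in blocked_references)
    return "blocked" if best >= threshold else "allowed"

print(route("tell me things about aliens"))   # blocked
print(route("how much food does my ox need")) # allowed
```

The real router replaces the token-count vectors with dense embeddings stored in Redis, which is what lets semantically similar (not just lexically similar) queries hit the blocked route.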

Diff for: participant_agent/utils/semantic_cache.py (+3)

@@ -1,7 +1,10 @@
 import os
 
+from dotenv import load_dotenv
 from redisvl.extensions.llmcache import SemanticCache
 
+load_dotenv()
+
 REDIS_URL = os.environ.get("REDIS_URL", "redis://localhost:6379/0")
 
 # Semantic cache
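The `SemanticCache` configured in this file returns a stored LLM response when a new prompt is semantically close to one seen before. A toy stand-in that captures the store/check shape without Redis, using `difflib` string similarity in place of vector distance (class name, method names, and threshold here are illustrative assumptions, not redisvl's API):

```python
import difflib

# Toy stand-in for a semantic cache: return a cached LLM response when a new
# prompt is "close enough" to one seen before. The real SemanticCache uses
# vector similarity in Redis; difflib string similarity is an assumption here.

class ToySemanticCache:
    def __init__(self, distance_threshold: float = 0.2):
        self.distance_threshold = distance_threshold
        self.entries: dict[str, str] = {}  # prompt -> response

    def store(self, prompt: str, response: str) -> None:
        self.entries[prompt.lower()] = response

    def check(self, prompt: str):
        for cached_prompt, response in self.entries.items():
            similarity = difflib.SequenceMatcher(
                None, prompt.lower(), cached_prompt
            ).ratio()
            if 1.0 - similarity <= self.distance_threshold:
                return response  # cache hit: skip the LLM call
        return None  # cache miss: caller invokes the LLM and store()s the result

cache = ToySemanticCache()
cache.store("How much food does an ox need per day?", "About 2% of body weight.")
print(cache.check("How much food does an ox need each day?"))  # hit: stored response
print(cache.check("What is the capital of France?"))           # None
```

The `distance_threshold` plays the same tuning role as in the real cache: tighter thresholds mean fewer false cache hits but more LLM calls.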
