Commit aea1d11

Example for Coherence LC4J Provider as well as Metrics Usage (#181)

Parent: f8c1fdf
File tree: 24 files changed, +1113 −22 lines

examples/integrations/langchain4j/coffee-shop-assistant-mp/README.md

Lines changed: 28 additions & 0 deletions

@@ -64,3 +64,31 @@ Here are some example queries you can try:
 - **"Can I order a coffee and a cookie?"**
   - *Expected Response:*
   *"Your order for a coffee and a chocolate chip cookie has been saved. The total is $5.00. Would you like anything else?"*
+
+## Try metrics
+
+Helidon provides `MetricsChatModelListener`, which generates metrics that follow the [OpenTelemetry Semantic Conventions for GenAI Metrics v1.36.0](https://github.com/open-telemetry/semantic-conventions/blob/v1.36.0/docs/gen-ai/gen-ai-metrics.md). This is done out of the box for Chat API calls. To view the captured metrics, use the following:
+
+```shell
+# Prometheus Format
+curl -s -X GET http://localhost:8080/metrics
+...
+# HELP gen_ai_client_token_usage_token Measures number of input and output tokens used
+# TYPE gen_ai_client_token_usage_token histogram
+gen_ai_client_token_usage_token{gen_ai_operation_name="chat",gen_ai_request_model="gpt-4o-mini",gen_ai_response_model="gpt-4o-mini-2024-07-18",gen_ai_token_type="output",scope="vendor",quantile="0.5",} 71.0
+...
+gen_ai_client_token_usage_token{gen_ai_operation_name="chat",gen_ai_request_model="gpt-4o-mini",gen_ai_response_model="gpt-4o-mini-2024-07-18",gen_ai_token_type="input",scope="vendor",quantile="0.5",} 156.0
+...
+# HELP gen_ai_client_token_usage_token_max Measures number of input and output tokens used
+# TYPE gen_ai_client_token_usage_token_max gauge
+gen_ai_client_token_usage_token_max{gen_ai_operation_name="chat",gen_ai_request_model="gpt-4o-mini",gen_ai_response_model="gpt-4o-mini-2024-07-18",gen_ai_token_type="output",scope="vendor",} 71.0
+gen_ai_client_token_usage_token_max{gen_ai_operation_name="chat",gen_ai_request_model="gpt-4o-mini",gen_ai_response_model="gpt-4o-mini-2024-07-18",gen_ai_token_type="input",scope="vendor",} 156.0
+...
+# HELP gen_ai_client_operation_duration_seconds_max GenAI operation duration
+# TYPE gen_ai_client_operation_duration_seconds_max gauge
+gen_ai_client_operation_duration_seconds_max{error_type="",gen_ai_operation_name="chat",gen_ai_request_model="gpt-4o-mini",gen_ai_response_model="gpt-4o-mini-2024-07-18",scope="vendor",} 2.0
+# HELP gen_ai_client_operation_duration_seconds GenAI operation duration
+# TYPE gen_ai_client_operation_duration_seconds histogram
+gen_ai_client_operation_duration_seconds{error_type="",gen_ai_operation_name="chat",gen_ai_request_model="gpt-4o-mini",gen_ai_response_model="gpt-4o-mini-2024-07-18",scope="vendor",quantile="0.5",} 2.0
+...
+```
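For a scripted check of the same endpoint, here is a minimal, JDK-only sketch (not part of this commit) that fetches the Prometheus output shown above and keeps just the GenAI series. It assumes the application is running locally on port 8080, as in the curl example.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Illustrative only: scrape /metrics and print the GenAI token-usage and
// operation-duration series emitted by MetricsChatModelListener.
public class GenAiMetricsCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/metrics")).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        response.body().lines()
                .filter(line -> line.contains("gen_ai_client_")) // keep only GenAI metrics
                .forEach(System.out::println);
    }
}
```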

examples/integrations/langchain4j/coffee-shop-assistant-mp/pom.xml

Lines changed: 0 additions & 10 deletions

@@ -41,16 +41,6 @@
         <mainClass>io.helidon.Main</mainClass>
     </properties>
 
-    <dependencyManagement>
-        <dependencies>
-            <dependency>
-                <groupId>dev.langchain4j</groupId>
-                <artifactId>langchain4j-embeddings-all-minilm-l6-v2</artifactId>
-                <version>${version.lib.langchain4j}</version>
-            </dependency>
-        </dependencies>
-    </dependencyManagement>
-
     <dependencies>
         <dependency>
             <groupId>io.helidon.integrations.langchain4j</groupId>
examples/integrations/langchain4j/coffee-shop-assistant-mp/src/main/resources/META-INF/microprofile-config.properties

Lines changed: 1 addition & 1 deletion

@@ -25,6 +25,6 @@ langchain4j.open-ai.chat-model.model-name=gpt-4o-mini
 langchain4j.rag.embedding-store-content-retriever.enabled=true
 langchain4j.rag.embedding-store-content-retriever.max-results=10
 langchain4j.rag.embedding-store-content-retriever.min-score=0.6
-langchain4j.rag.embedding-store-content-retriever.embedding-store=EmbeddingStore
+langchain4j.rag.embedding-store-content-retriever.embedding-store.service-registry.named=EmbeddingStore
 
 app.menu-items=data/menu.json
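For context, the new `service-registry.named` form refers to an embedding store registered under the name `EmbeddingStore`. The sketch below is only an illustration of what such a named LangChain4j embedding store bean could look like in plain CDI terms; the class name is hypothetical, and the example itself may register the store through Helidon's service registry rather than a CDI producer.

```java
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.store.embedding.EmbeddingStore;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.inject.Produces;
import jakarta.inject.Named;

// Hypothetical producer: exposes an in-memory embedding store under the name
// referenced by the embedding-store.service-registry.named property.
@ApplicationScoped
public class EmbeddingStoreProducer {

    @Produces
    @Named("EmbeddingStore")
    @ApplicationScoped
    EmbeddingStore<TextSegment> embeddingStore() {
        return new InMemoryEmbeddingStore<>();
    }
}
```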
New file

Lines changed: 67 additions & 0 deletions

@@ -0,0 +1,67 @@
+# **Coffee Shop Assistant (Helidon SE Version)**
+
+This is a **demo application** showcasing the **Helidon SE integration with LangChain4J**. It demonstrates how to build an **AI-powered coffee shop assistant** using **Helidon Inject**, Ollama models, and the Coherence LangChain4J integrations.
+
+NOTE: LangChain4J integration is a preview feature. The APIs shown here are subject to change and will be finalized in a future release of Helidon.
+
+## **Features**
+
+- Integration with **OpenAI chat models**.
+- Utilization of **embedding models**, an **embedding store**, an **ingestor**, and a **content retriever**.
+- **Helidon Inject** for dependency injection.
+- **Embedding store initialization** from a JSON file.
+- Integration with the **Coherence Embedding Store**.
+- Support for **callback functions** to enhance interactions.
+
+## **Build the Application**
+
+To build the application, run:
+
+```shell
+mvn clean package
+```
+
+## **Run the Application**
+
+Execute the following command to start the application:
+
+```shell
+java -Dcoherence.wka=127.0.0.1 -jar target/helidon-examples-integrations-langchain4j-coffee-shop-assistant-se-coherence.jar
+```
+
+Once running, you can interact with the assistant via your browser.
+
+Example:
+
+```
+http://localhost:8080/chat?question="What can you offer today?"
+```
+
+## Sample Questions and Expected Responses
+
+Here are some example queries you can try:
+
+### Menu and Recommendations
+
+- **"What hot drinks do you have?"**
+  - *Expected Response:* A list of **hot drinks** such as **Latte, Cappuccino, Espresso, and Hot Chocolate**.
+
+- **"I'm looking for something sweet. What do you recommend?"**
+  - *Expected Response:* Suggestions like **Caramel Frappuccino, Blueberry Muffin, Chocolate Chip Cookie, and Hot Chocolate**.
+
+- **"What drinks can I get with caramel?"**
+  - *Expected Response:* Options like **Caramel Frappuccino** and **Latte with a caramel syrup add-on**.
+
+### Dietary Preferences
+
+- **"Do you have any vegan options?"**
+  - *Expected Response:* Items like **Avocado Toast, Iced Matcha Latte (with non-dairy milk), and Blueberry Muffin (if applicable)**.
+
+### Orders and Availability
+
+- **"Do you have any breakfast items?"**
+  - *Expected Response:* Options such as **Avocado Toast, Blueberry Muffin, and Bagel with Cream Cheese**.
+
+- **"Can I order a coffee and a cookie?"**
+  - *Expected Response:*
+  *"Your order for a coffee and a chocolate chip cookie has been saved. The total is $5.00. Would you like anything else?"*
New file

Lines changed: 153 additions & 0 deletions

@@ -0,0 +1,153 @@
+[
+  {
+    "name": "Latte",
+    "description": "A rich espresso drink with steamed milk.",
+    "category": "Drink",
+    "price": 4.5,
+    "tags": [
+      "Hot",
+      "Customizable",
+      "Classic"
+    ],
+    "addOns": [
+      "Oat milk",
+      "Soy milk",
+      "Extra shot",
+      "Caramel syrup"
+    ]
+  },
+  {
+    "name": "Cappuccino",
+    "description": "Espresso topped with a thick layer of frothy milk.",
+    "category": "Drink",
+    "price": 4.0,
+    "tags": [
+      "Hot",
+      "Foamy",
+      "Classic"
+    ],
+    "addOns": [
+      "Vanilla syrup",
+      "Extra shot"
+    ]
+  },
+  {
+    "name": "Espresso",
+    "description": "A strong shot of coffee for a quick energy boost.",
+    "category": "Drink",
+    "price": 3.0,
+    "tags": [
+      "Hot",
+      "Classic",
+      "Bold"
+    ],
+    "addOns": [
+      "Extra shot"
+    ]
+  },
+  {
+    "name": "Iced Matcha Latte",
+    "description": "Chilled matcha green tea with milk.",
+    "category": "Drink",
+    "price": 5.0,
+    "tags": [
+      "Cold",
+      "Vegan",
+      "Trendy"
+    ],
+    "addOns": [
+      "Oat milk",
+      "Almond milk",
+      "Extra matcha"
+    ]
+  },
+  {
+    "name": "Avocado Toast",
+    "description": "Toasted bread topped with mashed avocado and seasonings.",
+    "category": "Food",
+    "price": 6.5,
+    "tags": [
+      "Healthy",
+      "Vegan",
+      "Breakfast"
+    ],
+    "addOns": [
+      "Egg",
+      "Feta cheese",
+      "Hot sauce"
+    ]
+  },
+  {
+    "name": "Blueberry Muffin",
+    "description": "A soft muffin bursting with blueberries.",
+    "category": "Food",
+    "price": 2.75,
+    "tags": [
+      "Breakfast",
+      "Sweet",
+      "Vegetarian"
+    ],
+    "addOns": [
+      "Warm up"
+    ]
+  },
+  {
+    "name": "Bagel with Cream Cheese",
+    "description": "A freshly baked bagel served with creamy cheese spread.",
+    "category": "Food",
+    "price": 3.5,
+    "tags": [
+      "Breakfast",
+      "Savory",
+      "Vegetarian"
+    ],
+    "addOns": [
+      "Butter",
+      "Jam"
+    ]
+  },
+  {
+    "name": "Caramel Frappuccino",
+    "description": "A blended coffee drink with caramel drizzle and whipped cream.",
+    "category": "Drink",
+    "price": 5.5,
+    "tags": [
+      "Cold",
+      "Sweet",
+      "Indulgent"
+    ],
+    "addOns": [
+      "Extra caramel",
+      "Whipped cream"
+    ]
+  },
+  {
+    "name": "Chocolate Chip Cookie",
+    "description": "A large, soft-baked cookie loaded with chocolate chips.",
+    "category": "Food",
+    "price": 2.0,
+    "tags": [
+      "Snack",
+      "Sweet",
+      "Vegetarian"
+    ],
+    "addOns": [
+      "Warm up"
+    ]
+  },
+  {
+    "name": "Hot Chocolate",
+    "description": "A creamy hot chocolate drink topped with whipped cream.",
+    "category": "Drink",
+    "price": 3.75,
+    "tags": [
+      "Hot",
+      "Sweet",
+      "Comforting"
+    ],
+    "addOns": [
+      "Extra whipped cream",
+      "Caramel drizzle"
+    ]
+  }
+]
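The SE README above lists embedding store initialization from a JSON file among the features, and each entry in this menu file has the same shape. As a rough sketch only (the `MenuItem` class is hypothetical and the example's own model and ingestion code may differ), the entries can be bound with JSON-B before being turned into text for the embedding store:

```java
import jakarta.json.bind.Jsonb;
import jakarta.json.bind.JsonbBuilder;

import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Hypothetical mapping class for the menu entries above; field names match the JSON keys.
public class MenuItem {
    public String name;
    public String description;
    public String category;
    public double price;
    public List<String> tags;
    public List<String> addOns;

    public static void main(String[] args) throws Exception {
        // Path matches the app.menu-items=data/menu.json setting shown earlier; adjust as needed.
        try (Jsonb jsonb = JsonbBuilder.create();
             Reader reader = Files.newBufferedReader(Path.of("data/menu.json"))) {
            MenuItem[] menu = jsonb.fromJson(reader, MenuItem[].class);
            for (MenuItem item : menu) {
                // Text like this is the kind of content an ingestor would feed into the embedding store.
                System.out.printf("%s (%s, $%.2f): %s%n",
                        item.name, item.category, item.price, item.description);
            }
        }
    }
}
```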
