Once the pipeline completes, OpenPipe automatically deploys your fine-tuned model and makes it available through their API. You can immediately use your model with a simple API call:

```bash
# Endpoint shown assumes OpenPipe's OpenAI-compatible REST API; confirm the base URL in their docs
curl https://api.openpipe.ai/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer opk-your-api-key" \
  -d '{
    "model": "openpipe:customer_service_assistant",
    "messages": [
      {"role": "system", "content": "You are a helpful customer service assistant."},
      {"role": "user", "content": "How do I reset my password?"}
    ]
  }'
```

For Python applications, you can use the OpenPipe Python SDK:

```python
# pip install openpipe

from openpipe import OpenAI

client = OpenAI(
    openpipe={"api_key": "opk-your-api-key"}
)

completion = client.chat.completions.create(
    model="openpipe:customer_service_assistant",
    messages=[
        {
            "role": "system",
            "content": "You are a helpful customer service assistant for Ultra electronics products."
        },
        {
            "role": "user",
            "content": "Can I trade in my old device for a new UltraPhone X?"
        }
    ],
    temperature=0,
    openpipe={
        "tags": {
            "prompt_id": "counting",
            "any_key": "any_value"
        }
    },
)

print(completion.choices[0].message)
```

When you need to update your model with new data, simply run the pipeline again, and OpenPipe will automatically retrain and redeploy the updated model.
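New examples for such a rerun need to be in the same chat format the model was trained on. Here is a minimal sketch of shaping raw question/answer pairs into that structure; the `to_chat_rows` helper is ours, not part of OpenPipe's SDK, and you should consult OpenPipe's dataset documentation for the exact schema it expects:

```python
import json

SYSTEM_PROMPT = "You are a helpful customer service assistant."

def to_chat_rows(qa_pairs):
    """Convert (question, answer) pairs into chat-formatted training rows."""
    rows = []
    for question, answer in qa_pairs:
        rows.append({
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        })
    return rows

# One JSON object per line is a common fine-tuning file layout.
jsonl = "\n".join(json.dumps(row) for row in to_chat_rows(
    [("How do I reset my password?", "Go to Settings > Security and choose Reset.")]
))
print(jsonl)
```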
## ✨ Key Features

### 📊 End-to-End Fine-Tuning Pipeline
All metadata is accessible in the ZenML dashboard, enabling:

- Easy reproduction of successful training jobs
- Audit trails for model governance

### 🚀 Automatic Deployment and Redeployment

A key advantage of this integration is that OpenPipe automatically deploys your fine-tuned model as soon as training completes. Your model is immediately available via API without any additional deployment steps.

*The OpenPipe console showing a successfully deployed fine-tuned model*

When you run the pipeline again with new data, OpenPipe automatically retrains and redeploys your model, ensuring your production model always reflects your latest data. This makes it easy to implement a continuous improvement cycle.
This integration enables data scientists and ML engineers to:

3. **Deploy fine-tuned models to production** with confidence
4. **Schedule recurring fine-tuning jobs** as data evolves

A key advantage of this integration is that **OpenPipe automatically deploys your fine-tuned models** as soon as training completes, making them immediately available via API. When you run the pipeline again with new data, your model is automatically retrained and redeployed, ensuring your production model always reflects your latest data.

## Building a Fine-Tuning Pipeline

Let's examine the core components of an LLM fine-tuning pipeline built with ZenML and OpenPipe.
The implementation follows [OpenPipe's fine-tuning best practices](https://docs.openpipe.ai/) while leveraging [ZenML's orchestration capabilities](https://docs.zenml.io/stack-components/orchestrators).

### Using Your Deployed Model

Once the fine-tuning process completes, OpenPipe automatically deploys your model and makes it available through their API. You can immediately start using your fine-tuned model with a simple curl request:

```bash
# Endpoint shown assumes OpenPipe's OpenAI-compatible REST API; confirm the base URL in their docs
curl https://api.openpipe.ai/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer opk-your-api-key" \
  -d '{
    "model": "openpipe:rapidtech_support_assistant",
    "messages": [
      {"role": "system", "content": "You are a helpful customer support assistant for RapidTech products."},
      {"role": "user", "content": "I need to reset my password for AccountManager"}
    ],
    "temperature": 0.7
  }'
```
For Python applications, you can use the OpenPipe Python SDK, which follows the OpenAI SDK pattern for seamless integration:

```python
# pip install openpipe

from openpipe import OpenAI

client = OpenAI(
    openpipe={"api_key": "opk-your-api-key"}
)

completion = client.chat.completions.create(
    model="openpipe:rapidtech_support_assistant",
    messages=[
        {
            "role": "system",
            "content": "You are a helpful customer service assistant for RapidTech products."
        },
        {
            "role": "user",
            "content": "Can I trade in my old device for a new RapidTech Pro?"
        }
    ],
    temperature=0,
    openpipe={
        "tags": {
            "prompt_id": "customer_query",
            "application": "support_portal"
        }
    },
)

print(completion.choices[0].message)
```
This SDK approach is particularly useful for integrating with existing applications or services, and it supports tagging your requests for analytics and monitoring.
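Because the SDK mirrors the OpenAI client, the per-request pieces (messages, temperature, tags) can be assembled separately from the call itself, which keeps tagging consistent across an application. A sketch of such a builder; the `build_request` helper and default tags are our own convention, not part of the SDK:

```python
def build_request(user_message, system_prompt, tags=None):
    """Assemble keyword arguments for client.chat.completions.create(),
    merging per-call tags into a shared default tag set."""
    return {
        "model": "openpipe:rapidtech_support_assistant",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0,
        "openpipe": {"tags": {"prompt_id": "customer_query", **(tags or {})}},
    }

request = build_request(
    "Can I trade in my old device for a new RapidTech Pro?",
    "You are a helpful customer service assistant for RapidTech products.",
    tags={"application": "support_portal"},
)
# Unpacks straight into the SDK call: client.chat.completions.create(**request)
```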
This immediate deployment capability eliminates the need for manual model deployment, allowing you to test and integrate your custom model right away.

### Automated Redeployment with New Data

When product information changes or you collect new training data, simply run the pipeline again:

```bash
python run.py \
  --data-source=updated_support_conversations.csv \
  --model-name=rapidtech_support_assistant \
  --force-overwrite=True
```

OpenPipe will automatically retrain and redeploy your model with the updated data, ensuring your production model always reflects the latest information and examples. This seamless redeployment process makes it easy to keep your models up to date without manual intervention.
### Performance Metrics and Cost Analysis

The fine-tuned model demonstrates:
This provides a real-time view of:

- Error messages or warnings
- Time spent in each training phase

### Continuous Model Improvement

A key advantage of the ZenML-OpenPipe integration is the ability to implement a continuous improvement cycle for your fine-tuned models:

1. **Initial training**: Fine-tune a model on your current dataset
2. **Production deployment**: Automatically handled by OpenPipe
3. **Feedback collection**: Gather new examples and user interactions
4. **Dataset augmentation**: Add new examples to your training data
5. **Retraining and redeployment**: Run the pipeline again to update the model

With each iteration, both the dataset and model quality improve, creating a virtuous cycle of continuous enhancement. Since OpenPipe automatically redeploys your model with each training run, new capabilities are immediately available in production without additional deployment steps.
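Steps 3 and 4 of the cycle, folding newly collected examples into the training set without duplicating what is already there, can be sketched as follows. The dedup key (the user turn) is our choice for illustration:

```python
def augment_dataset(existing, new_examples):
    """Merge newly collected chat examples into the training set,
    skipping rows whose user turn is already covered."""
    seen = {
        msg["content"]
        for row in existing
        for msg in row["messages"]
        if msg["role"] == "user"
    }
    merged = list(existing)
    for row in new_examples:
        user_turns = [m["content"] for m in row["messages"] if m["role"] == "user"]
        if not any(turn in seen for turn in user_turns):
            merged.append(row)
            seen.update(user_turns)
    return merged
```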
Check out [OpenPipe's model monitoring documentation](https://docs.openpipe.ai/features/fine-tuning/quick-start) for more information about monitoring your fine-tuned models in production.

### Deployment on ZenML Stacks
From implementing this integration with multiple customers, several key insights emerged:

5. **Metadata tracking enables governance** – Complete lineage from data to model deployment satisfies compliance requirements.

6. **Automatic deployment accelerates time-to-value** – With OpenPipe's instant deployment, fine-tuned models are immediately usable via API without additional DevOps work.

## Next Steps

For teams looking to implement LLM fine-tuning in production, we recommend: