Commit 4b839dd

Add automatic deployment and redeployment feature

1 parent 75af1e7

File tree

3 files changed: +155 -0 lines changed

README.md

+66
@@ -75,6 +75,58 @@ export OPENPIPE_API_KEY=opk-your-api-key

```bash
python run.py
```

Once the pipeline completes, OpenPipe automatically deploys your fine-tuned model and makes it available through their API. You can immediately use your model with a simple API call:

```bash
curl https://api.openpipe.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer opk-your-api-key" \
  -d '{
    "model": "customer_service_assistant",
    "messages": [
      {"role": "system", "content": "You are a helpful customer service assistant."},
      {"role": "user", "content": "How do I reset my password?"}
    ]
  }'
```

For Python applications, you can use the OpenPipe Python SDK:

```python
# pip install openpipe

from openpipe import OpenAI

client = OpenAI(
    openpipe={"api_key": "opk-your-api-key"}
)

completion = client.chat.completions.create(
    model="openpipe:customer_service_assistant",
    messages=[
        {
            "role": "system",
            "content": "You are a helpful customer service assistant for Ultra electronics products."
        },
        {
            "role": "user",
            "content": "Can I trade in my old device for a new UltraPhone X?"
        }
    ],
    temperature=0,
    openpipe={
        "tags": {
            "prompt_id": "counting",
            "any_key": "any_value"
        }
    },
)

print(completion.choices[0].message)
```

When you need to update your model with new data, simply run the pipeline again, and OpenPipe will automatically retrain and redeploy the updated model.

## ✨ Key Features

### 📊 End-to-End Fine-Tuning Pipeline
@@ -151,6 +203,20 @@ All metadata is accessible in the ZenML dashboard, enabling:
- Easy reproduction of successful training jobs
- Audit trails for model governance

### 🚀 Automatic Deployment and Redeployment

A key advantage of this integration is that OpenPipe automatically deploys your fine-tuned model as soon as training completes. Your model is immediately available via API without any additional deployment steps.

![OpenPipe Deployed Model](zenml_openpipe_pipeline_deployed.png)
*The OpenPipe console showing a successfully deployed fine-tuned model*

When you run the pipeline again with new data, OpenPipe automatically retrains and redeploys your model, ensuring your production model always reflects your latest data. This makes it easy to implement a continuous improvement cycle:

1. Fine-tune the initial model
2. Collect feedback and new examples
3. Rerun the pipeline to update the model
4. Repeat to continuously improve performance
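Step 3 of this cycle is a single command. A sketch, assuming the `run.py` flags documented in this repo (`--data-source`, `--model-name`, `--force-overwrite`); the CSV filename is illustrative:

```shell
# Rerun the pipeline with updated data (illustrative filename);
# OpenPipe retrains and redeploys the model automatically.
python run.py \
  --data-source=updated_conversations.csv \
  --model-name=customer_service_assistant \
  --force-overwrite=True
```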
## 📚 Advanced Usage

### Custom Data Source
zenml_openpipe_pipeline_deployed.png (350 KB)

blog_post.md

+89
@@ -40,6 +40,8 @@ This integration enables data scientists and ML engineers to:

3. **Deploy fine-tuned models to production** with confidence
4. **Schedule recurring fine-tuning jobs** as data evolves

A key advantage of this integration is that **OpenPipe automatically deploys your fine-tuned models** as soon as training completes, making them immediately available via API. When you run the pipeline again with new data, your model is automatically retrained and redeployed, ensuring your production model always reflects your latest data.

## Building a Fine-Tuning Pipeline

Let's examine the core components of an LLM fine-tuning pipeline built with ZenML and OpenPipe.

@@ -217,6 +219,79 @@ python run.py \
The implementation follows [OpenPipe's fine-tuning best practices](https://docs.openpipe.ai/) while leveraging [ZenML's orchestration capabilities](https://docs.zenml.io/stack-components/orchestrators).

### Using Your Deployed Model

Once the fine-tuning process completes, OpenPipe automatically deploys your model and makes it available through their API. You can immediately start using your fine-tuned model with a simple curl request:

![OpenPipe Deployed Model](zenml_openpipe_pipeline_deployed.png)
*The OpenPipe console showing a successfully deployed fine-tuned model*

```bash
curl https://api.openpipe.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer opk-your-api-key" \
  -d '{
    "model": "rapidtech_support_assistant",
    "messages": [
      {"role": "system", "content": "You are a helpful customer support assistant for RapidTech products."},
      {"role": "user", "content": "I need to reset my password for AccountManager"}
    ],
    "temperature": 0.7
  }'
```

For Python applications, you can use the OpenPipe Python SDK, which follows the OpenAI SDK pattern for seamless integration:

```python
# pip install openpipe

from openpipe import OpenAI

client = OpenAI(
    openpipe={"api_key": "opk-your-api-key"}
)

completion = client.chat.completions.create(
    model="openpipe:rapidtech_support_assistant",
    messages=[
        {
            "role": "system",
            "content": "You are a helpful customer service assistant for RapidTech products."
        },
        {
            "role": "user",
            "content": "Can I trade in my old device for a new RapidTech Pro?"
        }
    ],
    temperature=0,
    openpipe={
        "tags": {
            "prompt_id": "customer_query",
            "application": "support_portal"
        }
    },
)

print(completion.choices[0].message)
```

This SDK approach is particularly useful for integrating with existing applications or services, and it supports tagging your requests for analytics and monitoring.

This immediate deployment capability eliminates the need for manual model deployment, allowing you to test and integrate your custom model right away.

### Automated Redeployment with New Data

When product information changes or you collect new training data, simply run the pipeline again:

```bash
python run.py \
  --data-source=updated_support_conversations.csv \
  --model-name=rapidtech_support_assistant \
  --force-overwrite=True
```

OpenPipe will automatically retrain and redeploy your model with the updated data, ensuring your production model always reflects the latest information and examples. This seamless redeployment process makes it easy to keep your models up to date without manual intervention.

### Performance Metrics and Cost Analysis

The fine-tuned model demonstrates:
@@ -319,6 +394,18 @@ This provides a real-time view of:
- Error messages or warnings
- Time spent in each training phase

### Continuous Model Improvement

A key advantage of the ZenML-OpenPipe integration is the ability to implement a continuous improvement cycle for your fine-tuned models:

1. **Initial training**: Fine-tune a model on your current dataset
2. **Production deployment**: Automatically handled by OpenPipe
3. **Feedback collection**: Gather new examples and user interactions
4. **Dataset augmentation**: Add new examples to your training data
5. **Retraining and redeployment**: Run the pipeline again to update the model

With each iteration, both the dataset and model quality improve, creating a virtuous cycle of continuous enhancement. Since OpenPipe automatically redeploys your model with each training run, new capabilities are immediately available in production without additional deployment steps.

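The dataset-augmentation step can be as simple as appending newly collected conversations to the training CSV before rerunning the pipeline. A minimal sketch, assuming a flat CSV with `system`, `user`, and `assistant` columns; the filename and schema are illustrative, so adjust them to match your actual dataset:

```python
import csv

# Hypothetical schema: one training example per row.
# Adjust the field names to match your dataset's actual columns.
FIELDS = ["system", "user", "assistant"]

def append_examples(csv_path, new_examples):
    """Append newly collected conversation examples to the training CSV.

    Appends rows only; assumes the CSV already carries its header row.
    """
    with open(csv_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writerows(new_examples)

# Example: feedback collected from production becomes new training data.
append_examples("support_conversations.csv", [
    {
        "system": "You are a helpful customer service assistant.",
        "user": "Can I trade in my old device?",
        "assistant": "Yes! Our trade-in program accepts devices up to five years old.",
    },
])
```

After appending the new rows, rerun the pipeline with `--force-overwrite=True` and OpenPipe handles the retraining and redeployment.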
Check out [OpenPipe's model monitoring documentation](https://docs.openpipe.ai/features/fine-tuning/quick-start) for more information about monitoring your fine-tuned models in production.

### Deployment on ZenML Stacks
@@ -345,6 +432,8 @@ From implementing this integration with multiple customers, several key insights
5. **Metadata tracking enables governance** – Complete lineage from data to model deployment satisfies compliance requirements.

6. **Automatic deployment accelerates time-to-value** – With OpenPipe's instant deployment, fine-tuned models are immediately usable via API without additional DevOps work.

## Next Steps

For teams looking to implement LLM fine-tuning in production, we recommend:
