This tutorial provides a comprehensive, hands-on walkthrough for building a secure MLOps workflow using the JFrog Platform. You’ll train a real machine learning model, containerize the serving application, store artifacts securely using JFrog Artifactory, and simulate vulnerability scanning with JFrog Xray.
Whether you're an ML Engineer, AI Engineer, or Data Scientist, this guide will help you integrate DevSecOps principles into your ML lifecycle using tools built for production-grade software delivery.
In this guide, you'll learn:

- How to train and serve a real ML model
- How to store model artifacts and Docker images in JFrog Artifactory
- How to use the JFrog Platform UI for managing and scanning artifacts
- How to automate secure deployment workflows using CI/CD

To follow along, you'll need:

- Basic knowledge of Python, Git, and Docker
- Docker installed locally
- A free JFrog Platform account
- A GitHub account (for CI integration)
You’ll begin by creating the code and configuration files for your ML application. These include a model training script, a FastAPI-based inference server, dependency definitions, and a Dockerfile to package everything together.
```bash
git clone https://github.com/<your-org>/secure-mlops-tutorial.git
cd secure-mlops-tutorial
```
If you're not using a starter repo, manually create the following files:
This script (`train.py`) trains a simple scikit-learn model and saves it locally for use by the server.
```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
import joblib

data = load_iris()
X, y = data.data, data.target

model = RandomForestClassifier()
model.fit(X, y)

joblib.dump(model, "model.pkl")
```
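Before wiring the model into a server, a quick sanity check that the saved file reloads and predicts can save debugging later. This is a minimal sketch that repeats the training step so it runs standalone; in your project you would just reload the existing `model.pkl`:

```python
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train and save, as train.py does
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier().fit(X, y)
joblib.dump(model, "model.pkl")

# Reload the artifact exactly as the server will, and predict
restored = joblib.load("model.pkl")
preds = restored.predict(X[:5])
print(len(preds))  # one prediction per input row
```

If the reload or prediction fails here, it will also fail inside the container, so this catches serialization problems early.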
This FastAPI application (`app.py`) loads the model and serves predictions via a `/predict` endpoint.
```python
from fastapi import FastAPI, Request
import joblib
import numpy as np
import uvicorn

app = FastAPI()
model = joblib.load("model.pkl")

@app.post("/predict")
async def predict(request: Request):
    data = await request.json()
    input_array = np.array(data["features"]).reshape(1, -1)
    prediction = model.predict(input_array)
    return {"prediction": prediction.tolist()}

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```
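The endpoint expects a JSON body with a `features` list holding the four iris measurements. A small sketch of building that payload with the standard library (the values are illustrative):

```python
import json

# /predict expects {"features": [...]} with the four iris measurements:
# sepal length, sepal width, petal length, petal width
payload = {"features": [5.1, 3.5, 1.4, 0.2]}
body = json.dumps(payload)

# Equivalent call once the server is running locally:
#   curl -X POST http://localhost:8000/predict \
#        -H "Content-Type: application/json" \
#        -d '{"features": [5.1, 3.5, 1.4, 0.2]}'
print(body)
```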
This file (`requirements.txt`) specifies the Python dependencies for your project. An older version of scikit-learn is intentionally pinned to simulate a known vulnerability for the Xray scan later.
```text
fastapi
uvicorn
scikit-learn==0.24.0  # Older version with known CVEs for demo
joblib
```
This Dockerfile installs the dependencies, trains the model, and sets up the container to run the inference server.
```dockerfile
FROM python:3.9-slim-buster

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Train the model during the build; model.pkl is created inside the image,
# so it does not need to be copied in from the build context
COPY train.py .
RUN python train.py

COPY app.py .

CMD ["python", "app.py"]
```
Now that you’ve built the model and packaged your application, you’ll publish both the trained model and Docker image to JFrog Artifactory.
- Go to `https://your-instance.jfrog.io`
- Log in with your credentials
- You'll land on the main dashboard with access to Artifactory and Xray
- Navigate to Artifactory → Artifacts
- Select or create a repository called `ml-models`
- Click Deploy (top right)
- Upload `model.pkl`
- Set the target path to `model/1.0.0/model.pkl`
- Click Deploy Artifact
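If you prefer scripting over the UI, Artifactory also accepts deployments as plain HTTP PUTs to the repository path. Here is a sketch of constructing such a request with the standard library; the hostname, repository path, and credentials are placeholders, and calling `urlopen` would perform the actual upload:

```python
import base64
import urllib.request

# Placeholder values -- substitute your instance, repo path, and credentials
host = "your-instance.jfrog.io"
repo_path = "ml-models/model/1.0.0/model.pkl"
user, token = "user", "api-key"

# Artifactory deploys artifacts via HTTP PUT to /artifactory/<repo>/<path>
url = f"https://{host}/artifactory/{repo_path}"
auth = base64.b64encode(f"{user}:{token}".encode()).decode()

req = urllib.request.Request(url, data=b"", method="PUT")
req.add_header("Authorization", f"Basic {auth}")

# urllib.request.urlopen(req) would send the upload; in practice, read
# model.pkl and pass its bytes as `data` instead of the empty body above
print(req.get_method(), req.full_url)
```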
✅ Your trained model is now versioned, trackable, and secured in your central artifact store.
Build and push your container image:
```bash
docker login your-instance.jfrog.io
docker build -t your-instance.jfrog.io/your-docker-repo/secure-ml-app:v1.0 .
docker push your-instance.jfrog.io/your-docker-repo/secure-ml-app:v1.0
```
- Return to Artifacts
- Locate the Docker image and model file in their respective repositories
- Click on each to view metadata, SHA hashes, and associated build info
JFrog Xray scans artifacts in Artifactory for security and license risks. After uploading your assets, you can view their scan results in the UI.
- Navigate to Xray → Vulnerabilities
- Filter by the relevant repository (e.g., `docker-local`, `ml-models`)
- Click your Docker image (`secure-ml-app`) or model file (`model.pkl`)
- Review the details:
  - Vulnerabilities grouped by severity
  - Components affected (e.g., `scikit-learn`)
  - Suggested fixes or patches
  - Policy violations triggered
This ensures your ML artifacts meet your organization’s security and compliance policies.
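Such policies are defined in Xray as rules attached to watches. As an illustration of the shape a security policy can take, here is a sketch of a rule that fails builds on critical findings; field names loosely follow the Xray REST API, but treat this as illustrative rather than a verified payload:

```json
{
  "name": "block-critical-cves",
  "type": "security",
  "rules": [
    {
      "name": "fail-on-critical",
      "priority": 1,
      "criteria": { "min_severity": "Critical" },
      "actions": { "fail_build": true }
    }
  ]
}
```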
Once your manual workflow is working, you can automate it using GitHub Actions. Below is a workflow for CI/CD that pushes both the image and model to Artifactory.
```yaml
name: Secure ML CI Pipeline

on:
  push:
    branches: [ main ]
    tags: [ 'v*' ]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Log in to JFrog
        uses: docker/login-action@v2
        with:
          registry: ${{ secrets.JFROG_URL }}
          username: ${{ secrets.JFROG_USER }}
          password: ${{ secrets.JFROG_API_KEY }}

      - name: Build and Push Docker Image
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: ${{ secrets.JFROG_URL }}/${{ secrets.JFROG_DOCKER_REPO }}/secure-ml-app:${{ github.ref_name }}

      # The model is trained inside the Docker image, so train it on the
      # runner as well to produce the model.pkl that the next step uploads
      - name: Train Model
        run: |
          pip install -r requirements.txt
          python train.py

      - name: Upload Model Artifact
        run: |
          curl -u ${{ secrets.JFROG_USER }}:${{ secrets.JFROG_API_KEY }} \
            -T model.pkl \
            "https://${{ secrets.JFROG_URL }}/artifactory/ml-models/model/${{ github.ref_name }}/model.pkl"

      - name: Simulate Xray Scan
        run: echo "In production, this would trigger a scan and fail the build if issues are found."
```
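To replace the simulated step with a real scan, one option is the JFrog CLI, which can scan an image against Xray from the pipeline. A sketch of what that could look like; the `jfrog/setup-jfrog-cli` action and the `jf docker scan` command come from the JFrog CLI ecosystem, so verify the version tags and required secrets against the current documentation:

```yaml
      - name: Set up JFrog CLI
        uses: jfrog/setup-jfrog-cli@v3
        env:
          JF_URL: https://${{ secrets.JFROG_URL }}
          JF_ACCESS_TOKEN: ${{ secrets.JFROG_ACCESS_TOKEN }}

      - name: Xray Scan Docker Image
        run: jf docker scan ${{ secrets.JFROG_URL }}/${{ secrets.JFROG_DOCKER_REPO }}/secure-ml-app:${{ github.ref_name }}
```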
- Use the JFrog UI for artifact visibility, traceability, and governance
- Version your models and containers with semantic tags
- Scan both model artifacts and application code for security issues
- Automate builds and scans in CI pipelines
- Establish Xray policies to enforce vulnerability and license compliance
You’ve successfully implemented a secure, reproducible MLOps pipeline using the JFrog Platform. This tutorial covered the full lifecycle—from training and packaging to storing, scanning, and automating your machine learning application.
By combining the JFrog UI with DevOps automation, you empower your team to build safer, more reliable AI solutions at scale.