Website · Slack · Docs


Model serving at scale

Cortex is a platform for deploying, managing, and scaling machine learning in production.


Key features

  • Run realtime inference, batch inference, and training workloads.
  • Deploy TensorFlow, PyTorch, ONNX, and other models to production.
  • Scale to handle production workloads with server-side batching and request-based autoscaling.
  • Configure rolling updates and live model reloading to update APIs without downtime.
  • Serve models efficiently with multi-model caching and spot / preemptible instances.
  • Stream performance metrics and structured logs to any monitoring tool.
  • Perform A/B tests with configurable traffic splitting, as sketched after this list.
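
For example, A/B testing is configured by weighting traffic across already-deployed APIs. A minimal sketch using Cortex's TrafficSplitter kind, assuming two variants of the text generator are deployed; exact field names may differ across versions:

# traffic_splitter.yaml

- name: text-generator
  kind: TrafficSplitter
  apis:
    - name: text-generator-a   # assumed variant names for illustration
      weight: 70
    - name: text-generator-b
      weight: 30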

How it works

Implement a Predictor

# predictor.py

from transformers import pipeline

class PythonPredictor:
    def __init__(self, config):
        # load the model once at startup; config comes from the API spec
        self.model = pipeline(task="text-generation")

    def predict(self, payload):
        # called per request; payload is the parsed JSON request body
        return self.model(payload["text"])[0]
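
Cortex instantiates the class once and routes each request body to predict(), so the Predictor can be smoke-tested locally before deploying. A minimal sketch, where the empty config stands in for the API spec's optional config field:

# test_predictor.py

from predictor import PythonPredictor

predictor = PythonPredictor(config={})             # Cortex supplies config from the API spec
print(predictor.predict({"text": "hello world"}))  # same JSON body the API receives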

Configure a realtime API

# text_generator.yaml

- name: text-generator
  kind: RealtimeAPI
  predictor:
    type: python
    path: predictor.py
  compute:
    gpu: 1
    mem: 8Gi
  autoscaling:
    min_replicas: 1
    max_replicas: 10
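
Server-side batching (from the feature list above) is enabled in the same spec. A hedged sketch of the predictor's server_side_batching options; field names and defaults vary by Cortex version, and when batching is enabled predict() receives a list of payloads rather than a single one:

# text_generator.yaml (excerpt)

  predictor:
    type: python
    path: predictor.py
    server_side_batching:   # group concurrent requests into one predict() call
      max_batch_size: 32    # largest batch to assemble
      batch_interval: 0.1s  # how long to wait while filling a batch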

Deploy

$ cortex deploy text_generator.yaml

# creating http://example.com/text-generator
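
Once deployed, the API's status can be checked with the CLI's get command (the exact output format varies by version):

$ cortex get text-generator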

Serve prediction requests

$ curl http://example.com/text-generator -X POST -H "Content-Type: application/json" -d '{"text": "hello world"}'
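
The same request can be made from Python; a minimal sketch using the requests library, with the placeholder endpoint printed by cortex deploy above:

# client.py

import requests

# POST the same JSON body the curl example sends
response = requests.post(
    "http://example.com/text-generator",
    json={"text": "hello world"},
)
print(response.json())  # the generated text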
