MLX Omni Server

MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. It implements OpenAI-compatible API endpoints, enabling seamless integration with existing OpenAI SDK clients while leveraging the power of local ML inference.

Features

  • πŸš€ Apple Silicon Optimized: Built on Apple's MLX framework and optimized for M1/M2/M3/M4 series chips
  • πŸ”Œ OpenAI API Compatible: Drop-in replacement for OpenAI API endpoints
  • 🎯 Multiple AI Capabilities:
    • Audio Processing (TTS & STT)
    • Chat Completion
    • Image Generation
  • ⚑ High Performance: Local inference with hardware acceleration
  • πŸ” Privacy-First: All processing happens locally on your machine
  • πŸ›  SDK Support: Works with the official OpenAI SDK and other compatible clients

Supported API Endpoints

The server implements OpenAI-compatible endpoints:

  • Chat completions: /v1/chat/completions
    • βœ… Chat
    • βœ… Tools, Function Calling (see the sketch after this list)
    • βœ… Structured Output
    • βœ… LogProbs
    • 🚧 Vision
  • Audio
    • βœ… /v1/audio/speech - Text-to-Speech
    • βœ… /v1/audio/transcriptions - Speech-to-Text
  • Models
    • βœ… /v1/models - List models
    • βœ… /v1/models/{model} - Retrieve or Delete model
  • Images
    • βœ… /v1/images/generations - Image generation
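
Tool and function calling, marked as supported above, can be exercised through the standard OpenAI tools parameter. The following is a minimal sketch, assuming the server is already running (see Quick Start below); the get_weather function and its JSON schema are invented purely for illustration:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:10240/v1",
    api_key="not-needed"
)

# Hypothetical tool definition, following the standard OpenAI tool schema
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"]
            }
        }
    }
]

response = client.chat.completions.create(
    model="mlx-community/Llama-3.2-3B-Instruct-4bit",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools
)

# If the model decides to call the tool, the call details appear here
print(response.choices[0].message.tool_calls)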

Quick Start

Follow these simple steps to get started with MLX Omni Server:

  1. Install the package
pip install mlx-omni-server
  2. Start the server
mlx-omni-server
  3. Run a simple chat example using curl
curl http://localhost:10240/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mlx-community/Llama-3.2-1B-Instruct-4bit",
    "messages": [
      {
        "role": "user",
        "content": "What can you do?"
      }
    ]
  }'

That's it! You're now running AI locally on your Mac. See Advanced Usage for more examples.

Server Options

# Start with default settings (port 10240)
mlx-omni-server

# Or specify a custom port
mlx-omni-server --port 8000

# View all available options
mlx-omni-server --help

Basic Client Setup

from openai import OpenAI

# Connect to your local server
client = OpenAI(
    base_url="http://localhost:10240/v1",  # Point to local server
    api_key="not-needed"                   # API key not required
)

# Make a simple chat request
response = client.chat.completions.create(
    model="mlx-community/Llama-3.2-1B-Instruct-4bit",
    messages=[{"role": "user", "content": "Hello, how are you?"}]
)
print(response.choices[0].message.content)

Advanced Usage

MLX Omni Server supports multiple ways of interaction and various AI capabilities. Here's how to use each:

API Usage Options

MLX Omni Server provides flexible ways to interact with AI capabilities:

REST API

Access the server directly using HTTP requests:

# Chat completions endpoint
curl http://localhost:10240/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mlx-community/Llama-3.2-1B-Instruct-4bit",
    "messages": [{"role": "user", "content": "Hello"}]
  }'

# Get available models
curl http://localhost:10240/v1/models

OpenAI SDK

Use the official OpenAI Python SDK for seamless integration:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:10240/v1",  # Point to local server
    api_key="not-needed"                   # API key not required for local server
)

See the FAQ section for information on using TestClient for development.

API Examples

Chat Completion

response = client.chat.completions.create(
    model="mlx-community/Llama-3.2-3B-Instruct-4bit",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ],
    temperature=0,
    stream=True  # stream the response instead of waiting for the full reply
)

for chunk in response:
    # Each chunk carries an incremental piece of the assistant's reply
    print(chunk.choices[0].delta.content or "", end="")
Curl Example
curl http://localhost:10240/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mlx-community/Llama-3.2-3B-Instruct-4bit",
    "stream": true,
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'
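
Structured output is also listed as supported for chat completions. A minimal sketch, assuming the server accepts the OpenAI-style response_format parameter with a JSON schema (the person_info schema below is invented for illustration, and the exact behavior depends on the model):

import json

response = client.chat.completions.create(
    model="mlx-community/Llama-3.2-3B-Instruct-4bit",
    messages=[{"role": "user", "content": "Extract the person: Alice is 30 years old."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "person_info",
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "age": {"type": "integer"}
                },
                "required": ["name", "age"]
            }
        }
    }
)

# Assumes the model returned valid JSON matching the schema
print(json.loads(response.choices[0].message.content))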

Text-to-Speech

speech_file_path = "mlx_example.wav"
response = client.audio.speech.create(
  model="lucasnewman/f5-tts-mlx",
  voice="alloy",  # the voice parameter is not working for now
  input="MLX project is awesome.",
)
response.stream_to_file(speech_file_path)
Curl Example
curl -X POST "http://localhost:10240/v1/audio/speech" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "lucasnewman/f5-tts-mlx",
    "input": "MLX project is awesome",
    "voice": "alloy"
  }' \
  --output ~/Desktop/mlx.wav

Speech-to-Text

audio_file = open("speech.mp3", "rb")
transcript = client.audio.transcriptions.create(
    model="mlx-community/whisper-large-v3-turbo",
    file=audio_file
)

print(transcript.text)
Curl Example
curl -X POST "http://localhost:10240/v1/audio/transcriptions" \
  -H "Content-Type: multipart/form-data" \
  -F "file=@mlx_example.wav" \
  -F "model=mlx-community/whisper-large-v3-turbo"

Response:

{
  "text": " MLX Project is awesome!"
}

Image Generation

image_response = client.images.generate(
    model="argmaxinc/mlx-FLUX.1-schnell",
    prompt="A serene landscape with mountains and a lake",
    n=1,
    size="512x512"
)
Curl Example
curl http://localhost:10240/v1/images/generations \
  -H "Content-Type: application/json" \
  -d '{
    "model": "argmaxinc/mlx-FLUX.1-schnell",
    "prompt": "A cute baby sea otter",
    "n": 1,
    "size": "1024x1024"
  }'
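
To consume the result of the Python example above, something along these lines should work. This is a sketch; whether the server returns a URL or base64-encoded data is an assumption, so both cases are handled:

import base64

image = image_response.data[0]
if image.b64_json:
    # Base64 payload: decode and write it to disk
    with open("generated.png", "wb") as f:
        f.write(base64.b64decode(image.b64_json))
else:
    # Otherwise a URL was returned
    print(image.url)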

For more detailed examples, check out the examples directory.

FAQ

How are models managed?

MLX Omni Server uses Hugging Face for model downloading and management. When you specify a model ID that hasn't been downloaded yet, the framework will automatically download it. However, since download times can vary significantly:

  • It's recommended to pre-download models through Hugging Face before using them in your service
  • To use a locally downloaded model, simply set the model parameter to the local model path
# Using a model from Hugging Face
response = client.chat.completions.create(
    model="mlx-community/Llama-3.2-1B-Instruct-4bit",  # Will download if not available
    messages=[{"role": "user", "content": "Hello"}]
)

# Using a local model
response = client.chat.completions.create(
    model="/path/to/your/local/model",  # Local model path
    messages=[{"role": "user", "content": "Hello"}]
)

The models currently available on the machine can also be listed with the following request:

curl http://localhost:10240/v1/models
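
The endpoints list also mentions retrieving or deleting a specific model via /v1/models/{model}. A sketch using the OpenAI SDK, assuming the client from the setup above (the exact response shape and deletion semantics are not documented here):

# Retrieve metadata for a specific model
model_info = client.models.retrieve("mlx-community/Llama-3.2-1B-Instruct-4bit")
print(model_info)

# Delete a model entry; whether this removes the downloaded files is not documented here
client.models.delete("mlx-community/Llama-3.2-1B-Instruct-4bit")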

How do I specify which model to use?

Use the model parameter when creating a request:

response = client.chat.completions.create(
    model="mlx-community/Llama-3.2-1B-Instruct-4bit",  # Specify model here
    messages=[{"role": "user", "content": "Hello"}]
)

Can I use TestClient for development?

Yes, TestClient allows you to use the OpenAI client without starting a local server. This is particularly useful for development and testing scenarios:

from openai import OpenAI
from fastapi.testclient import TestClient
from mlx_omni_server.main import app

# Use TestClient directly - no network service needed
client = OpenAI(
    http_client=TestClient(app)
)

# Now you can use the client just like with a running server
response = client.chat.completions.create(
    model="mlx-community/Llama-3.2-1B-Instruct-4bit",
    messages=[{"role": "user", "content": "Hello"}]
)

This approach bypasses the HTTP server entirely, making it ideal for unit testing and quick development iterations.
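
For example, the TestClient-backed client drops straight into a pytest test. A minimal sketch (the assertion is only illustrative):

from openai import OpenAI
from fastapi.testclient import TestClient
from mlx_omni_server.main import app

def test_chat_completion_returns_text():
    # The client talks to the in-process app; no server needs to be running
    client = OpenAI(http_client=TestClient(app), api_key="not-needed")
    response = client.chat.completions.create(
        model="mlx-community/Llama-3.2-1B-Instruct-4bit",
        messages=[{"role": "user", "content": "Hello"}],
    )
    # Illustrative check: the reply should contain some text
    assert response.choices[0].message.content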

What if I get errors when starting the server?

  • Confirm you're using an Apple Silicon Mac (M1/M2/M3/M4)
  • Check that your Python version is 3.9 or higher
  • Verify you have the latest version of mlx-omni-server installed
  • Check the log output for more detailed error information

Contributing

We welcome contributions! If you're interested in contributing to MLX Omni Server, please check out our Development Guide for detailed information about:

  • Setting up the development environment
  • Running the server in development mode
  • Contributing guidelines
  • Testing and documentation

For major changes, please open an issue first to discuss what you would like to change.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

Disclaimer

This project is not affiliated with or endorsed by OpenAI or Apple. It's an independent implementation that provides OpenAI-compatible APIs using Apple's MLX framework.

Star History 🌟

Star History Chart
