
Implement instructor for MLX support to interact with LLM on Apple platforms (M1/M2/M3) #1621

@matiasdev30

Description

Is your feature request related to a problem? Please describe.

I'm interested in running LLMs locally on Apple Silicon (M1/M2/M3) using Instructor, but currently the library only supports OpenAI and compatible APIs. There is no native support for Apple's MLX framework, which is optimized for these devices. As a result, it's not possible to fully leverage the privacy, speed, and cost benefits of running LLMs directly on Mac hardware using Instructor.

Describe the solution you'd like

I'd like to see Instructor support MLX as a backend for model inference. This could be implemented as a new client or adapter, allowing users to pass prompts and receive structured outputs from locally hosted LLMs (such as Llama, Mistral, or Phi models running via MLX) in the same way they would with OpenAI. Ideally, the API would remain consistent, just swapping the backend.
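
As a rough illustration of the desired ergonomics, the sketch below reuses Instructor's existing OpenAI client against mlx-lm's OpenAI-compatible server (`python -m mlx_lm.server`). The model name, port, and whether the local server honors Instructor's JSON-mode parameters are assumptions on my part; a native MLX adapter would ideally hide this plumbing entirely.

```python
# Sketch only: assumes an mlx-lm OpenAI-compatible server is running locally, e.g.
#   python -m mlx_lm.server --model mlx-community/Meta-Llama-3-8B-Instruct-4bit
# The model repo and port below are placeholders.
import instructor
from openai import OpenAI
from pydantic import BaseModel


class UserInfo(BaseModel):
    name: str
    age: int


# Point the standard OpenAI client at the local server and patch it with Instructor.
client = instructor.from_openai(
    OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed"),
    mode=instructor.Mode.JSON,
)

user = client.chat.completions.create(
    model="mlx-community/Meta-Llama-3-8B-Instruct-4bit",
    response_model=UserInfo,
    messages=[{"role": "user", "content": "John Doe is 30 years old."}],
)
print(user)  # expected: UserInfo(name='John Doe', age=30)
```

A first-class MLX client could keep this exact calling convention (`response_model`, retries, validation) while talking to the model in-process instead of over HTTP.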

Describe alternatives you've considered

I've considered using other frameworks or creating custom wrappers for MLX, but none offer the seamless, schema-driven and robust structured output experience Instructor provides. Other projects like Toolio are exploring MLX agents, but they don't have the same Pythonic interface or validation features.
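
For context, a hand-rolled wrapper over mlx-lm is roughly what "custom wrapper" means in practice; the sketch below uses the real `load`/`generate` APIs from mlx-lm, but the model repo is a placeholder, chat templating is omitted, and there is no retry or validation loop, which is exactly the gap Instructor fills.

```python
# Naive wrapper sketch around mlx-lm; model repo is a placeholder and
# output parsing is deliberately fragile.
import json

from mlx_lm import load, generate
from pydantic import BaseModel


class UserInfo(BaseModel):
    name: str
    age: int


model, tokenizer = load("mlx-community/Meta-Llama-3-8B-Instruct-4bit")

prompt = (
    "Return only JSON matching this schema: "
    f"{json.dumps(UserInfo.model_json_schema())}\n"
    "Text: John Doe is 30 years old.\nJSON:"
)

raw = generate(model, tokenizer, prompt=prompt, max_tokens=256)
# Brittle: raises if the model adds prose, markdown fences, or invalid JSON,
# with no automatic re-ask or schema-aware repair.
user = UserInfo.model_validate_json(raw)
```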

Additional context

Metadata

Labels: enhancement (New feature or request)
