
GGUF / MLX format? #34

@alexander-potemkin

Hello and thanks for open-sourcing the model!

As there don't seem to be any ready-to-use GGUF or MLX formats (for llama.cpp and macOS respectively), is there any chance you could give a hint on how to convert YaLM to them?

It would be a real help in enabling the model to run on non-NVIDIA hardware, like any modern PC or mobile device.
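
For context, if YaLM were published as a standard Hugging Face checkpoint, I'd expect the usual flow to look roughly like the sketch below. The paths are placeholders, and I'm not sure either converter actually supports the YaLM architecture yet, which is exactly the part I'd like a hint on:

```shell
# GGUF for llama.cpp (converter script name varies across llama.cpp versions;
# /path/to/yalm-hf is a placeholder for a Hugging Face-format checkpoint):
python convert_hf_to_gguf.py /path/to/yalm-hf --outfile yalm.gguf --outtype f16

# MLX for Apple Silicon, via the mlx-lm package (-q quantizes the weights):
python -m mlx_lm.convert --hf-path /path/to/yalm-hf --mlx-path yalm-mlx -q
```

If the architecture isn't recognized, both converters will simply reject the checkpoint, so any pointer on the right starting point for YaLM specifically would be appreciated.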

Thanks in advance!
