
[Feature Request] Barracuda able to run GPT4All 4-bit and AlpacaLora 4-bit #322

@elephantpanda

Description

Will Barracuda be able to run this model, or similar models:

https://huggingface.co/Sosaka/GPT4All-7B-4bit-ggml

or

https://github.com/johnsmith0031/alpaca_lora_4bit

It would be excellent if it could. Small, optimised models like these are the direction AI is heading.

I have been able to run a 4-bit LLaMA model on a Quadro P5000 GPU (similar to a GeForce GTX 1080) using PyTorch.
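For context on what 4-bit support would involve, here is a minimal Python sketch of blockwise 4-bit quantization in the spirit of ggml's Q4_0 format. The block size, the per-block scale, and the round-to-scale scheme are illustrative assumptions for this sketch, not actual Barracuda or ggml code (the real format also packs two 4-bit values per byte):

```python
# Illustrative blockwise 4-bit quantization, loosely modelled on
# ggml's Q4_0: each block of floats is stored as one float scale
# plus a 4-bit signed integer per value.

BLOCK = 32  # assumed block size for this sketch

def quantize_q4(values):
    """Quantize a block of floats to 4-bit signed ints plus one scale."""
    amax = max(abs(v) for v in values) or 1.0
    scale = amax / 7.0  # map the largest magnitude onto the 4-bit range
    # Clamp to the signed 4-bit range [-8, 7] after rounding.
    q = [max(-8, min(7, round(v / scale))) for v in values]
    return scale, q

def dequantize_q4(scale, q):
    """Reconstruct approximate floats from the quantized block."""
    return [scale * x for x in q]

# Round-trip one block and measure the worst-case error.
block = [0.1 * i - 1.5 for i in range(BLOCK)]
scale, q = quantize_q4(block)
restored = dequantize_q4(scale, q)
max_err = max(abs(a - b) for a, b in zip(block, restored))
```

The point of the format is the trade-off visible here: ~4.25 bits per weight instead of 16 or 32, at the cost of a per-block reconstruction error bounded by half the scale step.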
