
Misc. bug: AMX is not ready to be used! #13678

Closed
@kandakji

Description


Name and Version

version: 5439 (33983057)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu

Operating systems

Linux

Which llama.cpp modules do you know to be affected?

llama-server

Command line

Problem description & steps to reproduce

When running llama-server built from .devops/cpu.Dockerfile to deploy a model on a c7i.2xlarge instance on Amazon SageMaker (an instance type that supports AMX), the logs say AMX is not ready to be used!

model used: qwen2.5-3b-instruct-q8_0.gguf

I've tried adding the AMX-related build flags, but I still get the same log entry:

-DGGML_NATIVE=OFF
-DGGML_AVX512=ON
-DGGML_AVX512_BF16=ON
-DGGML_AVX512_VBMI=ON
-DGGML_AVX512_VNNI=ON
-DGGML_AMX_TILE=ON
-DGGML_AMX_INT8=ON
-DGGML_AMX_BF16=ON
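For reference, flags like the ones above are typically passed at the CMake configure step. The invocation below is a sketch of how that would look; the build directory name and build type are assumptions, not details from the report:

```shell
# Hypothetical configure/build invocation with the AMX-related flags;
# "build" and Release are assumed defaults, not taken from the issue.
cmake -B build \
    -DGGML_NATIVE=OFF \
    -DGGML_AVX512=ON \
    -DGGML_AVX512_BF16=ON \
    -DGGML_AVX512_VBMI=ON \
    -DGGML_AVX512_VNNI=ON \
    -DGGML_AMX_TILE=ON \
    -DGGML_AMX_INT8=ON \
    -DGGML_AMX_BF16=ON
cmake --build build --config Release
```

Note that when building inside a Dockerfile, these flags only take effect if they reach the `cmake` configure line in the image's build stage.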

First Bad Commit

No response

Relevant log output

AMX is not ready to be used!
