
llama.cpp full-cuda-b5648 (Public, Latest)

Install from the command line
$ docker pull ghcr.io/ggml-org/llama.cpp:full-cuda-b5648
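Once pulled, the CUDA image can be run with GPU access. The following is a minimal sketch, assuming the NVIDIA Container Toolkit is installed and a GGUF model is available at the mounted path; the /path/to/models directory, model filename, prompt, and generation/offload settings below are illustrative, and --run is one of the options the full image's entrypoint typically accepts (alongside e.g. --server).

$ docker run --gpus all -v /path/to/models:/models \
    ghcr.io/ggml-org/llama.cpp:full-cuda-b5648 \
    --run -m /models/model-q4_0.gguf \
    -p "Building a website can be done in 10 simple steps:" \
    -n 256 --n-gpu-layers 99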

Recent tagged image versions

  • Published 15 minutes ago · Digest sha256:d82c059014cfc9afeb2073bc70666f896563a99e6300c05d2a361692ec1424a0 · 0 version downloads
  • Published 18 minutes ago · Digest sha256:f07a8665d13693ed9cf2e89a09d7de8549c274c8e5f55f1c446a4d072f4b3248 · 0 version downloads
  • Published 18 minutes ago · Digest sha256:9ee857ce410ef05f52d44954aaeb18cd2e27608475aa0249f448c7df1db10e28 · 0 version downloads
  • Published 22 minutes ago · Digest sha256:c7acfe8100c02330525b93205ff70d1e00ebd2a3704057f5b7677e3c9d818e54 · 0 version downloads
  • Published 36 minutes ago · Digest sha256:c749a75fedee01bd37edd1fe2b4f6599931f4c58a08e574734810758409d4f85 · 1 version download


Details

  • Last published: 15 minutes ago
  • Discussions: 2.33K
  • Issues: 795
  • Total downloads: 260K