llama.cpp

Install from the command line
$ docker pull ghcr.io/ggerganov/llama.cpp:server-cuda-b4038
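Once pulled, the server image can be started directly. A minimal sketch of a run command, assuming a local models directory and a GGUF model file (both placeholders — adjust to your setup); `--gpus all` requires the NVIDIA Container Toolkit for this CUDA-enabled image:

```shell
# Run the pulled CUDA server image.
# /path/to/models and model.gguf are placeholders for your own model location.
docker run --gpus all -p 8080:8080 \
  -v /path/to/models:/models \
  ghcr.io/ggerganov/llama.cpp:server-cuda-b4038 \
  -m /models/model.gguf --host 0.0.0.0 --port 8080
```

The arguments after the image name are passed to the llama.cpp server binary; the HTTP API is then reachable on the mapped host port 8080.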

Recent tagged image versions

  • Published about 2 hours ago · Digest
    sha256:0bb2d59476d36b28afbe1ee775103896fe4938fd259e1d1e4d6cdbf4198d67e9
    · 13 version downloads
  • Published about 2 hours ago · Digest
    sha256:e09bbcd90a9ee422560fc0093c4a2d32dd88e0be18b57eef83a000498884ba7e
    · 1 version download
  • Published about 2 hours ago · Digest
    sha256:0f14c965c72181232a87888b88853b7a039fc3d3b0e1d4d28de91c2ee014a8c2
    · 0 version downloads
  • Published about 2 hours ago · Digest
    sha256:fba0c6251689cc50f7a423fc47f9f7200154f91ac2db4d9bdf57faae71089077
    · 8 version downloads
  • Published about 2 hours ago · Digest
    sha256:2a6eb92fa653fe3040e13d742b658d5568b9d7c182d261b97b52ac7aa9f82a15
    · 0 version downloads

Last published: 2 hours ago

Discussions: 1.68K

Issues: 560

Total downloads: 3.39M