llama.cpp buildcache-cuda (Public, Latest)

Install from the command line
$ docker pull ghcr.io/ggml-org/llama.cpp:buildcache-cuda
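
Once pulled, the image's resolved content digest can be checked with standard Docker tooling; as a minimal sketch, the format template below simply prints the first repo digest recorded for the image, which should match one of the digests listed under the recent versions:

$ docker image inspect ghcr.io/ggml-org/llama.cpp:buildcache-cuda --format '{{index .RepoDigests 0}}'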

Recent tagged image versions

  • Published about 1 hour ago · Digest sha256:708c2097552b70af914e35429866ff0126400009daee793690846d99d6906035 · 0 version downloads
  • Published about 1 hour ago · Digest sha256:306f9d8d9d42951762c5a4e8ef7d6e033e2537c291cd1075eae503fc9a82bdcc · 38 version downloads
  • Published about 1 hour ago · Digest sha256:4803fb53de71af100644bb66aa99f908ab1950948cbd67f2cdf1d9b1d6f90a7c · 0 version downloads
  • Published about 1 hour ago · Digest sha256:885c38a6974895c71f5fdde90461d02f10abb6a6437781f3fa2bab1766440073 · 4 version downloads
  • Published about 1 hour ago · Digest sha256:9bf91ad60fd064a28b2300607a6e19671dc4968621bf84b611a0a6095d3c6df0 · 0 version downloads

Details

  • Last published: 1 hour ago
  • Discussions: 2.76K
  • Issues: 954
  • Total downloads: 754K