Mirror of https://github.com/ggerganov/llama.cpp (synced 2026-03-10 00:59:32 +01:00)
server: add cURL support to Dockerfiles

* server: add cURL support to `full.Dockerfile`
* server: add cURL support to `full-cuda.Dockerfile` and `server-cuda.Dockerfile`
* server: add cURL support to `full-rocm.Dockerfile` and `server-rocm.Dockerfile`
* server: add cURL support to `server-intel.Dockerfile`
* server: add cURL support to `server-vulkan.Dockerfile`
* fix typo in `server-vulkan.Dockerfile`

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
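The change described above follows a common pattern: install the cURL development package in the build stage so the server binary can link against libcurl (enabling model downloads over HTTP(S)), and install the runtime library in the final stage. A minimal sketch of that pattern, assuming an Ubuntu base image and the `LLAMA_CURL` build flag (the stage layout and package names here are illustrative, not the exact contents of the repository's Dockerfiles):

```dockerfile
# Sketch only: illustrates adding cURL support to a two-stage server image.
ARG UBUNTU_VERSION=22.04

FROM ubuntu:$UBUNTU_VERSION AS build

# libcurl4-openssl-dev provides the headers and .so needed at build time
RUN apt-get update && \
    apt-get install -y build-essential git libcurl4-openssl-dev

WORKDIR /app
COPY . .

# LLAMA_CURL enables the cURL-backed download code path (assumed flag)
RUN make LLAMA_CURL=1 server

FROM ubuntu:$UBUNTU_VERSION AS runtime

# only the runtime library is needed in the final image
RUN apt-get update && \
    apt-get install -y libcurl4

COPY --from=build /app/server /server

ENTRYPOINT [ "/server" ]
```

Installing the `-dev` package only in the build stage keeps the runtime image small while still satisfying the linker during compilation.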
Files in this directory:

* nix
* cloud-v-pipeline
* full-cuda.Dockerfile
* full-rocm.Dockerfile
* full.Dockerfile
* llama-cpp-clblast.srpm.spec
* llama-cpp-cuda.srpm.spec
* llama-cpp.srpm.spec
* main-cuda.Dockerfile
* main-intel.Dockerfile
* main-rocm.Dockerfile
* main-vulkan.Dockerfile
* main.Dockerfile
* server-cuda.Dockerfile
* server-intel.Dockerfile
* server-rocm.Dockerfile
* server-vulkan.Dockerfile
* server.Dockerfile
* tools.sh