| File | Last commit | Date |
| --- | --- | --- |
| nix | addOpenGLRunpath -> autoAddDriverRunpath in .devops/nix/package.nix (#1135) | 2026-01-12 15:16:37 +02:00 |
| cloud-v-pipeline | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| full-cuda.Dockerfile | Merge mainline llama.cpp (#3) | 2024-07-27 07:55:01 +02:00 |
| full-rocm.Dockerfile | Merge mainline llama.cpp (#3) | 2024-07-27 07:55:01 +02:00 |
| full.Dockerfile | Merge mainline llama.cpp (#3) | 2024-07-27 07:55:01 +02:00 |
| llama-cli-cuda.Dockerfile | Merge mainline llama.cpp (#3) | 2024-07-27 07:55:01 +02:00 |
| llama-cli-intel.Dockerfile | Merge mainline llama.cpp (#3) | 2024-07-27 07:55:01 +02:00 |
| llama-cli-rocm.Dockerfile | Merge mainline llama.cpp (#3) | 2024-07-27 07:55:01 +02:00 |
| llama-cli-vulkan.Dockerfile | Merge mainline llama.cpp (#3) | 2024-07-27 07:55:01 +02:00 |
| llama-cli.Dockerfile | Merge mainline llama.cpp (#3) | 2024-07-27 07:55:01 +02:00 |
| llama-cpp-cuda.srpm.spec | Merge mainline llama.cpp (#3) | 2024-07-27 07:55:01 +02:00 |
| llama-cpp.srpm.spec | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama-server-cuda.Dockerfile | Fix llama-server-cuda Dockerfile to build ik_llama.cpp correctly (#1224) | 2026-02-04 16:08:00 +02:00 |
| llama-server-intel.Dockerfile | A few server commits from mainline. (#872) | 2025-10-28 09:58:31 +02:00 |
| llama-server-rocm.Dockerfile | A few server commits from mainline. (#872) | 2025-10-28 09:58:31 +02:00 |
| llama-server-vulkan.Dockerfile | A few server commits from mainline. (#872) | 2025-10-28 09:58:31 +02:00 |
| llama-server.Dockerfile | A few server commits from mainline. (#872) | 2025-10-28 09:58:31 +02:00 |
| tools.sh | Merge mainline llama.cpp (#3) | 2024-07-27 07:55:01 +02:00 |