llama.cpp/ggml
Name            Last commit                                                         Date
cmake           ggml: backend-agnostic tensor parallelism (experimental) (#19378)   2026-04-09 16:42:19 +02:00
include         CUDA: manage NCCL communicators in context (#21891)                 2026-04-15 15:58:40 +02:00
src             ggml-cuda: flush legacy pool on OOM and retry (#22155)              2026-04-20 23:30:38 +02:00
.gitignore      vulkan : cmake integration (#8119)                                  2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml : bump version to 0.10.0 (ggml/1463)                           2026-04-21 11:04:21 +03:00