llama.cpp/ggml (last updated 2026-04-16 12:08:33 -07:00)
Name            Last commit                                                                   Date
cmake           ggml: backend-agnostic tensor parallelism (experimental) (#19378)             2026-04-09 16:42:19 +02:00
include         CUDA: manage NCCL communicators in context (#21891)                           2026-04-15 15:58:40 +02:00
src             opencl: add q5_K gemm and gemv kernels for Adreno (#21595)                    2026-04-16 12:08:33 -07:00
.gitignore
CMakeLists.txt  [SYCL] Fix Q8_0 reorder: garbage on 2nd prompt + crash on full VRAM (#21638)  2026-04-16 08:34:05 +03:00