llama.cpp/ggml — directory listing (last commit: 2026-01-09 05:34:56 +08:00)
cmake/
include/        ggml-webgpu: Fix GGML_MEM_ALIGN to 8 for emscripten. (#18628)   2026-01-08 08:36:42 -08:00
src/            llama: use host memory if device reports 0 memory (#18587)      2026-01-09 05:34:56 +08:00
.gitignore
CMakeLists.txt  ggml : bump version to 0.9.5 (ggml/1410)                        2025-12-31 18:54:43 +02:00