ik_llama.cpp/ggml (last commit: 2025-07-04 09:08:24 +03:00)
cmake           Merge mainline llama.cpp (#3)                                     2024-07-27 07:55:01 +02:00
include         Change KQ mask padding to 64 (#574)                               2025-07-03 10:43:27 +02:00
src             Vulkan: adding GGML_OP_MULTI_ADD implementation                   2025-07-04 09:08:24 +03:00
.gitignore      Merge mainline llama.cpp (#3)                                     2024-07-27 07:55:01 +02:00
CMakeLists.txt  Merge vulkan code from mainline up to commit of 6/28/2025 (#563)  2025-07-02 08:49:42 +02:00