ik_llama.cpp/ggml — last updated 2025-09-05 20:06:17 +03:00

cmake           Merge mainline llama.cpp (#3)                            2024-07-27 07:55:01 +02:00
include         Offload only activated experts to the GPU (#698)         2025-09-04 12:22:30 +02:00
src             This is very slightly better                             2025-09-05 20:06:17 +03:00
.gitignore      Merge mainline llama.cpp (#3)                            2024-07-27 07:55:01 +02:00
CMakeLists.txt  Set default value of GGML_SCHED_MAX_COPIES to 1 (#751)   2025-09-02 07:04:39 +02:00