ik_llama.cpp/ggml
Latest commit: Iwan Kawrakow — 0a70ca0bc0 "Fix #772" (2025-09-23 17:25:47 +03:00)
cmake           Merge mainline llama.cpp (#3)                           2024-07-27 07:55:01 +02:00
include         Offload only activated experts to the GPU (#698)        2025-09-04 12:22:30 +02:00
src             Fix #772                                                2025-09-23 17:25:47 +03:00
.gitignore      Merge mainline llama.cpp (#3)                           2024-07-27 07:55:01 +02:00
CMakeLists.txt  Set default value of GGML_SCHED_MAX_COPIES to 1 (#751)  2025-09-02 07:04:39 +02:00