ik_llama.cpp/ggml
Latest commit: 03da76eb05 — Fix RoPE cache on multi-GPU setup (#966) by Kawrakow, 2025-11-16 11:50:48 +02:00 (Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>)
cmake           Merge mainline llama.cpp (#3)                                   2024-07-27 07:55:01 +02:00
include         CUDA: set compute parameters via command line arguments (#910)  2025-11-07 07:11:23 +02:00
src             Fix RoPE cache on multi-GPU setup (#966)                        2025-11-16 11:50:48 +02:00
.gitignore      Merge mainline llama.cpp (#3)                                   2024-07-27 07:55:01 +02:00
CMakeLists.txt  Enable fusion by default (#939)                                 2025-11-11 10:35:48 +02:00