ik_llama.cpp/src
Kawrakow cd8d0b0832
Disable some fusion, RoPE cache off by default (#894)
* Disable some fusion and make the RoPE cache off by default

* Minor

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2025-11-04 07:50:14 +02:00
CMakeLists.txt Enable and clean up compiler warnings in src (#824) 2025-10-11 16:01:13 +03:00
llama-arch.cpp Adding Ling/Ring (a.k.a., Bailing-MoE2) support (#833) 2025-10-15 14:20:40 +03:00
llama-arch.h Adding Ling/Ring (a.k.a., Bailing-MoE2) support (#833) 2025-10-15 14:20:40 +03:00
llama-build-context.cpp RoPE cache (#887) 2025-11-03 18:42:20 +02:00
llama-build-context.h RoPE cache (#887) 2025-11-03 18:42:20 +02:00
llama-context.h Support --device and --device-draft parameter (#866) 2025-10-27 18:13:28 +02:00
llama-cparams.h RoPE cache (#887) 2025-11-03 18:42:20 +02:00
llama-grammar.cpp Tool calls support from mainline (#723) 2025-09-01 08:38:49 +03:00
llama-grammar.h Tool calls support from mainline (#723) 2025-09-01 08:38:49 +03:00
llama-hparams.cpp Adding Ling/Ring (a.k.a., Bailing-MoE2) support (#833) 2025-10-15 14:20:40 +03:00
llama-hparams.h Adding Ling/Ring (a.k.a., Bailing-MoE2) support (#833) 2025-10-15 14:20:40 +03:00
llama-impl.h Fix warnings about LLAMA_DEBUG being redefined 2025-10-27 18:41:03 +02:00
llama-load-tensors.cpp Fused Q and K fused_rms_norm for TG on CUDA (#882) 2025-10-31 14:41:28 +02:00
llama-mmap.cpp Enable CUDA graphs for MoE models + GPT-OSS support (#689) 2025-08-15 09:18:07 +03:00
llama-mmap.h Enable CUDA graphs for MoE models + GPT-OSS support (#689) 2025-08-15 09:18:07 +03:00
llama-model-loader.cpp Merge Q, K, V (#878) 2025-10-30 10:49:48 +02:00
llama-model-loader.h Merge Q, K, V (#878) 2025-10-30 10:49:48 +02:00
llama-model.cpp Disable pipeline parallel for tensor override or allocation failed (#879) 2025-10-31 14:20:48 +02:00
llama-model.h Compiler warning 2025-10-31 14:58:00 +02:00
llama-quantize.cpp Merge Q, K, V (#878) 2025-10-30 10:49:48 +02:00
llama-sampling.cpp Enable and clean up compiler warnings in src (#824) 2025-10-11 16:01:13 +03:00
llama-sampling.h add dry sampler (#513) 2025-06-19 10:24:53 +03:00
llama-vocab.cpp Adding Ling/Ring (a.k.a., Bailing-MoE2) support (#833) 2025-10-15 14:20:40 +03:00
llama-vocab.h model : add grok-2 support (#782) 2025-09-23 16:31:01 +02:00
llama.cpp Disable some fusion, RoPE cache off by default (#894) 2025-11-04 07:50:14 +02:00
unicode-data.cpp Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
unicode-data.h Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
unicode.cpp Enable CUDA graphs for MoE models + GPT-OSS support (#689) 2025-08-15 09:18:07 +03:00
unicode.h Enable CUDA graphs for MoE models + GPT-OSS support (#689) 2025-08-15 09:18:07 +03:00