ik_llama.cpp/examples
Kawrakow fc06bc9d27
Enable CUDA graphs for MoE models + GPT-OSS support (#689)
* gpt-oss: common

* gpt-oss: attention sinks, swiglu_oai
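For reference, a minimal scalar sketch of that activation, using the alpha = 1.702 and limit = 7.0 constants from the public gpt-oss reference implementation; the function name is illustrative, not the actual ggml kernel:

```cpp
#include <algorithm>
#include <cmath>

// Scalar sketch of the gpt-oss SwiGLU variant ("swiglu_oai").
// The gate branch is clamped from above, the linear branch to
// [-limit, limit], and the linear branch gets a +1 bias.
inline float swiglu_oai_ref(float gate, float up,
                            float alpha = 1.702f, float limit = 7.0f) {
    gate = std::min(gate, limit);
    up   = std::clamp(up, -limit, limit);
    const float silu = gate / (1.0f + std::exp(-alpha * gate)); // gate * sigmoid(alpha*gate)
    return silu * (up + 1.0f);
}
```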

* gpt-oss: WIP llama

Model loads and runs (CPU only), but PPL is much too high
(~1500 for 1st batch vs ~200 in mainline).
Is it because of SWA, because of vocab, or did I introduce a bug somewhere?

* gpt-oss: CPU seems to be working

It was the SWA that was missing in the previous commit.

There are issues with EOG tokens, so EOG handling still needs to be added.

* CUDA: ADD_ID

Just a copy from mainline
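ADD_ID adds a per-expert bias row, selected by the routing ids, to each chosen expert's activations. A scalar sketch of the semantics as I read the mainline op (names and loop structure are illustrative):

```cpp
#include <cstddef>
#include <cstdint>

// dst[t][e][:] = src[t][e][:] + bias[ids[t][e]][:]
// src/dst: [n_tokens][n_used][n_embd], bias: [n_expert][n_embd],
// ids: selected expert per (token, expert slot), [n_tokens][n_used]
void add_id_ref(float * dst, const float * src, const float * bias,
                const int32_t * ids, int n_tokens, int n_used, int n_embd) {
    for (int t = 0; t < n_tokens; ++t) {
        for (int e = 0; e < n_used; ++e) {
            const float * b = bias + (std::size_t) ids[t*n_used + e] * n_embd;
            const float * s = src  + ((std::size_t) t*n_used + e) * n_embd;
            float       * d = dst  + ((std::size_t) t*n_used + e) * n_embd;
            for (int i = 0; i < n_embd; ++i) {
                d[i] = s[i] + b[i];
            }
        }
    }
}
```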

* gpt-oss: Seems to be working on CUDA

* gpt-oss: add sinks to the attn-vec kernels
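An attention sink is an extra learned per-head logit that joins the softmax normalization but has no value row behind it, so in the vec/flash kernels it only affects the running maximum and the denominator. A scalar sketch of the idea (not the actual kernel code):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Softmax over attention scores with a per-head sink logit: the sink
// enters the max and the denominator but contributes no output, so
// the remaining weights sum to less than 1.
std::vector<float> softmax_with_sink(const std::vector<float> & scores, float sink) {
    float m = sink;
    for (float s : scores) m = std::max(m, s);
    float denom = std::exp(sink - m);            // sink term, no V row behind it
    std::vector<float> w(scores.size());
    for (std::size_t i = 0; i < scores.size(); ++i) {
        w[i]   = std::exp(scores[i] - m);
        denom += w[i];
    }
    for (float & x : w) x /= denom;
    return w;
}
```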

* CUDA: add head size of 64 to new mma

Haven't turned it on yet, but I observe slightly better PP and slightly
worse TG performance with it.

* gpt-oss: add ability to use -fmoe (only CUDA for now)
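Without fusion, the routed expert FFN is several graph nodes (two mul_mat_id ops, the activation, another mul_mat_id); -fmoe evaluates them as one fused op. Roughly what the fused path computes for one token, as a scalar sketch (hypothetical names; reuses the swiglu_oai_ref helper from above):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// One token through a routed MoE FFN: for each of the k selected experts,
// up/gate matmul -> swiglu_oai -> down matmul, accumulated with routing weights.
// W_up/W_gate: [n_expert][n_ff][n_embd] row-major, W_down: [n_expert][n_embd][n_ff].
void moe_ffn_token(float * y, const float * x,
                   const float * W_up, const float * W_gate, const float * W_down,
                   const int32_t * ids, const float * weights,
                   int k, int n_embd, int n_ff) {
    std::vector<float> h(n_ff);
    for (int i = 0; i < n_embd; ++i) y[i] = 0.0f;
    for (int e = 0; e < k; ++e) {
        const std::size_t id = ids[e];
        const float * up   = W_up   + id * (std::size_t) n_ff * n_embd;
        const float * gate = W_gate + id * (std::size_t) n_ff * n_embd;
        const float * down = W_down + id * (std::size_t) n_embd * n_ff;
        for (int j = 0; j < n_ff; ++j) {          // fused up + gate + activation
            float g = 0.0f, u = 0.0f;
            for (int i = 0; i < n_embd; ++i) {
                g += gate[(std::size_t) j*n_embd + i] * x[i];
                u += up  [(std::size_t) j*n_embd + i] * x[i];
            }
            h[j] = swiglu_oai_ref(g, u);
        }
        for (int i = 0; i < n_embd; ++i) {        // down projection, weighted sum
            float acc = 0.0f;
            for (int j = 0; j < n_ff; ++j) acc += down[(std::size_t) i*n_ff + j] * h[j];
            y[i] += weights[e] * acc;
        }
    }
}
```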

* Move row sums to the right place

* Add sinks to iqk flash attention

* gpt_oss: Implement -fmoe on the CPU

* Simdify swiglu_oai

Turning it off for now as performance becomes more variable;
perhaps I'm running into thermal throttling more often
because of making the CPU work too hard.

* llama: factor out model loader

* Builds successfully

* It runs, but mmap does not work

* Fix llama_mmap so mmap works

* Minor

* Fix CUDA after latest changes

* Attempt to use CUDA graphs with MoE models - not working

* CUDA graphs WIP - still not working

* CUDA graphs - seems to be working

Likely not all MLA variants are working.
I no longer remember why I added the q8_0 cpy that
transposes the tensor; if it is really needed, it is now
missing. Also missing is q6_0.
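For context on what the graphs buy us: CUDA graphs record every kernel launch of a decode step once and replay the whole step with a single launch, eliminating per-kernel launch overhead during TG. The MoE complication is that routing-dependent kernel parameters must be patched into the instantiated graph instead of re-capturing it. A minimal standalone capture/replay sketch with the CUDA runtime API (not the actual ggml-cuda integration):

```cpp
#include <cuda_runtime.h>

// Capture the stream's work once, then replay it each decode step.
void run_with_graph(cudaStream_t stream, void (*enqueue_step)(cudaStream_t),
                    int n_steps) {
    cudaGraph_t     graph;
    cudaGraphExec_t exec;

    cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
    enqueue_step(stream);                  // launches are recorded, not executed
    cudaStreamEndCapture(stream, &graph);
    cudaGraphInstantiate(&exec, graph, 0); // CUDA 12 signature; older toolkits
                                           // take extra error-node/log arguments

    for (int step = 0; step < n_steps; ++step) {
        // For MoE, parameters that change per step (e.g. selected expert ids)
        // would be patched here with cudaGraphExecKernelNodeSetParams(...)
        // before the replay.
        cudaGraphLaunch(exec, stream);     // one call replays the whole step
    }
    cudaStreamSynchronize(stream);

    cudaGraphExecDestroy(exec);
    cudaGraphDestroy(graph);
}
```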

* Make q8_0 cache work for DeepSeek models with CUDA graphs

* cuda: cpy for q6_0

* Fix llama_mmap on non-Linux platforms

* Adding forgotten file

* Iterating on Windows build failures

* cuda: re-add q8_0 -> q8_0 transpose

So that mla = 2 can be used with CUDA graphs and a q8_0 cache.

* Disable graphs without -fmoe

* Minor

* Turn graphs on by default

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2025-08-15 09:18:07 +03:00
baby-llama Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
batched Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
batched-bench MoE fix for R4 quants (#170) 2025-01-12 13:19:14 +02:00
batched.swift Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
benchmark build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
convert-llama2c-to-ggml build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
cvector-generator Merge vulkan code from mainline up to commit of 6/28/2025 (#563) 2025-07-02 08:49:42 +02:00
deprecation-warning Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
embedding Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
eval-callback Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
export-lora Merge vulkan code from mainline up to commit of 6/28/2025 (#563) 2025-07-02 08:49:42 +02:00
gbnf-validator Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
gguf Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
gguf-hash Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
gguf-split gguf-split : update (#444) 2025-05-23 08:07:42 +03:00
gritlm llama : allow pooled embeddings on any model (#7477) 2024-06-21 08:38:22 +03:00
imatrix Fix imatrix calculation for MLA models (#411) 2025-05-13 17:53:38 +03:00
infill add dry sampler (#513) 2025-06-19 10:24:53 +03:00
jeopardy build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
llama-bench Add copyright notices (#317) 2025-04-07 10:43:26 +02:00
llama.android Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
llama.swiftui Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
llava add dry sampler (#513) 2025-06-19 10:24:53 +03:00
lookahead add dry sampler (#513) 2025-06-19 10:24:53 +03:00
lookup add dry sampler (#513) 2025-06-19 10:24:53 +03:00
main add jinja template support (#677) 2025-08-09 12:50:30 +00:00
main-cmake-pkg Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
parallel add dry sampler (#513) 2025-06-19 10:24:53 +03:00
passkey Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
perplexity Fix KLD precision (#325) 2025-04-12 16:17:50 +02:00
quantize MXFP4 (#682) 2025-08-09 08:40:18 +03:00
quantize-stats Adding IQ2_KL (#602) 2025-07-14 18:55:08 +02:00
retrieval Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
rpc add dry sampler (#513) 2025-06-19 10:24:53 +03:00
save-load-state Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
server Fix completions endpoint (#684) 2025-08-11 09:43:20 +03:00
simple Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
speculative add dry sampler (#513) 2025-06-19 10:24:53 +03:00
sweep-bench Enable CUDA graphs for MoE models + GPT-OSS support (#689) 2025-08-15 09:18:07 +03:00
sycl Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
tokenize Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
base-translate.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
chat-13B.bat Create chat-13B.bat (#592) 2023-03-29 20:21:09 +03:00
chat-13B.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
chat-persistent.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
chat-vicuna.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
chat.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
CMakeLists.txt Add new sweep-bench benchmark (#225) 2025-02-23 00:16:27 -06:00
convert_legacy_llama.py Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
json_schema_pydantic_example.py Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
json_schema_to_grammar.py Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
llama.vim llama.vim : added api key support (#5090) 2024-01-23 08:51:27 +02:00
llm.vim llm.vim : stop generation at multiple linebreaks, bind to <F2> (#2879) 2023-08-30 09:50:55 +03:00
Miku.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
pydantic_models_to_grammar_examples.py Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
pydantic_models_to_grammar.py Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
reason-act.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
regex_to_grammar.py Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
server_embd.py Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
server-llama2-13B.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
ts-type-to-grammar.sh JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) 2024-04-12 19:43:38 +01:00