ik_llama.cpp/examples
Kawrakow dcdfad29f7
FlashMLA-2: reduce compute buffer size (CUDA and CPU) (#260)
* FlashMLA-2: eliminate intermediate f32 tensors

This works on the CPU. PP (prompt processing) performance is ~13% better
for 16k tokens and the compute buffer is quite a bit smaller.
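
For scale, a rough back-of-the-envelope sketch (all dimensions below are
illustrative assumptions, not values taken from this change): keeping a
single MLA intermediate in f16 instead of converting it to f32 halves that
tensor's contribution to the compute buffer.

    // Illustrative arithmetic only; dimensions are assumed, not from the PR.
    #include <cstdint>
    #include <cstdio>

    int main() {
        const int64_t n_tokens     = 16384; // prompt length from the benchmark above
        const int64_t kv_lora_rank = 512;   // assumed MLA latent size
        const int64_t n_head       = 128;   // assumed attention head count

        // size of one latent-space attention intermediate
        const int64_t n_elem = n_tokens * kv_lora_rank * n_head;
        std::printf("f32: %lld MiB\n", (long long)((4 * n_elem) >> 20));
        std::printf("f16: %lld MiB\n", (long long)((2 * n_elem) >> 20));
        return 0;
    }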

* FlashMLA-2: enable fast path only on the CPU for now

I did implement the necessary ops on CUDA, but something is
still wrong there, so for now the fast path is only used
for CPU-only runs.

* FlashMLA-2: slightly smaller compute buffer size

* Prepare wk_b when loading DeepSeek models (if wk_b is missing)
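
As a hedged, float-level sketch of what "prepare" means here (the layout
and dimension names are assumptions about DeepSeek's MLA projection, not
code from this change): wk_b can be recovered from the combined wkv_b
projection by transposing each head's k-part; quantized wkv_b additionally
needs a dequantize step before the transpose.

    // Hedged sketch, not the loader's code. wkv_b is treated as a row-major
    // [n_head*(qk_nope + v_dim), kv_lora_rank] matrix; head h's k-part is the
    // qk_nope consecutive rows starting at row h*(qk_nope + v_dim).
    #include <cstdint>

    void build_wk_b(const float* wkv_b, float* wk_b, int64_t n_head,
                    int64_t kv_lora_rank, int64_t qk_nope, int64_t v_dim) {
        for (int64_t h = 0; h < n_head; ++h) {
            const float* k_part = wkv_b + h * (qk_nope + v_dim) * kv_lora_rank;
            float* dst = wk_b + h * kv_lora_rank * qk_nope; // head h: [kv_lora_rank, qk_nope]
            for (int64_t d = 0; d < qk_nope; ++d)
                for (int64_t r = 0; r < kv_lora_rank; ++r)
                    dst[r * qk_nope + d] = k_part[d * kv_lora_rank + r]; // transpose
        }
    }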

* Add some comments

* Fix case where wkv_b is quantized with k- or i-quants.

* Fix CUDA

There is an issue with quantized GEMV on CUDA when the left operand
(the matrix) is not contiguous. So, for now, we also create wv_b
during model loading and use that instead of the 3D view of wkv_b.
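
A hedged sketch of the workaround's shape (not the actual loader code):
quantized GEMV kernels expect the matrix rows packed back to back, while a
3D view into wkv_b leaves gaps between the per-head blocks, so a one-time
contiguous copy at load time (the materialized wv_b) sidesteps the kernel
limitation.

    // Hedged illustration: gather rows that sit at a stride in the source
    // into a tightly packed destination, once, at model load time.
    #include <cstdint>
    #include <cstring>

    void copy_rows_contiguous(const uint8_t* src, uint8_t* dst, int64_t n_rows,
                              int64_t bytes_per_row, int64_t src_stride_bytes) {
        for (int64_t r = 0; r < n_rows; ++r)
            std::memcpy(dst + r * bytes_per_row, src + r * src_stride_bytes,
                        bytes_per_row);
    }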

* FlashMLA-2: avoid conversions to f32 also on CUDA

* Be able to compute for more than 65535 tokens

On CUDA this is just a quick hack that allows us to concatenate tensors
with more than 65535 rows along the zeroth dimension, as needed by
FlashMLA-2. Some care was also needed in the perplexity tool to
avoid int overflows when evaluating the computed logits.
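
A hedged sketch of one way to lift the limit (the in-tree kernel may well
differ): CUDA caps gridDim.y and gridDim.z at 65535, so mapping one row to
one block index breaks down for long contexts; a flat grid-stride loop with
64-bit indexing removes both that cap and the 32-bit overflow in one go.

    // Hedged sketch, not ggml's kernel: concatenate a [n_rows, ne0a] and a
    // [n_rows, ne0b] tensor along the zeroth (contiguous) dimension.
    #include <cstdint>

    __global__ void concat_dim0_f32(const float* a, const float* b, float* dst,
                                    int64_t ne0a, int64_t ne0b, int64_t n_rows) {
        const int64_t ne0 = ne0a + ne0b;
        const int64_t n   = ne0 * n_rows; // may exceed INT32_MAX: keep it 64-bit
        for (int64_t i = (int64_t)blockIdx.x * blockDim.x + threadIdx.x; i < n;
             i += (int64_t)gridDim.x * blockDim.x) {
            const int64_t row = i / ne0;
            const int64_t col = i - row * ne0;
            dst[i] = col < ne0a ? a[row * ne0a + col] : b[row * ne0b + (col - ne0a)];
        }
    }
    // Launch with a capped 1-D grid; the stride loop covers whatever is left:
    //   concat_dim0_f32<<<65535, 256>>>(a, b, dst, ne0a, ne0b, n_rows);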

* Reduce memory usage for FlashMLA-2

Oh, also fix int overflow in the CUDA concat implementation.

It is funny how the llama.cpp 64-bit police have gone (almost) everywhere
and replaced 32-bit ints with 64-bit ints, needed or not,
but haven't done it where it is actually needed.
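
The overflow class in question, as a minimal illustration (the numbers are
assumed, not taken from the perplexity tool):

    // i * n_vocab exceeds INT32_MAX long before either factor does.
    #include <cstdint>
    #include <cstdio>

    int main() {
        const int64_t n_vocab = 129280; // assumed DeepSeek-sized vocabulary
        const int64_t i       = 20000;  // token position within the evaluation
        const int64_t offset  = i * n_vocab;     // correct 64-bit logits offset
        const int32_t wrapped = (int32_t)offset; // what 32-bit math would yield
        std::printf("64-bit: %lld  wrapped 32-bit: %d\n",
                    (long long)offset, wrapped);
        return 0;
    }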

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2025-03-18 07:36:42 +01:00
baby-llama Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
batched Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
batched-bench MoE fix for R4 quants (#170) 2025-01-12 13:19:14 +02:00
batched.swift Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
benchmark build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
convert-llama2c-to-ggml build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
cvector-generator Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
deprecation-warning Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
embedding Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
eval-callback Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
export-lora Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
gbnf-validator Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
gguf Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
gguf-hash Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
gguf-split build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
gritlm llama : allow pooled embeddings on any model (#7477) 2024-06-21 08:38:22 +03:00
imatrix DeepSeek imatrix stuff (#250) 2025-03-10 16:19:09 +02:00
infill Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
jeopardy build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
llama-bench SER - Smart Expert Reduction (#239) 2025-03-02 13:47:38 +02:00
llama.android Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
llama.swiftui Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
llava Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
lookahead Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
lookup Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
main Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
main-cmake-pkg Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
parallel Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
passkey Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
perplexity FlashMLA-2: reduce compute buffer size (CUDA and CPU) (#260) 2025-03-18 07:36:42 +01:00
quantize Custom quantization rules with regular expressions (#244) 2025-03-07 08:54:09 +02:00
quantize-stats Adding IQ4_KSS: 4.0 bpw quants (#89) 2024-10-16 15:18:26 +03:00
retrieval Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
rpc Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
save-load-state Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
server Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
simple Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
speculative Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
sweep-bench Add new sweep-bench benchmark (#225) 2025-02-23 00:16:27 -06:00
sycl Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
tokenize Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
base-translate.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
chat-13B.bat Create chat-13B.bat (#592) 2023-03-29 20:21:09 +03:00
chat-13B.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
chat-persistent.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
chat-vicuna.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
chat.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
CMakeLists.txt Add new sweep-bench benchmark (#225) 2025-02-23 00:16:27 -06:00
convert_legacy_llama.py Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
json_schema_pydantic_example.py Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
json_schema_to_grammar.py Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
llama.vim llama.vim : added api key support (#5090) 2024-01-23 08:51:27 +02:00
llm.vim llm.vim : stop generation at multiple linebreaks, bind to <F2> (#2879) 2023-08-30 09:50:55 +03:00
Miku.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
pydantic_models_to_grammar_examples.py Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
pydantic_models_to_grammar.py Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
reason-act.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
regex_to_grammar.py Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
server_embd.py Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
server-llama2-13B.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
ts-type-to-grammar.sh JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) 2024-04-12 19:43:38 +01:00