ik_llama.cpp/examples
Kawrakow a0ebfdd661
Q8_KV: 8-bit quantization type targeting the KV cache (#208)
* Adding q8_KV - Basics + AVX2 gemm/gemv
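
A minimal sketch of what an 8-bit type with a single per-row scale could look like (names and layout are illustrative assumptions, not the actual ik_llama.cpp definitions):

```cpp
// Illustrative q8_KV-style row quantization: one float scale per row
// followed by n int8 values. Names and layout are assumptions.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct RowQ8 {
    float scale;              // single per-row scale
    std::vector<int8_t> q;    // n quantized values
};

RowQ8 quantize_row_q8(const float * x, int n) {
    float amax = 0.0f;
    for (int i = 0; i < n; ++i) amax = std::max(amax, std::fabs(x[i]));
    RowQ8 row;
    row.scale = amax / 127.0f;
    row.q.resize(n);
    const float id = amax > 0 ? 127.0f / amax : 0.0f;
    for (int i = 0; i < n; ++i) row.q[i] = (int8_t)std::lroundf(x[i] * id);
    return row;
}
```

Unlike q8_0, which stores a scale for every 32-value block, a single per-row scale keeps all metadata out of the inner GEMM loop.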

* q8_KV: Better AVX2 gemm

* q8_KV: Better Zen4 gemm

We get 225.7 t/s for L3-8B. In comparison, q8_0 without
run-time repacking is at 169 t/s.

* q8_KV: AVX2 gemm/gemv

We get 254 t/s for L3-8B vs 194 t/s for q8_0 without rtr.

* q8_KV: be able to use it for K cache

This required quite a few fixes in ggml and llama.cpp:
* ggml: do not calculate the row size as n/block_size*type_size. I had
  removed most of this when implementing the quants with a per-row scale,
  but it was still lurking in ggml_copy. Not sure if these were the last
  remnants of ggml-style row sizes, or if there are still places left
  (see the sketch after this list).
* llama.cpp: get rid of the 1D K-cache assumption. Create and manage
  the K cache as a 2D tensor so we can have per-row metadata as needed
  by q8_KV.
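
A sketch of why the ggml-style formula breaks for a per-row-scale type
(byte counts are illustrative; the correct path is a helper like
ggml_row_size(type, n) rather than the naive arithmetic):

```cpp
// Why n/block_size*type_size under-counts for a per-row-scale type.
// All byte counts here are illustrative assumptions.
#include <cstddef>

constexpr size_t kBlockSize = 32;  // values per block
constexpr size_t kTypeSize  = 34;  // bytes per block (32 quants + block scale)
constexpr size_t kRowMeta   = 4;   // extra per-row scale bytes (q8_KV-like)

// Correct only when a row is an integer number of self-contained blocks.
size_t naive_row_size(size_t n) { return n / kBlockSize * kTypeSize; }

// A per-row-scale 8-bit type also carries a row header, so anything
// (like ggml_copy) still using the naive formula truncates the row.
size_t per_row_scale_row_size(size_t n) { return kRowMeta + n; }
```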

Using q8_KV for the K cache results in non-negligible performance gains.
More details to follow, but for DeepSeek-Lite with MLA, we get an
18% speedup for PP-8192 compared to a q8_0 K cache.

* q8_KV: be able to use it for K cache in FA

* q8_KV: repack it for K*Q in FA

* q8_KV: slightly faster gemv on Zen4

* q8_KV: ARM_NEON

We get PP-512 = 167 t/s for L3-8B without a pre-interleaved tensor!
We do the interleaving on the fly instead (see the sketch below), so I
wonder if this could be done for other quants as well.
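
On-the-fly interleaving here means the kernel gathers a chunk from each
of several source rows into a contiguous tile as it walks the row,
instead of repacking the whole tensor up front. A rough scalar sketch
(tile shape and chunk size are assumptions):

```cpp
// Gather a 16-value chunk from each of 4 rows into one contiguous tile
// so the dot-product kernel can stream it with consecutive loads.
// Purely illustrative; the real NEON code works in vector registers.
#include <cstdint>

void gather_tile(const int8_t * rows[4], int j, int8_t tile[4 * 16]) {
    for (int r = 0; r < 4; ++r)        // 4 source rows
        for (int k = 0; k < 16; ++k)   // chunk starting at column j
            tile[16 * r + k] = rows[r][j + k];
}
```

The trade-off is doing the shuffles on every GEMM call instead of once
at load time.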

* q8_KV: use it in FA on NEON

* q8_KV_r8 - repacked q8_KV

On Zen4 it is slower than q8_k_r8 (292 vs 370 t/s).
This makes no sense whatsoever, as the q8_KV_r8 GEMM is
basically the q8_k_r8 GEMM with the unnecessary block bookkeeping
removed (so one would think it would be faster).
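
For reference, a hedged sketch of what an "_r8" repack does: groups of
8 consecutive rows are stored interleaved chunk-by-chunk so one
micro-kernel pass covers 8 destination rows (chunk size and exact
layout are assumptions, not the actual q8_KV_r8 format):

```cpp
// Interleave groups of 8 rows chunk-by-chunk. Assumes nrows % 8 == 0
// and n % 16 == 0; int8 payload only, per-row scales omitted.
#include <cstdint>
#include <vector>

std::vector<int8_t> repack_r8(const int8_t * src, int nrows, int n) {
    constexpr int kChunk = 16;                 // illustrative chunk size
    std::vector<int8_t> dst((size_t)nrows * n);
    for (int r0 = 0; r0 < nrows; r0 += 8)      // row group
        for (int j = 0; j < n; j += kChunk)    // chunk along the row
            for (int r = 0; r < 8; ++r)        // row within the group
                for (int k = 0; k < kChunk; ++k)
                    dst[(size_t)r0 * n + (size_t)j * 8 + r * kChunk + k] =
                        src[(size_t)(r0 + r) * n + j + k];
    return dst;
}
```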

* q8_KV_r8: don't use nrc_y = 16 on Zen4

This is faster: 350 t/s. Why?
Much better than the 292 t/s we had before, but still slower
than the 370 t/s for q8_k_r8.
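
nrc_y is the number of right-hand-side rows a micro-kernel accumulates
per pass. A hedged guess at the trade-off (register pressure is one
possible explanation, not established here), in purely illustrative
scalar code:

```cpp
// Larger nrc_y amortizes loads of the left-hand row across more
// accumulators, but every accumulator must stay live; if nrc_y = 16
// exhausts the register file, accumulators spill and a smaller tile
// wins. Illustrative only.
#include <cstdint>

template <int nrc_y>
void gemv_q8(int n, const int8_t * x, const int8_t * const * y, int32_t * acc) {
    for (int r = 0; r < nrc_y; ++r) acc[r] = 0;
    for (int i = 0; i < n; ++i) {
        const int32_t xi = x[i];           // one load serves all nrc_y rows
        for (int r = 0; r < nrc_y; ++r)    // nrc_y live accumulators
            acc[r] += xi * y[r][i];
    }
}
```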

* q8_KV: nrc_y = 16 also doesn't pay off in FA

* Minor

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2025-02-19 11:47:07 +02:00
baby-llama Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
batched Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
batched-bench MoE fix for R4 quants (#170) 2025-01-12 13:19:14 +02:00
batched.swift Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
benchmark build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
convert-llama2c-to-ggml build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
cvector-generator Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
deprecation-warning Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
embedding Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
eval-callback Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
export-lora Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
gbnf-validator Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
gguf Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
gguf-hash Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
gguf-split build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
gritlm llama : allow pooled embeddings on any model (#7477) 2024-06-21 08:38:22 +03:00
imatrix Fix imatrix overprotectiveness (#202) 2025-02-12 07:20:38 +02:00
infill Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
jeopardy build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
llama-bench Q8_KV: 8-bit quantization type targeting the KV cache (#208) 2025-02-19 11:47:07 +02:00
llama.android Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
llama.swiftui Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
llava Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
lookahead Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
lookup Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
main Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
main-cmake-pkg Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
parallel Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
passkey Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
perplexity Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
quantize Q8_KV: 8-bit quantization type targeting the KV cache (#208) 2025-02-19 11:47:07 +02:00
quantize-stats Adding IQ4_KSS: 4.0 bpw quants (#89) 2024-10-16 15:18:26 +03:00
retrieval Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
rpc Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
save-load-state Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
server Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
simple Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
speculative Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
sycl Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
tokenize Merge mainline - Aug 12 2024 (#17) 2024-08-12 15:14:32 +02:00
base-translate.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
chat-13B.bat Create chat-13B.bat (#592) 2023-03-29 20:21:09 +03:00
chat-13B.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
chat-persistent.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
chat-vicuna.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
chat.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
CMakeLists.txt Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
convert_legacy_llama.py Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
json_schema_pydantic_example.py Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
json_schema_to_grammar.py Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
llama.vim llama.vim : added api key support (#5090) 2024-01-23 08:51:27 +02:00
llm.vim llm.vim : stop generation at multiple linebreaks, bind to <F2> (#2879) 2023-08-30 09:50:55 +03:00
Miku.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
pydantic_models_to_grammar_examples.py Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
pydantic_models_to_grammar.py Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
reason-act.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
regex_to_grammar.py Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
server_embd.py Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
server-llama2-13B.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
ts-type-to-grammar.sh JSON schema conversion: faster repetitions, min/maxLength for strings, cap number length (#6555) 2024-04-12 19:43:38 +01:00