llama.cpp/src
Richard Davison 5eae9cb1d9
ggml : add NVFP4 quantization type support (#19769)
* WIP: add NVFP4 quantization support

* tests

* improve NVFP4 dot product implementation performance and fix bad super call

* typo

* Use nvfp4 kvalues

* vulkan : fix NVFP4 shader compilation by including kvalues_mxfp4 lookup table

* vulkan and perf fixes

* wip

* Fix metal

* fix vulkan

* Rename threshold & fix wrong scale

* Fix MoE

* Shelf backend implementations (CUDA, Metal, Vulkan, arch-specific SIMD)

Remove NVFP4 support from GPU backends and architecture-specific
optimized dot products. These should be added in separate PRs so
backend specialists can review them independently.

Reverted files:
- ggml-cuda: common.cuh, convert.cu, mmq.cu/cuh, mmvq.cu, vecdotq.cuh,
  quantize.cu/cuh, mma.cuh, ggml-cuda.cu, fattn-tile.cuh
- ggml-metal: ggml-metal.metal, ggml-metal-device.cpp, ggml-metal-impl.h,
  ggml-metal-ops.cpp
- ggml-vulkan: ggml-vulkan.cpp, all vulkan-shaders/*
- ggml-cpu arch: arm/quants.c, x86/quants.c, powerpc/quants.c, s390/quants.c

Core NVFP4 support (type definition, CPU fallback dot product,
quantization, dequantization, conversion) is retained.

* Fix arch-fallback.h: add NVFP4 generic fallback for all platforms

After shelving backend-specific SIMD implementations, the generic
CPU dot product needs to be aliased on ARM, x86, PowerPC, and s390
platforms that previously relied on arch-specific versions.
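
A minimal sketch of the aliasing pattern this refers to (the macro name, mapping direction, and per-arch guards in arch-fallback.h are assumptions here, not copied from the header): on an architecture without a dedicated NVFP4 kernel, the public dot-product symbol is made to resolve to the generic C implementation.

```c
// Hypothetical fallback alias in the spirit of ggml-cpu/arch-fallback.h;
// exact macro names and direction are assumed, not taken from the real file.
#define ggml_vec_dot_nvfp4_q8_0 ggml_vec_dot_nvfp4_q8_0_generic
```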

* quantize: add NVFP4 as a quantization type option

* Fix ggml_fp32_to_ue4m3: handle subnormal values

Previously, values with ue4m3_exp <= 0 were clamped to 0, causing
all small scales to underflow. This made NVFP4 quantization via
llama-quantize produce garbage (PPL = 5.8M) since typical transformer
weights have amax/6.0 in the range 0.001-0.01, which falls in the
UE4M3 subnormal range.

Now subnormals are properly encoded as man * 2^-9 (exp=0, man=1..7),
matching the decode path in ggml_ue4m3_to_fp32.

Result: NVFP4 requantization now produces PPL = 15.25 (vs F16 = 14.33),
comparable to Q4_1 (PPL = 15.81) at slightly lower BPW (4.70 vs 5.15).
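
For illustration, a minimal fp32 -> UE4M3 encoder with the subnormal path described above. The layout (4 exponent bits, 3 mantissa bits, bias 7, no sign bit) and the rounding/clamping behaviour are assumptions for this sketch; it is not the actual ggml_fp32_to_ue4m3.

```c
#include <math.h>
#include <stdint.h>

// Sketch of an fp32 -> UE4M3 encoder with a subnormal path.
// Assumed layout: 4 exponent bits, 3 mantissa bits, bias 7, no sign bit.
static uint8_t fp32_to_ue4m3_sketch(float x) {
    if (!(x > 0.0f)) {
        return 0;                              // zero, negatives, NaN -> 0
    }

    int e;
    const float m = frexpf(x, &e);             // x = m * 2^e, m in [0.5, 1)
    int exp = (e - 1) + 7;                     // biased exponent of 1.xxx * 2^(e-1)

    if (exp >= 1) {                            // normal: (1 + man/8) * 2^(exp-7)
        int man = (int) roundf((2.0f*m - 1.0f) * 8.0f);
        if (man == 8) { man = 0; exp++; }      // mantissa rounded up into next exponent
        if (exp > 15) { exp = 15; man = 7; }   // clamp to the largest code
        return (uint8_t) ((exp << 3) | man);
    }

    // subnormal: value = man * 2^-9 (exp field = 0, man = 1..7)
    const int man = (int) roundf(x * 512.0f);  // 512 = 2^9
    if (man >= 8) {
        return 0x08;                           // rounds up into the smallest normal
    }
    return (uint8_t) man;                      // man == 0 still underflows to zero
}
```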

* Restore ARM NEON NVFP4 dot product implementation

Restores the optimized ggml_vec_dot_nvfp4_q8_0 for ARM NEON using
vqtbl1q_s8 lookup and ggml_vdotq_s32 dot products.

tg128 performance: 4.37 t/s (generic) -> 13.66 t/s (NEON) = 3.1x speedup

* Optimize ARM NEON NVFP4 dot product: LUT + vpaddq + vfmaq

- Add ue4m3_scale_lut[128] to ggml-common.h replacing branch-heavy
  ggml_ue4m3_to_fp32() in the hot loop
- Use vpaddq_s32 for pairwise int32 reduction instead of vaddvq_s32
- Accumulate with vfmaq_f32 into float32x4_t vector accumulators

tg128: 8.1 -> 31.0 t/s (3.8x speedup, 77% of Q4_1 speed)
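
The 128-entry scale LUT is effectively a precomputed UE4M3 decode table. A hedged sketch of how such a table could be generated (field widths and bias assumed, consistent with the man * 2^-9 subnormal decode above; the actual ue4m3_scale_lut in ggml-common.h is a precomputed constant table):

```c
#include <math.h>

// Sketch: fill a 128-entry UE4M3 -> fp32 decode table
// (assumed: 4 exponent bits, 3 mantissa bits, bias 7).
static float ue4m3_scale_lut_sketch[128];

static void init_ue4m3_scale_lut(void) {
    for (int i = 0; i < 128; ++i) {
        const int exp = (i >> 3) & 0x0F;  // 4 exponent bits
        const int man =  i       & 0x07;  // 3 mantissa bits
        ue4m3_scale_lut_sketch[i] = (exp == 0)
            ? (float) man * ldexpf(1.0f, -9)                      // subnormal: man * 2^-9
            : (1.0f + (float) man/8.0f) * ldexpf(1.0f, exp - 7);  // normal
    }
}
```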

* ARM NEON NVFP4: rearrange q8 to match nibble layout

Alternative approach: rearrange q8 data to match the NVFP4 lo/hi
nibble layout instead of rearranging the looked-up NVFP4 values.
Eliminates vcombine_s8(vget_low, vget_low) shuffles.

Performance is equivalent (~18.5 t/s) - the bottleneck is the 2x
block overhead from QK=16 vs QK=32, not the shuffle instructions.
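
To make the lookup-and-dot structure concrete, here is a simplified NEON sketch for a single 16-value block. The table contents, data layout, and helper name are assumptions; the real ggml_vec_dot_nvfp4_q8_0 iterates over q8_0 blocks, folds the x2 scaling of the code-point table into the combined scale, and handles the super-block stride.

```c
#include <arm_neon.h>
#include <stdint.h>

// E2M1 code points scaled to small integers so they fit in int8 lanes
// (values/order assumed; the real code uses the kvalues table in ggml-common.h).
static const int8_t kvalues_fp4_sketch[16] = {
    0, 1, 2, 3, 4, 6, 8, 12, 0, -1, -2, -3, -4, -6, -8, -12,
};

// qs : 8 bytes = 16 packed 4-bit codes (low nibbles first, then high nibbles)
// q8 : 16 int8 activations, pre-arranged to match the lo/hi nibble split
// d  : combined scale (decoded UE4M3 block scale x q8 scale, x2 factor folded in)
static inline float32x4_t nvfp4_block_fma_sketch(float32x4_t acc,
                                                 const uint8_t * qs,
                                                 const int8_t  * q8,
                                                 float           d) {
    const int8x16_t lut    = vld1q_s8(kvalues_fp4_sketch);
    const uint8x8_t packed = vld1_u8(qs);

    // split nibbles and translate the 4-bit codes to int8 values via vqtbl1q_s8
    const uint8x16_t idx = vcombine_u8(vand_u8(packed, vdup_n_u8(0x0F)),
                                       vshr_n_u8(packed, 4));
    const int8x16_t  v   = vqtbl1q_s8(lut, idx);
    const int8x16_t  a   = vld1q_s8(q8);

    // int8 dot product into int32 lanes; requires the dotprod extension
    // (this is what the ggml_vdotq_s32 helper mentioned above wraps)
    const int32x4_t sum = vdotq_s32(vdupq_n_s32(0), v, a);

    // scale and accumulate in fp32 vector accumulators
    return vfmaq_f32(acc, vdupq_n_f32(d), vcvtq_f32_s32(sum));
}
```

With QK=16, two such blocks cover one 32-value q8_0 block, which is the 2x block overhead mentioned above.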

* CPU-only backend: 64 super-block layout

* cleanup

* Remove unused LUT

* int

* exclude NVFP4 from unsupported ops in metal build

* remove quantization for now

* store scales as native UE4M3, preserve original model bits when possible

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* correct comment

* format

* reduce duplication and cleanup

* Address comments

* move detection to prepare_tensors

* Use math instead of const

* Move

* fix comment

* Shelf quantize tests

* Rebase and move check

* cleanup

* lint

* Update gguf-py/gguf/scripts/gguf_convert_endian.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Use fallback quant config

* Simplify

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* organize

* Refactor

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* add quantize_nvfp4 (required for test_quants.py)

* add quantize_nvfp4 (required for test_quants.py)

* add quantize_nvfp4 (required for test_quants.py)

* fix return type

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2026-03-11 21:02:54 +01:00
models ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
CMakeLists.txt model : add Jina Embeddings v5 Nano (partial EuroBERT) support (#19826) 2026-02-26 12:14:09 +01:00
llama-adapter.cpp lora: make sure model keep track of associated adapters (#18490) 2026-01-15 10:24:28 +01:00
llama-adapter.h graph : fix KQ mask, lora, cvec reuse checks (#19644) 2026-02-16 09:21:11 +02:00
llama-arch.cpp llama : add support for Nemotron 3 Super (#20411) 2026-03-11 19:27:53 +01:00
llama-arch.h llama : add support for Nemotron 3 Super (#20411) 2026-03-11 19:27:53 +01:00
llama-batch.cpp kv-cache : fix M-RoPE checkpoints (#20132) 2026-03-06 08:46:51 +02:00
llama-batch.h batch : fix sequence id ownership (#17915) 2025-12-11 14:29:47 +02:00
llama-chat.cpp docs : Minor cleanups (#19252) 2026-02-02 08:38:55 +02:00
llama-chat.h model : add EXAONE MoE (#18543) 2026-01-13 23:28:38 +01:00
llama-context.cpp llama: dynamic head_dim and n_rot for SWA (#20301) 2026-03-09 22:22:39 +01:00
llama-context.h graph : fix KQ mask, lora, cvec reuse checks (#19644) 2026-02-16 09:21:11 +02:00
llama-cparams.cpp cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188) 2025-06-15 10:08:58 +03:00
llama-cparams.h ggml: add GATED_DELTA_NET op (#19504) 2026-03-07 15:41:10 +08:00
llama-grammar.cpp common : fix incorrect uses of stoul (#20313) 2026-03-10 11:40:26 +01:00
llama-grammar.h common/grammar : replace problematic backtracking regex [\s\S]* (#18342) 2026-01-03 16:02:43 -06:00
llama-graph.cpp ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
llama-graph.h ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
llama-hparams.cpp llama: dynamic head_dim and n_rot for SWA (#20301) 2026-03-09 22:22:39 +01:00
llama-hparams.h llama : add support for Nemotron 3 Super (#20411) 2026-03-11 19:27:53 +01:00
llama-impl.cpp impl : use 6 digits for tensor dims (#20094) 2026-03-04 09:53:38 +01:00
llama-impl.h ggml: add GATED_DELTA_NET op (#19504) 2026-03-07 15:41:10 +08:00
llama-io.cpp llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) 2025-03-13 12:35:44 +02:00
llama-io.h llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) 2025-03-13 12:35:44 +02:00
llama-kv-cache-iswa.cpp model : support Step3.5-Flash (#19283) 2026-02-06 21:06:14 +01:00
llama-kv-cache-iswa.h llama: print memory breakdown on exit (#15860) 2025-09-24 16:53:48 +02:00
llama-kv-cache.cpp llama: dynamic head_dim and n_rot for SWA (#20301) 2026-03-09 22:22:39 +01:00
llama-kv-cache.h llama: dynamic head_dim and n_rot for SWA (#20301) 2026-03-09 22:22:39 +01:00
llama-kv-cells.h llama: store mrope data in KV cell (#16825) 2025-10-29 18:09:18 +01:00
llama-memory-hybrid-iswa.cpp memory : add llama_memory_hybrid_iswa (#18601) 2026-01-21 14:30:23 +02:00
llama-memory-hybrid-iswa.h memory : add llama_memory_hybrid_iswa (#18601) 2026-01-21 14:30:23 +02:00
llama-memory-hybrid.cpp graph : reuse SSM graphs (#16490) 2025-12-16 09:36:21 +02:00
llama-memory-hybrid.h llama: print memory breakdown on exit (#15860) 2025-09-24 16:53:48 +02:00
llama-memory-recurrent.cpp server : support multi-modal context checkpoints (#19849) 2026-02-25 15:14:27 +02:00
llama-memory-recurrent.h llama: consistent ctx <-> buf order for KV cache (#16746) 2025-10-28 11:23:54 +01:00
llama-memory.cpp memory : correctly handle failure in apply() (#14438) 2025-06-30 18:03:03 +03:00
llama-memory.h llama: print memory breakdown on exit (#15860) 2025-09-24 16:53:48 +02:00
llama-mmap.cpp mmap: Fix Windows handle lifetime (#19598) 2026-02-14 10:05:12 +02:00
llama-mmap.h llama : add use_direct_io flag for model loading (#18166) 2026-01-08 08:35:30 +02:00
llama-model-loader.cpp ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
llama-model-loader.h llama: end-to-end tests (#19802) 2026-03-08 12:30:21 +01:00
llama-model-saver.cpp llama: dynamic head_dim and n_rot for SWA (#20301) 2026-03-09 22:22:39 +01:00
llama-model-saver.h llama: end-to-end tests (#19802) 2026-03-08 12:30:21 +01:00
llama-model.cpp ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
llama-model.h ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
llama-quant.cpp llama-quant : correct n_attention_wv usage (#20357) 2026-03-10 21:43:29 +02:00
llama-quant.h llama : refactor src/llama.cpp (#10902) 2025-01-03 10:18:53 +02:00
llama-sampler.cpp llama : rename llama-sampling to llama-sampler (#19363) 2026-02-06 07:26:54 +01:00
llama-sampler.h llama : rename llama-sampling to llama-sampler (#19363) 2026-02-06 07:26:54 +01:00
llama-vocab.cpp llama: end-to-end tests (#19802) 2026-03-08 12:30:21 +01:00
llama-vocab.h model : add JAIS-2 architecture support (#19488) 2026-02-19 13:30:17 +01:00
llama.cpp llama: end-to-end tests (#19802) 2026-03-08 12:30:21 +01:00
unicode-data.cpp server : better security control for public deployments (#9776) 2024-10-08 13:27:04 +02:00
unicode-data.h llama : reduce compile time and binary size (#9712) 2024-10-02 15:49:55 +02:00
unicode.cpp chore : correct typos [no ci] (#20041) 2026-03-05 08:50:21 +01:00
unicode.h devops: add s390x & ppc64le CI (#15925) 2025-09-27 02:03:33 +08:00