llama.cpp/src
manayang 7bfe60fdf9
mtmd, llama : Update HunyuanVL vision-language model support (#22037)
* mtmd, llama : add HunyuanVL vision-language model support

- add LLM_ARCH_HUNYUAN_VL with M-RoPE (XD-RoPE) support
- add PROJECTOR_TYPE_HUNYUANVL with PatchMerger vision encoder
- add HunyuanVL-specific M-RoPE position encoding for image tokens (see the sketch after this list)
- add GGUF conversion for HunyuanVL vision and text models
- add smoke test in tools/mtmd/tests.sh
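
The M-RoPE (XD-RoPE) scheme above splits the rotary position into sections, so each image token carries a (temporal, height, width) position triple instead of a single scalar index. Below is a minimal illustrative Python sketch of that position assignment for a text prefix followed by an image patch grid; this is not the llama.cpp implementation, and the function name and layout are hypothetical:

```python
def mrope_positions(n_text: int, grid_h: int, grid_w: int) -> list[tuple[int, int, int]]:
    """Illustrative (t, h, w) position triples for n_text text tokens
    followed by a grid_h x grid_w image patch grid."""
    pos: list[tuple[int, int, int]] = []
    # Text tokens use the same scalar index on all three sections, so the
    # scheme reduces to standard RoPE for text-only input.
    for i in range(n_text):
        pos.append((i, i, i))
    # Image patches share a single temporal position; the h/w sections
    # index the patch grid (their order is what the follow-up fix below
    # corrects).
    for r in range(grid_h):
        for c in range(grid_w):
            pos.append((n_text, n_text + r, n_text + c))
    return pos

# e.g. a 2x2 patch grid after 3 text tokens
print(mrope_positions(3, 2, 2))
```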

* fix: correct HunyuanVL XD-RoPE h/w section order

* fix: remove redundant code

* convert : fix HunyuanOCR / HunyuanVL conversion
 - Tested locally: both HunyuanOCR and HunyuanVL-4B convert to GGUF successfully and produce correct inference output on Metal (F16 / Q8_0).

* clip : fix -Werror=misleading-indentation in bilinear resize

* fix CI: convert_hf_to_gguf type check error
 - convert_hf_to_gguf.py: give HunyuanVLTextModel.__init__ an explicit `dir_model: Path` parameter so the `ty` type checker can infer the type passed to load_hparams instead of reporting `Unknown | None`.
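
A self-contained sketch of the shape of that fix; the TextModel stub is an assumption standing in for the real convert_hf_to_gguf.py base class, not its actual code:

```python
from pathlib import Path
from typing import Any

class TextModel:
    """Stub for the convert_hf_to_gguf.py base class (assumption)."""
    def __init__(self, dir_model: Path, *args: Any, **kwargs: Any) -> None:
        self.dir_model = dir_model

class HunyuanVLTextModel(TextModel):
    def __init__(self, dir_model: Path, *args: Any, **kwargs: Any) -> None:
        # The explicit `dir_model: Path` annotation is the fix: without it,
        # the checker saw only *args/**kwargs and reported `Unknown | None`
        # for the value forwarded to load_hparams.
        super().__init__(dir_model, *args, **kwargs)
```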

---------

Co-authored-by: wendadawen <wendadawen@tencent.com>
2026-04-22 11:58:43 +02:00
models mtmd, llama : Update HunyuanVL vision-language model support (#22037) 2026-04-22 11:58:43 +02:00
CMakeLists.txt cmake: use glob to collect src/models sources (#22005) 2026-04-16 23:25:16 +02:00
llama-adapter.cpp fix: correct misspellings in code comments (#21217) 2026-03-31 13:50:51 +02:00
llama-adapter.h llama : re-enable manual LoRA adapter free (#19983) 2026-03-18 12:03:26 +02:00
llama-arch.cpp mtmd, llama : Update HunyuanVL vision-language model support (#22037) 2026-04-22 11:58:43 +02:00
llama-arch.h mtmd, llama : Update HunyuanVL vision-language model support (#22037) 2026-04-22 11:58:43 +02:00
llama-batch.cpp kv-cache : fix M-RoPE checkpoints (#20132) 2026-03-06 08:46:51 +02:00
llama-batch.h fix: correct misspellings in code comments (#21217) 2026-03-31 13:50:51 +02:00
llama-chat.cpp model : add HunyuanOCR support (#21395) 2026-04-05 23:32:14 +02:00
llama-chat.h model : add HunyuanOCR support (#21395) 2026-04-05 23:32:14 +02:00
llama-context.cpp fit-params : refactor + add option to output estimated memory per device (#22171) 2026-04-21 09:54:36 +03:00
llama-context.h fit-params : refactor + add option to output estimated memory per device (#22171) 2026-04-21 09:54:36 +03:00
llama-cparams.cpp cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188) 2025-06-15 10:08:58 +03:00
llama-cparams.h llama : enable chunked fused GDN path (#20340) 2026-03-11 22:46:40 +02:00
llama-ext.h llama-ext : fix exports (#22202) 2026-04-21 11:04:46 +03:00
llama-grammar.cpp common/grammar: fix grammar parsing issues to prevent stack overflow and hangs (#18604) 2026-03-21 18:43:35 +01:00
llama-grammar.h common/grammar : replace problematic backtracking regex [\s\S]* (#18342) 2026-01-03 16:02:43 -06:00
llama-graph.cpp model : refactor bias tensor variable names (#22079) 2026-04-18 20:12:00 +02:00
llama-graph.h model : refactor QKV into common build_qkv and create_tensor_qkv helpers (#21245) 2026-04-16 17:41:34 +02:00
llama-hparams.cpp llama: dynamic head_dim and n_rot for SWA (#20301) 2026-03-09 22:22:39 +01:00
llama-hparams.h mtmd, llama : Update HunyuanVL vision-language model support (#22037) 2026-04-22 11:58:43 +02:00
llama-impl.cpp llama : correct platform-independent loading of BOOL metadata (#21428) 2026-04-06 01:40:38 +02:00
llama-impl.h llama : enable chunked fused GDN path (#20340) 2026-03-11 22:46:40 +02:00
llama-io.cpp llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) 2025-03-13 12:35:44 +02:00
llama-io.h llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) 2025-03-13 12:35:44 +02:00
llama-kv-cache-iswa.cpp (revert) kv-cache : do not quantize SWA KV cache (#21332) 2026-04-03 09:07:01 +03:00
llama-kv-cache-iswa.h llama: print memory breakdown on exit (#15860) 2025-09-24 16:53:48 +02:00
llama-kv-cache.cpp kv-cache : support attention rotation for heterogeneous iSWA (#21513) 2026-04-07 20:31:28 +03:00
llama-kv-cache.h kv-cache : support attention rotation for heterogeneous iSWA (#21513) 2026-04-07 20:31:28 +03:00
llama-kv-cells.h llama: store mrope data in KV cell (#16825) 2025-10-29 18:09:18 +01:00
llama-memory-hybrid-iswa.cpp memory: respect unified KV cache in hybrid memory for eval tasks (#21224) 2026-04-01 12:50:17 +03:00
llama-memory-hybrid-iswa.h memory : add llama_memory_hybrid_iswa (#18601) 2026-01-21 14:30:23 +02:00
llama-memory-hybrid.cpp memory: respect unified KV cache in hybrid memory for eval tasks (#21224) 2026-04-01 12:50:17 +03:00
llama-memory-hybrid.h llama: print memory breakdown on exit (#15860) 2025-09-24 16:53:48 +02:00
llama-memory-recurrent.cpp ggml: backend-agnostic tensor parallelism (experimental) (#19378) 2026-04-09 16:42:19 +02:00
llama-memory-recurrent.h llama: consistent ctx <-> buf order for KV cache (#16746) 2025-10-28 11:23:54 +01:00
llama-memory.cpp memory : correctly handle failure in apply() (#14438) 2025-06-30 18:03:03 +03:00
llama-memory.h llama: print memory breakdown on exit (#15860) 2025-09-24 16:53:48 +02:00
llama-mmap.cpp llama: fix llama-model-saver (#20503) 2026-03-25 12:53:16 +02:00
llama-mmap.h llama: fix llama-model-saver (#20503) 2026-03-25 12:53:16 +02:00
llama-model-loader.cpp ggml: add Q1_0 1-bit quantization support (CPU) (#21273) 2026-04-06 20:55:21 +02:00
llama-model-loader.h llama: fix llama-model-saver (#20503) 2026-03-25 12:53:16 +02:00
llama-model-saver.cpp llama: fix llama-model-saver (#20503) 2026-03-25 12:53:16 +02:00
llama-model-saver.h llama: fix llama-model-saver (#20503) 2026-03-25 12:53:16 +02:00
llama-model.cpp mtmd, llama : Update HunyuanVL vision-language model support (#22037) 2026-04-22 11:58:43 +02:00
llama-model.h model : refactor bias tensor variable names (#22079) 2026-04-18 20:12:00 +02:00
llama-quant.cpp ggml: add Q1_0 1-bit quantization support (CPU) (#21273) 2026-04-06 20:55:21 +02:00
llama-quant.h llama : refactor src/llama.cpp (#10902) 2025-01-03 10:18:53 +02:00
llama-sampler.cpp llama : rename llama-sampling to llama-sampler (#19363) 2026-02-06 07:26:54 +01:00
llama-sampler.h llama : rename llama-sampling to llama-sampler (#19363) 2026-02-06 07:26:54 +01:00
llama-vocab.cpp vocab: add gemma4 tokenizer tests, fix edge case (#21534) 2026-04-09 11:41:14 +02:00
llama-vocab.h vocab: fix Gemma4 tokenizer (#21343) 2026-04-03 10:33:03 +02:00
llama.cpp fit-params : refactor + add option to output estimated memory per device (#22171) 2026-04-21 09:54:36 +03:00
unicode-data.cpp server : better security control for public deployments (#9776) 2024-10-08 13:27:04 +02:00
unicode-data.h llama : reduce compile time and binary size (#9712) 2024-10-02 15:49:55 +02:00
unicode.cpp unicode : add custom Qwen2 regex handler to fix segfault on long input (#21257) 2026-04-07 16:13:38 +03:00
unicode.h vocab: fix Gemma4 tokenizer (#21343) 2026-04-03 10:33:03 +02:00