mirror of https://github.com/ggerganov/llama.cpp
synced 2026-03-07 07:39:25 +01:00
support PaddleOCR-VL

* clip: update PaddleOCR model loader parameters to prevent OOM during warmup
* [update] add PaddleOCR-VL text model instead of ernie4.5
* [update] restore change of minicpmv
* [update] format
* [update] format
* [update] positions and patch merge permute
* [update] mtmd_decode_use_mrope for paddleocr
* [update] image min/max pixels
* [update] remove set_limit_image_tokens
* update: preprocess without padding
* clean up
* Update convert_hf_to_gguf.py

  Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Update convert_hf_to_gguf.py

  Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
- batched-bench
- cli
- completion
- cvector-generator
- export-lora
- fit-params
- gguf-split
- imatrix
- llama-bench
- mtmd
- perplexity
- quantize
- rpc
- server
- tokenize
- tts
- CMakeLists.txt