llama.cpp/tools
| Directory | Last commit | Date |
|---|---|---|
| batched-bench | tool/ex/tests: consistently free ctx, then model (#18168) | 2025-12-22 11:00:37 +01:00 |
| cli | server : support multiple model aliases via comma-separated --alias (#19926) | 2026-02-27 07:05:23 +01:00 |
| completion | server : support multiple model aliases via comma-separated --alias (#19926) | 2026-02-27 07:05:23 +01:00 |
| cvector-generator | docs : Minor cleanups (#19252) | 2026-02-02 08:38:55 +02:00 |
| export-lora | docs : Minor cleanups (#19252) | 2026-02-02 08:38:55 +02:00 |
| fit-params | llama-fit-params: keep explicit --ctx-size 0 (#19070) | 2026-01-24 22:13:08 +01:00 |
| gguf-split | cli: new CLI experience (#17824) | 2025-12-10 15:28:59 +01:00 |
| imatrix | model : add Jina Embeddings v5 Nano (partial EuroBERT) support (#19826) | 2026-02-26 12:14:09 +01:00 |
| llama-bench | Setting mmap and direct_io to false as default in llama-bench.cpp (#18841) | 2026-01-16 09:46:51 +01:00 |
| mtmd | mtmd : fix padding of n_tokens (#19930) | 2026-02-26 18:39:49 +02:00 |
| perplexity | perplexity: add proper batching (#19661) | 2026-02-16 18:44:44 +02:00 |
| quantize | quantize : add --dry-run option (#19526) | 2026-02-20 09:20:16 +01:00 |
| rpc | NetBSD build support (#19589) | 2026-02-14 09:47:01 +01:00 |
| server | server: Add pragma once to server-context.h (#19944) | 2026-02-27 18:28:36 +01:00 |
| tokenize | cmake : Do not install tools on iOS targets (#15903) | 2025-09-16 09:54:44 +07:00 |
| tts | model : fix wavtokenizer embedding notions (#19479) | 2026-02-11 07:52:20 +02:00 |
| CMakeLists.txt | cmake: only build cli when server is enabled (#18670) | 2026-01-09 16:43:26 +01:00 |