# llama.cpp/examples

Last commit: 2025-03-20 18:20:54 +02:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| batched | Apply suggestions from code review | 2025-03-17 12:17:14 +01:00 |
| batched-bench | Apply suggestions from code review | 2025-03-17 12:17:14 +01:00 |
| batched.swift | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| convert-llama2c-to-ggml | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| cvector-generator | rename to init_from_text | 2025-03-14 22:17:07 +01:00 |
| deprecation-warning | Update deprecation-warning.cpp (#10619) | 2024-12-04 23:19:20 +01:00 |
| embedding | embedding : avoid common_batch | 2025-03-19 14:29:04 +02:00 |
| eval-callback | rename to init_from_text | 2025-03-14 22:17:07 +01:00 |
| export-lora | common : refactor '-o' option (#12278) | 2025-03-10 13:34:13 +02:00 |
| gbnf-validator | Tool call support (generic + native for Llama, Functionary, Hermes, Mistral, Firefunction, DeepSeek) w/ lazy grammars (#9639) | 2025-01-30 19:13:58 +00:00 |
| gen-docs | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| gguf | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| gguf-hash | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| gguf-split | ci : use -no-cnv in gguf-split tests (#11254) | 2025-01-15 18:28:35 +02:00 |
| gritlm | Merge branch 'master' into xsn/private_batch_api | 2025-03-13 15:55:18 +01:00 |
| imatrix | Merge branch 'master' into xsn/private_batch_api | 2025-03-13 15:55:18 +01:00 |
| infill | rename to init_from_text | 2025-03-14 22:17:07 +01:00 |
| jeopardy | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama-bench | rename to init_from_text | 2025-03-14 22:17:07 +01:00 |
| llama.android | android : fix permission | 2025-03-19 10:49:30 +01:00 |
| llama.swiftui | swift : adapt to new API | 2025-03-19 10:48:42 +02:00 |
| llava | rename to init_from_text | 2025-03-14 22:17:07 +01:00 |
| lookahead | fix missing n_past in various places | 2025-03-14 10:47:08 +01:00 |
| lookup | fix missing n_past in various places | 2025-03-14 10:47:08 +01:00 |
| main | Merge branch 'master' into xsn/private_batch_api | 2025-03-18 15:45:22 +01:00 |
| parallel | apply to the rest | 2025-03-13 22:36:27 +01:00 |
| passkey | apply to the rest | 2025-03-13 22:36:27 +01:00 |
| perplexity | perplexity : avoid common_batch | 2025-03-20 12:28:39 +02:00 |
| quantize | ggml : portability fixes for VS 2017 (#12150) | 2025-03-04 18:53:26 +02:00 |
| quantize-stats | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| retrieval | embedding : avoid common_batch | 2025-03-19 14:29:04 +02:00 |
| rpc | rpc-server : add support for the SYCL backend (#10934) | 2024-12-23 10:39:30 +02:00 |
| run | Merge branch 'master' into xsn/private_batch_api | 2025-03-18 15:45:22 +01:00 |
| save-load-state | fix missing n_past in various places | 2025-03-14 10:47:08 +01:00 |
| server | server : remove old commented code [no ci] | 2025-03-20 18:20:54 +02:00 |
| simple | fix missing n_past in various places | 2025-03-14 10:47:08 +01:00 |
| simple-chat | rm redundant llama_batch_ext_set_output_last | 2025-03-13 23:14:16 +01:00 |
| simple-cmake-pkg | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| speculative | speculative : adapt to new llama API | 2025-03-18 22:05:44 +02:00 |
| speculative-simple | rename to init_from_text | 2025-03-14 22:17:07 +01:00 |
| sycl | [SYCL] Optimize mul_mat for Q4_0 on Intel GPU (#12035) | 2025-02-24 22:33:23 +08:00 |
| tokenize | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| tts | Merge branch 'master' into xsn/private_batch_api | 2025-03-18 15:45:22 +01:00 |
| chat-13B.bat | Create chat-13B.bat (#592) | 2023-03-29 20:21:09 +03:00 |
| chat-13B.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat-persistent.sh | scripts : fix pattern and get n_tokens in one go (#10221) | 2024-11-09 09:06:54 +02:00 |
| chat-vicuna.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| CMakeLists.txt | tts : add OuteTTS support (#10784) | 2024-12-18 19:27:21 +02:00 |
| convert_legacy_llama.py | metadata: Detailed Dataset Authorship Metadata (#8875) | 2024-11-13 21:10:38 +11:00 |
| json_schema_pydantic_example.py | py : type-check all Python scripts with Pyright (#8341) | 2024-07-07 15:04:39 -04:00 |
| json_schema_to_grammar.py | tool-call: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) | 2025-03-05 13:05:13 +00:00 |
| llama.vim | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| llm.vim | llm.vim : stop generation at multiple linebreaks, bind to `<F2>` (#2879) | 2023-08-30 09:50:55 +03:00 |
| Miku.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| pydantic_models_to_grammar_examples.py | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| pydantic_models_to_grammar.py | pydantic : replace uses of __annotations__ with get_type_hints (#8474) | 2024-07-14 19:51:21 -04:00 |
| reason-act.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| regex_to_grammar.py | py : switch to snake_case (#8305) | 2024-07-05 07:53:33 +03:00 |
| server_embd.py | py : type-check all Python scripts with Pyright (#8341) | 2024-07-07 15:04:39 -04:00 |
| server-llama2-13B.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| ts-type-to-grammar.sh | JSON schema conversion: faster repetitions, min/maxLength for strings, cap number length (#6555) | 2024-04-12 19:43:38 +01:00 |