Mirror of https://github.com/ggerganov/llama.cpp (synced 2026-03-03 13:50:01 +01:00)
Latest commit:

* save generated text for the /slots endpoint
* update debug_generated_text only when LLAMA_SERVER_SLOTS_DEBUG > 0
* Apply suggestions from code review

Co-authored-by: Matteo <matteo@matteo>
Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>
| Name |
|---|
| batched-bench |
| cli |
| completion |
| cvector-generator |
| export-lora |
| fit-params |
| gguf-split |
| imatrix |
| llama-bench |
| mtmd |
| perplexity |
| quantize |
| rpc |
| server |
| tokenize |
| tts |
| CMakeLists.txt |