Mirror of https://github.com/ggerganov/llama.cpp (synced 2026-03-29 03:15:32 +02:00)
Latest commit (squashed):

* minor : code style
* server : fix prompt similarity calculation
* server : initial host-memory prompt caching
* cont
* server : refactor
* cont
* cont : make the server task of the slot const
* cont : minor [no ci]
* server : cache prompts and checkpoints only for completion tasks
* server : improve prompt caching logic
* cont : fix check for number of cached prompts [no ci]
* server : improve caching logic, add -cram CLI arg
* server : print prompt mismatch info
* cont : better naming [no ci]
* server : improve prompt cache loading logic
* server : add option to debug the slot contents (#16482)
  * server : add option to debug the slot contents
  * Update tools/server/server.cpp
  Co-authored-by: Xuan-Son Nguyen <son@huggingface.co>
* server : add option to disable prompt cache

Co-authored-by: Xuan-Son Nguyen <son@huggingface.co>
Files:

- arg.cpp
- arg.h
- base64.hpp
- build-info.cpp.in
- chat-parser.cpp
- chat-parser.h
- chat.cpp
- chat.h
- CMakeLists.txt
- common.cpp
- common.h
- console.cpp
- console.h
- http.h
- json-partial.cpp
- json-partial.h
- json-schema-to-grammar.cpp
- json-schema-to-grammar.h
- llguidance.cpp
- log.cpp
- log.h
- ngram-cache.cpp
- ngram-cache.h
- regex-partial.cpp
- regex-partial.h
- sampling.cpp
- sampling.h
- speculative.cpp
- speculative.h