Mirror of https://github.com/ggerganov/llama.cpp (synced 2026-03-20)
Commit:

* llama : require first token to be BOS
* scripts : add ppl-run-all.sh
* perplexity : add BOS for each chunk
* readme : update perplexity values after BOS fix
* perplexity : add clarifying comments
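The fix described in the commit is that the model expects the first token of every evaluated sequence to be BOS, so the perplexity tool must prepend a BOS token to each chunk it scores, not just to the start of the text. A minimal sketch of that idea in Python (the `BOS_ID` value and the helper names here are assumptions for illustration, not llama.cpp's actual API):

```python
import math

BOS_ID = 1  # assumed BOS token id for the LLaMA tokenizer

def chunk_with_bos(tokens, chunk_size):
    """Split a token stream into fixed-size chunks, prepending BOS to each.

    Per the BOS fix: every chunk passed to the model starts with BOS,
    rather than only the very first one.
    """
    chunks = []
    for i in range(0, len(tokens), chunk_size):
        chunks.append([BOS_ID] + tokens[i:i + chunk_size])
    return chunks

def perplexity(token_logprobs):
    """PPL = exp(-1/N * sum_i log p(token_i | context))."""
    n = len(token_logprobs)
    return math.exp(-sum(token_logprobs) / n)
```

For example, `chunk_with_bos([10, 11, 12, 13, 14], 2)` yields `[[1, 10, 11], [1, 12, 13], [1, 14]]`, and a sequence whose tokens each have probability 0.5 gives a perplexity of exactly 2.0. Prepending BOS changes the conditioning of the first tokens in each chunk, which is why the README's perplexity values were updated after the fix.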
Files:

* CMakeLists.txt
* perplexity.cpp
* README.md
perplexity
TODO