Mirror of https://github.com/ggerganov/llama.cpp (synced 2026-04-03 13:56:07 +02:00)
Sampler interface and new samplers:

* New samplers: locally typical sampling, tail free sampling, frequency and presence penalty, mirostat
* Ignore-EOS fix: `-inf` should be used
* Added `--logit-bias` and `--no-penalize-nl`; removed `std::span`
* Use C++11; clarify the llama API documentation; rename the Mirostat parameters to `--mirostat_lr` and `--mirostat_ent`; add temperature sampling for Mirostat; simplify the Mirostat sampling API parameters (removed `N` and `*k`)
* Adjust the save-and-load example
* Tests
* Windows build fix
* Windows test fix
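The frequency/presence penalties and the `-inf` ignore-EOS fix mentioned above can be sketched as follows. This is an illustrative sketch only, not the actual llama.cpp implementation; the function names `apply_penalties` and `apply_logit_bias` are hypothetical.

```cpp
#include <cassert>
#include <cmath>
#include <unordered_map>
#include <vector>

// Hypothetical sketch of frequency/presence penalties: tokens that already
// appeared in the recent context have their logits reduced, scaled by how
// often they occurred (frequency) plus a flat amount (presence).
void apply_penalties(std::vector<float> &logits,
                     const std::vector<int> &last_tokens,
                     float alpha_frequency, float alpha_presence) {
    // Count occurrences of each token in the recent context window.
    std::unordered_map<int, int> counts;
    for (int tok : last_tokens) {
        counts[tok]++;
    }
    for (const auto &kv : counts) {
        // Frequency penalty scales with the count; presence penalty is flat.
        logits[kv.first] -= kv.second * alpha_frequency + alpha_presence;
    }
}

// Hypothetical sketch of a --logit-bias style adjustment: a bias of -INFINITY
// effectively bans a token (e.g. EOS), which is why the commit notes that
// "-inf should be used" for ignoring EOS.
void apply_logit_bias(std::vector<float> &logits,
                      const std::unordered_map<int, float> &bias) {
    for (const auto &kv : bias) {
        logits[kv.first] += kv.second;
    }
}
```

With penalties of 0.5 (frequency) and 0.1 (presence), a token seen twice would lose 1.1 from its logit, while a banned token's logit becomes `-inf` and can never be sampled.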
| Name |
|---|
| benchmark |
| embedding |
| jeopardy |
| main |
| perplexity |
| quantize |
| quantize-stats |
| save-load-state |
| alpaca.sh |
| chat-13B.bat |
| chat-13B.sh |
| chat.sh |
| CMakeLists.txt |
| common.cpp |
| common.h |
| gpt4all.sh |
| Miku.sh |
| reason-act.sh |