| Name | Last commit message | Last commit date |
| --- | --- | --- |
| baby-llama | | |
| batched | | |
| batched-bench | ggml : add ggml_soft_max_ext (#4256) | 2023-12-01 10:51:24 +02:00 |
| batched.swift | swift : fix prompt tokenization logic (#4321) | 2023-12-04 15:43:45 +02:00 |
| beam-search | | |
| benchmark | sync : ggml (backend v2) (#3912) | 2023-11-13 14:16:23 +02:00 |
| convert-llama2c-to-ggml | | |
| embedding | | |
| export-lora | sync : ggml (backend v2) (#3912) | 2023-11-13 14:16:23 +02:00 |
| finetune | finetune - update readme to mention llama support only (#4148) | 2023-11-20 19:30:00 +01:00 |
| gguf | | |
| infill | main : Add ChatML functionality to main example (#4046) | 2023-11-20 14:56:59 +01:00 |
| jeopardy | | |
| llama-bench | | |
| llama.swiftui | swift : fix concatenation method to avoid invalid UTF8 stringfication (#4325) | 2023-12-04 18:03:49 +02:00 |
| llava | llava : ShareGPT4V compatibility (vision encoder only loading) (#4172) | 2023-11-30 23:11:14 +01:00 |
| lookahead | examples : add readme files | 2023-11-29 11:00:17 +02:00 |
| main | sampling : custom samplers order (#4285) | 2023-12-05 12:05:51 +02:00 |
| main-cmake-pkg | | |
| metal | sync : ggml (backend v2) (#3912) | 2023-11-13 14:16:23 +02:00 |
| parallel | llama : KV cache view API + better KV cache management (#4170) | 2023-11-23 19:07:56 +02:00 |
| perplexity | Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040) | 2023-11-16 19:14:37 -07:00 |
| quantize | | |
| quantize-stats | | |
| save-load-state | | |
| server | server : fix OpenAI API stop field to be optional (#4299) | 2023-12-03 11:10:43 +02:00 |
| simple | simple : update error message for KV cache check (#4324) | 2023-12-04 18:04:21 +02:00 |
| speculative | speculative : support --color (#4343) | 2023-12-06 10:08:17 +02:00 |
| tokenize | tokenize example: Respect normal add BOS token behavior (#4126) | 2023-11-18 14:48:17 -07:00 |
| train-text-from-scratch | sync : ggml (backend v2) (#3912) | 2023-11-13 14:16:23 +02:00 |
| alpaca.sh | | |
| chat-13B.bat | | |
| chat-13B.sh | | |
| chat-persistent.sh | | |
| chat-vicuna.sh | | |
| chat.sh | | |
| CMakeLists.txt | lookahead : add example for lookahead decoding (#4207) | 2023-11-26 20:33:07 +02:00 |
| gpt4all.sh | | |
| json-schema-to-grammar.py | | |
| llama2-13b.sh | | |
| llama2.sh | | |
| llama.vim | | |
| llm.vim | | |
| make-ggml.py | | |
| Miku.sh | | |
| reason-act.sh | | |
| server-llama2-13B.sh | | |