llama.cpp/scripts

Latest commit e81b8e4b7f by Johannes Gäßler (2025-08-30 16:32:10 +02:00):
llama: use FA + max. GPU layers by default (#15434)
* llama: use max. GPU layers by default, auto -fa
* ggml-backend: abort instead of segfault
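As context for the commit above, a minimal sketch of what the new defaults mean for a typical llama-cli invocation. The -ngl (GPU layers) and -fa (FlashAttention) flags are the standard llama.cpp CLI options; the exact opt-out values shown are an assumption, not confirmed by this listing.

    # Previously, full GPU offload and FlashAttention had to be requested
    # explicitly (model path is a placeholder):
    llama-cli -m model.gguf -ngl 99 -fa

    # After #15434 the defaults are max. GPU layers and auto FlashAttention,
    # so a plain invocation behaves like the line above:
    llama-cli -m model.gguf

    # Assumed opt-out form (accepted -fa values are an assumption):
    llama-cli -m model.gguf -ngl 0 -fa off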
apple
build-info.sh
check-requirements.sh
ci-run.sh
compare-commits.sh
compare-llama-bench.py
create_ops_docs.py
debug-test.sh
fetch_server_test_models.py
gen-authors.sh
gen-unicode-data.py
get_chat_template.py
get-flags.mk
get-hellaswag.sh
get-pg.sh
get-wikitext-2.sh
get-wikitext-103.sh
get-winogrande.sh
hf.sh
install-oneapi.bat
qnt-all.sh
run-all-perf.sh
run-all-ppl.sh
server-bench.py
sync_vendor.py
sync-ggml-am.sh
sync-ggml.last
sync-ggml.sh
tool_bench.py
tool_bench.sh
verify-checksum-models.py
xxd.cmake