Mirror of https://github.com/ggerganov/llama.cpp (synced 2026-04-09 00:35:42 +02:00)
**support SYCL backend windows build**

Squashed commit history:

* support SYCL backend Windows build
* add Windows build to CI
* correct oneMKL installation
* fix install command and install issues (several follow-up commits)
* fix Windows build (several follow-up commits)
* restore the other CI parts to the base version
* fix missing trailing newline; add -j
* fix grammar issue
* allow the workflow to be triggered manually
* fix formatting issues (several follow-up commits)

Co-authored-by: Abhilash Majumder <30946547+abhilash1910@users.noreply.github.com>
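The commits above add a Windows build of the SYCL backend to CI, installing the oneAPI toolchain (see `install-oneapi.bat` in the listing below). As a rough illustration only, a local Windows SYCL build along these lines might look as follows; this is a hedged sketch, not the project's CI script, and the exact paths, CMake option names, and flags are assumptions that may differ between llama.cpp versions:

```bat
:: Illustrative sketch of a Windows SYCL build of llama.cpp.
:: Assumes the Intel oneAPI Base Toolkit (compiler + oneMKL) is installed;
:: the install path and the LLAMA_SYCL option name are assumptions.

:: Load the oneAPI environment (icx compiler, MKL) into the current shell.
call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"

:: Configure with the SYCL backend enabled, using the oneAPI icx compiler.
cmake -B build -G "Ninja" -DLLAMA_SYCL=ON ^
      -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icx

:: Build in parallel (the "-j" flag mentioned in the commit list).
cmake --build build -j
```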
| Name |
|---|
| build-info.cmake |
| build-info.sh |
| check-requirements.sh |
| ci-run.sh |
| compare-llama-bench.py |
| convert-gg.sh |
| gen-build-info-cpp.cmake |
| get-flags.mk |
| get-hellaswag.sh |
| get-pg.sh |
| get-wikitext-2.sh |
| get-winogrande.sh |
| install-oneapi.bat |
| LlamaConfig.cmake.in |
| qnt-all.sh |
| run-all-perf.sh |
| run-all-ppl.sh |
| run-with-preset.py |
| server-llm.sh |
| sync-ggml-am.sh |
| sync-ggml.last |
| sync-ggml.sh |
| verify-checksum-models.py |