Mirror of https://github.com/ggerganov/llama.cpp (synced 2026-03-31 12:25:07 +02:00)
cmake : enable curl by default

* no curl if no examples
* fix build
* fix build-linux-cross
* add windows-setup-curl
* fix
* shell
* fix path
* fix windows-latest-cmake*
* run: include_directories
* LLAMA_RUN_EXTRA_LIBS
* sycl: no llama_curl
* no test-arg-parser on windows
* clarification
* try riscv64 / arm64
* windows: include libcurl inside release binary
* add msg
* fix mac / ios / android build
* will this fix xcode?
* try clearing the cache
* add bunch of licenses
* revert clear cache
* fix xcode
* fix xcode (2)
* fix typo
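The headline change above makes the curl-backed features (e.g. model downloading) build by default. A minimal sketch of opting back out for an offline build, assuming the `LLAMA_CURL` CMake option controls this (check the repository's `CMakeLists.txt` for the current option name):

```shell
# Configure without libcurl support; with this PR curl is ON by default,
# so an explicit -DLLAMA_CURL=OFF is needed to drop the dependency.
cmake -B build -DLLAMA_CURL=OFF

# Build as usual.
cmake --build build --config Release
```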
* nix
* cloud-v-pipeline
* cpu.Dockerfile
* cuda.Dockerfile
* intel.Dockerfile
* llama-cli-cann.Dockerfile
* llama-cpp-cuda.srpm.spec
* llama-cpp.srpm.spec
* musa.Dockerfile
* rocm.Dockerfile
* tools.sh
* vulkan.Dockerfile