mirror of https://github.com/ggerganov/llama.cpp
synced 2026-03-22 23:20:48 +01:00
* readme: `cmake . -B build && cmake --build build`
* build: fix typo
* build: drop implicit `.` from cmake config command
* build: remove another superfluous `.`
* build: update MinGW cmake commands
* Update README-sycl.md
* build: reinstate `--config Release` as not the default w/ some generators + document how to build Debug
* build: revert more `--config Release`
* build: nit / remove `-H` from cmake example
* build: reword debug instructions around single/multi config split

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
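The commit message above documents the updated CMake invocations and the single- vs multi-config generator split for Debug builds. A minimal sketch of the workflow it describes (the `build-debug` directory name is an illustrative choice, not from the source; the flags follow standard CMake conventions):

```shell
# Configure and build (the commands documented in the readme)
cmake -B build
cmake --build build

# Debug with a single-config generator (e.g. Makefiles, Ninja):
# the build type is fixed at configure time
cmake -B build-debug -DCMAKE_BUILD_TYPE=Debug
cmake --build build-debug

# Debug with a multi-config generator (e.g. Visual Studio, Xcode):
# the configuration is selected at build time instead
cmake -B build
cmake --build build --config Debug
```

With multi-config generators `--config Release` must be passed explicitly, which is why the commit reinstates it "as not the default w/ some generators".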
| File |
|---|
| nix |
| cloud-v-pipeline |
| full-cuda.Dockerfile |
| full-rocm.Dockerfile |
| full.Dockerfile |
| llama-cpp-clblast.srpm.spec |
| llama-cpp-cuda.srpm.spec |
| llama-cpp.srpm.spec |
| main-cuda.Dockerfile |
| main-intel.Dockerfile |
| main-rocm.Dockerfile |
| main-vulkan.Dockerfile |
| main.Dockerfile |
| server-cuda.Dockerfile |
| server-intel.Dockerfile |
| server-rocm.Dockerfile |
| server-vulkan.Dockerfile |
| server.Dockerfile |
| tools.sh |