mirror of
https://github.com/ggerganov/llama.cpp
synced 2026-04-19 13:45:53 +02:00
* CI: fix ARM64 image build error & enable compilation
* Update .github/workflows/docker.yml (Co-authored-by: Aaron Teo <taronaeo@gmail.com>)
* CI: revert ggml/src/ggml-cpu/CMakeLists.txt
* Update .github/workflows/docker.yml (Co-authored-by: Aaron Teo <taronaeo@gmail.com>)
* CI: update runs-on to ubuntu24.04, and update ARM64 build image (ubuntu_version: "24.04")
* CI: change cpu.Dockerfile gcc to 14
* CI: cpu.Dockerfile, update pip install
* Update .github/workflows/docker.yml (Co-authored-by: Aaron Teo <taronaeo@gmail.com>)

Co-authored-by: Aaron Teo <taronaeo@gmail.com>
| File |
|---|
| nix |
| cann.Dockerfile |
| cpu.Dockerfile |
| cuda-new.Dockerfile |
| cuda.Dockerfile |
| intel.Dockerfile |
| llama-cli-cann.Dockerfile |
| llama-cpp-cuda.srpm.spec |
| llama-cpp.srpm.spec |
| musa.Dockerfile |
| openvino.Dockerfile |
| rocm.Dockerfile |
| s390x.Dockerfile |
| tools.sh |
| vulkan.Dockerfile |