llama.cpp/.github/workflows

Latest commit: ec450d3bbf by Georgi Gerganov
metal : opt-in compile flag for BF16 (#10218)

Commit message:
* metal : opt-in compile flag for BF16
* ci : use BF16
* swift : switch back to v12
* metal : has_float -> use_float
* metal : fix BF16 check in MSL
(each step carries a "ggml-ci" trailer to trigger the ggml CI)

2024-11-08 21:59:46 +02:00
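The commit above adds an opt-in compile flag for BF16 in the Metal backend. As a sketch of how such an opt-in is typically enabled at configure time, assuming a CMake option named GGML_METAL_USE_BF16 (the option name is an assumption here, not confirmed by this listing; check PR #10218 for the actual flag):

```shell
# Hypothetical configure step: build llama.cpp with the Metal backend
# and the opt-in BF16 kernels enabled (option name assumed, not verified)
cmake -B build -DGGML_METAL=ON -DGGML_METAL_USE_BF16=ON
cmake --build build --config Release
```

Being opt-in means the BF16 code path is compiled only when the flag is set, so default builds are unaffected.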
File                           Last commit message                                            Date
bench.yml.disabled             ggml-backend : add device and backend reg interfaces (#9707)   2024-10-03 01:49:47 +02:00
build.yml                      metal : opt-in compile flag for BF16 (#10218)                  2024-11-08 21:59:46 +02:00
close-issue.yml                ci : fine-grant permission (#9710)                             2024-10-04 11:47:19 +02:00
docker.yml                     musa: add docker image support (#9685)                         2024-10-10 20:10:37 +02:00
editorconfig.yml
gguf-publish.yml
labeler.yml
nix-ci-aarch64.yml             ci : fine-grant permission (#9710)                             2024-10-04 11:47:19 +02:00
nix-ci.yml                     ci : fine-grant permission (#9710)                             2024-10-04 11:47:19 +02:00
nix-flake-update.yml
nix-publish-flake.yml
python-check-requirements.yml
python-lint.yml
python-type-check.yml
server.yml