llama.cpp/.github
Latest commit: 52f1096f21 by Zijun Yu
openvino: driver setup, CI split, thread safety, and NPU optimizations (#21944)
* Thread safety per request only

* Fix RoPE YaRN case

* Fix sticky stateful config

* Use i4/i8 directly for symmetric quant

* Use weightless caching

* Add WeightlessCacheAttribute to reduce NPU memory usage

* GELU tanh support (#125)

* Imrope support (#126)

* fix(openvino): explicit ov::Tensor frees in ggml_backend_openvino_free

* Add GPU, NPU support in OV Dockerfile

* add build-openvino.yml ci

* Add concurrency to ov-gpu CI runs; move OV CI to build-openvino.yml

* fix thread-safety of shared runtime context

* rope type abstraction for frontend translations

* fix editorconfig

---------

Co-authored-by: Mustafa Cavus <mustafa.cavus@intel.com>
Co-authored-by: Dan Hoffman <dhoff749@gmail.com>
Co-authored-by: Ravi Panchumarthy <ravi.panchumarthy@intel.com>
2026-04-21 18:58:34 +03:00
Name                     | Last commit message                                                             | Last commit date
actions                  | ggml : add OpenVINO backend (#15307)                                            | 2026-03-14 07:56:55 +02:00
ISSUE_TEMPLATE           | issues: add openvino backends (#20932)                                          | 2026-03-24 14:41:10 +08:00
workflows                | openvino: driver setup, CI split, thread safety, and NPU optimizations (#21944) | 2026-04-21 18:58:34 +03:00
labeler.yml              | ci: drop v5 all: composition from labeler.yml (#21627)                          | 2026-04-09 08:20:19 +02:00
pull_request_template.md | contrib: add "Requirements" section to PR template (#20841)                     | 2026-03-23 16:59:02 +01:00