Mirror of https://github.com/ggerganov/llama.cpp (synced 2026-03-26 09:00:59 +01:00)
llama.cpp / docs / backend (at commit fd3abe849e)

Latest commit: 7d2add51d8 by Neo Zhang: sycl : support to malloc memory on device more than 4GB, update the doc and script (#17566)
Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
2025-11-29 14:59:44 +02:00
Name             Last commit message                                                                           Date
hexagon/         Add experimental ggml-hexagon backend for the Hexagon NPU (#16547)                            2025-10-22 13:47:09 -07:00
BLIS.md          make : deprecate (#10514)                                                                     2024-12-02 21:22:53 +02:00
CANN.md          CANN: GGML_CANN_ACL_GRAPH works only USE_ACL_GRAPH enabled (#16861)                           2025-11-12 14:37:52 +08:00
CUDA-FEDORA.md   docs: update: improve the Fedoa CUDA guide (#12536)                                           2025-03-24 11:02:26 +00:00
OPENCL.md        opencl: update doc (#17011)                                                                   2025-11-04 16:02:36 -08:00
SYCL.md          sycl : support to malloc memory on device more than 4GB, update the doc and script (#17566)   2025-11-29 14:59:44 +02:00
zDNN.md          zdnn: refactor codebase + add docs (#16178)                                                   2025-09-23 14:53:05 +08:00