Mirror of https://github.com/ggerganov/llama.cpp
Latest commit:

* ggml : add ggml_build_forward_select
* cuda : adapt CUDA graph compat to new feature
* vulkan : update logic to handle command buffer closing
* ggml : check compute for fusion
* ggml : add comment
| File |
|---|
| ggml-alloc.h |
| ggml-backend.h |
| ggml-blas.h |
| ggml-cann.h |
| ggml-cpp.h |
| ggml-cpu.h |
| ggml-cuda.h |
| ggml-hexagon.h |
| ggml-metal.h |
| ggml-opencl.h |
| ggml-opt.h |
| ggml-rpc.h |
| ggml-sycl.h |
| ggml-vulkan.h |
| ggml-webgpu.h |
| ggml-zdnn.h |
| ggml-zendnn.h |
| ggml.h |
| gguf.h |
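
These are the public headers of the ggml library bundled with llama.cpp. As a rough illustration of how the core API in ggml.h is used, here is a minimal sketch of building and evaluating a small compute graph on the CPU. It assumes a ggml version where `ggml_graph_compute_with_ctx` and the `ggml_set_f32`/`ggml_get_f32_1d` helpers are available; the exact split of declarations between ggml.h and ggml-cpu.h varies between versions, so treat this as an assumption rather than a reference.

```c
// Minimal sketch: element-wise add of two tensors with the ggml CPU API.
// Assumes these helpers are exposed via ggml.h / ggml-cpu.h in your version.
#include <stdio.h>

#include "ggml.h"
#include "ggml-cpu.h"

int main(void) {
    // context backed by a small fixed-size memory pool
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16 * 1024 * 1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // two F32 vectors and their element-wise sum
    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * c = ggml_add(ctx, a, b);

    // record the forward graph that produces c
    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, c);

    // fill the inputs, then evaluate the graph on the CPU
    ggml_set_f32(a, 1.0f);
    ggml_set_f32(b, 2.0f);
    ggml_graph_compute_with_ctx(ctx, gf, /*n_threads =*/ 1);

    printf("c[0] = %f\n", ggml_get_f32_1d(c, 0)); // expected: 3.000000

    ggml_free(ctx);
    return 0;
}
```

The backend-specific headers in the listing (ggml-cuda.h, ggml-metal.h, ggml-vulkan.h, and so on) expose the corresponding accelerator backends through the common interface declared in ggml-backend.h.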