Mirror of https://github.com/ggerganov/llama.cpp (synced 2026-04-22 15:13:24 +02:00)
ggml : add ggml_scale_bias

Squashed commits:

* ggml : add ggml_scale_bias
* ggml_vec_mad1_f32
* add more simd
* add CUDA
* sycl
* vulkan
* cann (placeholder)
* opencl
* will this fix cpu?
* fix cuda
* suggestions from coderabbit
* fix cann compile error
* vDSP_vsmsa
* rm __ARM_FEATURE_SVE
* use memcpy for op params
* make code looks more consistent
* use scalar for __ARM_FEATURE_SVE
* add x param to ggml_vec_mad1_f32
Headers in this directory:

* ggml-alloc.h
* ggml-backend.h
* ggml-blas.h
* ggml-cann.h
* ggml-cpp.h
* ggml-cpu.h
* ggml-cuda.h
* ggml-metal.h
* ggml-opencl.h
* ggml-opt.h
* ggml-rpc.h
* ggml-sycl.h
* ggml-vulkan.h
* ggml.h
* gguf.h