ik_llama.cpp/ggml/include

Latest commit: db3ba4999f by Kawrakow (2025-10-24 07:40:35 +03:00)
Fused mul + multi_add op (#858)

* Adding fused mul+multi_add + CPU implementation
* fused mul+multi_add: CUDA
* fused mul+multi_add: command line argument to disable it

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
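The commit above fuses an elementwise multiply into a multi-input add, avoiding a separate pass (and intermediate tensor) for the product. A minimal sketch of that idea in plain C, assuming `multi_add` sums several equally-shaped inputs and the fusion folds the multiply into the same loop; the function name and layout here are illustrative only, not the ggml API:

```c
#include <stddef.h>

/* Hypothetical fused mul + multi_add: dst[i] = (sum_k srcs[k][i]) * b[i].
 * Unfused, this would first materialize the multi_add result and then
 * run a second elementwise-mul kernel over it; fused, it is one pass. */
static void mul_multi_add(float *dst, const float *const *srcs, int n_src,
                          const float *b, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        float acc = 0.0f;
        for (int k = 0; k < n_src; ++k) {
            acc += srcs[k][i];          /* the multi_add reduction */
        }
        dst[i] = acc * b[i];            /* multiply fused into the same pass */
    }
}
```

The payoff of the fusion is one traversal of memory instead of two and no temporary buffer, which is why such ops are often gated behind a command-line switch for debugging, as the commit message notes.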
File             Last commit                                             Date
ggml-alloc.h     Merge mainline llama.cpp (#3)                           2024-07-27 07:55:01 +02:00
ggml-backend.h   Offload only activated experts to the GPU (#698)        2025-09-04 12:22:30 +02:00
ggml-blas.h      Merge mainline llama.cpp (#3)                           2024-07-27 07:55:01 +02:00
ggml-cann.h      Merge mainline llama.cpp (#3)                           2024-07-27 07:55:01 +02:00
ggml-cpp.h       Port mdmd from mainline + Qwen2/2.5-VL support (#798)   2025-09-27 08:45:29 +02:00
ggml-cuda.h      Merge mainline - Aug 12 2024 (#17)                      2024-08-12 15:14:32 +02:00
ggml-kompute.h   Merge mainline llama.cpp (#3)                           2024-07-27 07:55:01 +02:00
ggml-metal.h     Merge mainline - Aug 12 2024 (#17)                      2024-08-12 15:14:32 +02:00
ggml-rpc.h       Fix non rpc build error (#506)                          2025-06-08 17:27:00 +03:00
ggml-sycl.h      Merge mainline llama.cpp (#3)                           2024-07-27 07:55:01 +02:00
ggml-vulkan.h    Vulkan: a fresh start (#608)                            2025-07-15 08:03:13 +02:00
ggml.h           Fused mul + multi_add op (#858)                         2025-10-24 07:40:35 +03:00