llama.cpp/ggml

Latest commit: 4eac5b4509 by Johannes Gäßler — CUDA: refactor mma data loading for AMD (#22051), 2026-04-19 18:26:59 +02:00

* CUDA: refactor mma data loading for AMD
* fix CDNA MMQ occupancy
* fix CDNA3 mma
* fix RDNA3 compile
cmake           ggml: backend-agnostic tensor parallelism (experimental) (#19378)   2026-04-09 16:42:19 +02:00
include         CUDA: manage NCCL communicators in context (#21891)                 2026-04-15 15:58:40 +02:00
src             CUDA: refactor mma data loading for AMD (#22051)                    2026-04-19 18:26:59 +02:00
.gitignore
CMakeLists.txt  cmake: remove CMP0194 policy to restore MSVC builds (#21934)        2026-04-19 10:25:05 +03:00