ik_llama.cpp/ggml
commit f1191036b2
Author: Kawrakow
Date:   2025-11-24 06:55:14 +01:00

Support GigaChat3 (#995)

* Fixing GigaChat support
* GigaChat: CUDA FA (needs 192 x 192 for MLA = 3)
* GigaChat: CPU FA (needs 192 x 192 for MLA = 3)

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
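The commit body says the flash-attention (FA) kernels, both CUDA and CPU, need a 192 x 192 K/V head-size combination when MLA mode is 3 so that GigaChat3 can take the FA path. As a rough standalone illustration only (the function name, signature, and the list of other head sizes are assumptions for this sketch, not ik_llama.cpp's actual dispatch code), such a support check might look like this:

```cpp
// Hypothetical sketch: decide whether a flash-attention kernel exists for the
// given K/V head sizes. The commit adds the 192 x 192 case used with MLA = 3.
#include <cstdio>

static bool fa_head_sizes_supported(int head_size_k, int head_size_v, int mla_mode) {
    if (mla_mode == 3) {
        // the case added for GigaChat3: K and V heads are both 192 wide
        return head_size_k == 192 && head_size_v == 192;
    }
    // other square head sizes assumed (for this sketch) to already have FA kernels
    switch (head_size_k) {
        case 64: case 96: case 128: case 256:
            return head_size_v == head_size_k;
        default:
            return false;
    }
}

int main() {
    std::printf("192x192, MLA=3 -> %s\n", fa_head_sizes_supported(192, 192, 3) ? "FA" : "fallback");
    std::printf("192x128, MLA=3 -> %s\n", fa_head_sizes_supported(192, 128, 3) ? "FA" : "fallback");
    return 0;
}
```

Without such a case, models whose head sizes are not covered would fall back to the non-FA attention path.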
Name            Last commit                                                      Last updated
cmake           Merge mainline llama.cpp (#3)                                    2024-07-27 07:55:01 +02:00
include         CUDA: set compute parameters via command line arguments (#910)   2025-11-07 07:11:23 +02:00
src             Support GigaChat3 (#995)                                         2025-11-24 06:55:14 +01:00
.gitignore      Merge mainline llama.cpp (#3)                                    2024-07-27 07:55:01 +02:00
CMakeLists.txt  Enable fusion by default (#939)                                  2025-11-11 10:35:48 +02:00