ik_llama.cpp/include
Kawrakow abb966eba1
Allow quantization of ffn_gate_inp (#896)
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2025-11-05 10:44:32 +02:00
llama.h