ik_llama.cpp/src (latest commit: 2025-07-13 20:15:19 +03:00)
File                 Latest commit                                  Date
CMakeLists.txt       Be able to repack tensors at run time (#147)   2024-12-17 14:16:34 +01:00
llama-grammar.cpp    Merge mainline - Aug 12 2024 (#17)             2024-08-12 15:14:32 +02:00
llama-grammar.h      Merge mainline llama.cpp (#3)                  2024-07-27 07:55:01 +02:00
llama-impl.h         add dry sampler (#513)                         2025-06-19 10:24:53 +03:00
llama-sampling.cpp   add dry sampler (#513)                         2025-06-19 10:24:53 +03:00
llama-sampling.h     add dry sampler (#513)                         2025-06-19 10:24:53 +03:00
llama-vocab.cpp      add hunyuan moe support for 561 (#565)         2025-07-09 10:29:40 +02:00
llama-vocab.h        add dry sampler (#513)                         2025-06-19 10:24:53 +03:00
llama.cpp            iq2_kl: CUDA dequantize                        2025-07-13 20:15:19 +03:00
unicode-data.cpp     Merge mainline llama.cpp (#3)                  2024-07-27 07:55:01 +02:00
unicode-data.h       Merge mainline llama.cpp (#3)                  2024-07-27 07:55:01 +02:00
unicode.cpp          Fix non rpc build error (#506)                 2025-06-08 17:27:00 +03:00
unicode.h            Merge mainline llama.cpp (#3)                  2024-07-27 07:55:01 +02:00