ik_llama.cpp/common
Latest commit: e760b4dc41 by Kawrakow (2025-08-27 19:00:17 +03:00)
Check for NaNs while loading the model. (#727)

* Check for NaNs while loading the model.
* Also tell which experts have NaNs.
* Add a command-line option to validate quants.
* Add checks for more quantization types.

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
| File | Last commit | Date |
| --- | --- | --- |
| cmake | Merge mainline llama.cpp (#3) | 2024-07-27 07:55:01 +02:00 |
| base64.hpp | llava : expose as a shared library for downstream projects (#3613) | 2023-11-07 00:36:23 +03:00 |
| build-info.cpp.in | build : link against build info instead of compiling against it (#3879) | 2023-11-02 08:50:16 +02:00 |
| chat-parser.cpp | Fix for Deepseek r1 parsing (#676) | 2025-08-08 13:56:44 +03:00 |
| chat-parser.h | Enable CUDA graphs for MoE models + GPT-OSS support (#689) | 2025-08-15 09:18:07 +03:00 |
| chat-template.hpp | add jinja template support (#677) | 2025-08-09 12:50:30 +00:00 |
| chat.cpp | Enable CUDA graphs for MoE models + GPT-OSS support (#689) | 2025-08-15 09:18:07 +03:00 |
| chat.h | Enable CUDA graphs for MoE models + GPT-OSS support (#689) | 2025-08-15 09:18:07 +03:00 |
| CMakeLists.txt | Port speculative decoding from upstream to llama-server (#645) | 2025-08-16 07:26:44 +03:00 |
| common.cpp | Check for NaNs while loading the model. (#727) | 2025-08-27 19:00:17 +03:00 |
| common.h | Check for NaNs while loading the model. (#727) | 2025-08-27 19:00:17 +03:00 |
| console.cpp | check C++ code with -Wmissing-declarations (#3184) | 2023-09-15 15:38:27 -04:00 |
| console.h | gguf : new file format with flexible meta data (beta) (#2398) | 2023-08-21 23:07:43 +03:00 |
| grammar-parser.cpp | Added support for . (any character) token in grammar engine. (#6467) | 2024-06-06 06:08:52 -07:00 |
| grammar-parser.h | gguf : new file format with flexible meta data (beta) (#2398) | 2023-08-21 23:07:43 +03:00 |
| json-partial.cpp | Function calling support for Kimi-K2 (#628) | 2025-07-23 18:11:42 +02:00 |
| json-partial.h | Function calling support for Kimi-K2 (#628) | 2025-07-23 18:11:42 +02:00 |
| json-schema-to-grammar.cpp | Merge mainline llama.cpp (#3) | 2024-07-27 07:55:01 +02:00 |
| json-schema-to-grammar.h | JSON: [key] -> .at(key), assert() -> GGML_ASSERT (#7143) | 2024-05-08 21:53:08 +02:00 |
| json.hpp | json-schema-to-grammar improvements (+ added to server) (#5978) | 2024-03-21 11:50:43 +00:00 |
| log.h | Merge mainline llama.cpp (#3) | 2024-07-27 07:55:01 +02:00 |
| minja.hpp | add jinja template support (#677) | 2025-08-09 12:50:30 +00:00 |
| ngram-cache.cpp | Fixed lookup compilation issues on Windows (#6273) | 2024-03-24 14:21:17 +01:00 |
| ngram-cache.h | Merge mainline llama.cpp (#3) | 2024-07-27 07:55:01 +02:00 |
| regex-partial.cpp | Function calling support for Kimi-K2 (#628) | 2025-07-23 18:11:42 +02:00 |
| regex-partial.h | Function calling support for Kimi-K2 (#628) | 2025-07-23 18:11:42 +02:00 |
| sampling.cpp | Port speculative decoding from upstream to llama-server (#645) | 2025-08-16 07:26:44 +03:00 |
| sampling.h | Port speculative decoding from upstream to llama-server (#645) | 2025-08-16 07:26:44 +03:00 |
| speculative.cpp | Port universal assisted decoding to llama-server (#699) | 2025-08-18 09:22:23 +03:00 |
| speculative.h | Port universal assisted decoding to llama-server (#699) | 2025-08-18 09:22:23 +03:00 |
| stb_image.h | examples: support LLaVA v1.5 (multimodal model) (#3436) | 2023-10-12 18:23:18 +03:00 |
| train.cpp | train : change default FA argument (#7528) | 2024-05-25 15:22:35 +03:00 |
| train.h | sync : ggml (backend v2) (#3912) | 2023-11-13 14:16:23 +02:00 |