ik_llama.cpp/common
saood06 7a68553487 Add mikupad to ik_llama as an alternative WebUI (#558)
* mikupad.html in ik_llama.cpp (functional but WIP)

* Remove hardcoded extension and add error handling to extension loading

* Update version number and add features array to version

* Make version endpoint always accessible

* Fix case with empty SQL

* Add a useful error message when launched without an SQL file

* Add sigma sampler

* Update sigma step and max based on docs

* Remove selectedSessionId and handle it with URL fragment

* Export All (code only, no UI)

* Add compression to server.cpp

* Major UI work (and update backend endpoints to accommodate it)

* Finalize UI

* Fix visual bug

* Fix merge conflict issue

* Pull in the full sqlite_modern_cpp repo for its license, as the license is not attached to the source files

* Make compression not show in sidebar if extension is not loaded

* Finalize build, put support behind the LLAMA_SERVER_SQLITE3 build option, and update the error message to cover the case where the build option is not passed

* Fix compilation without the flag on systems without SQLite installed
2025-08-24 08:27:29 -05:00
cmake Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
base64.hpp llava : expose as a shared library for downstream projects (#3613) 2023-11-07 00:36:23 +03:00
build-info.cpp.in build : link against build info instead of compiling against it (#3879) 2023-11-02 08:50:16 +02:00
chat-parser.cpp Fix for Deepseek r1 parsing (#676) 2025-08-08 13:56:44 +03:00
chat-parser.h Enable CUDA graphs for MoE models + GPT-OSS support (#689) 2025-08-15 09:18:07 +03:00
chat-template.hpp add jinja template support (#677) 2025-08-09 12:50:30 +00:00
chat.cpp Enable CUDA graphs for MoE models + GPT-OSS support (#689) 2025-08-15 09:18:07 +03:00
chat.h Enable CUDA graphs for MoE models + GPT-OSS support (#689) 2025-08-15 09:18:07 +03:00
CMakeLists.txt Port speculative decoding from upstream to llama-server (#645) 2025-08-16 07:26:44 +03:00
common.cpp Add mikupad to ik_llama as an alternative WebUI (#558) 2025-08-24 08:27:29 -05:00
common.h Add mikupad to ik_llama as an alternative WebUI (#558) 2025-08-24 08:27:29 -05:00
console.cpp check C++ code with -Wmissing-declarations (#3184) 2023-09-15 15:38:27 -04:00
console.h gguf : new file format with flexible meta data (beta) (#2398) 2023-08-21 23:07:43 +03:00
grammar-parser.cpp Added support for . (any character) token in grammar engine. (#6467) 2024-06-06 06:08:52 -07:00
grammar-parser.h gguf : new file format with flexible meta data (beta) (#2398) 2023-08-21 23:07:43 +03:00
json-partial.cpp Function calling support for Kimi-K2 (#628) 2025-07-23 18:11:42 +02:00
json-partial.h Function calling support for Kimi-K2 (#628) 2025-07-23 18:11:42 +02:00
json-schema-to-grammar.cpp Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
json-schema-to-grammar.h JSON: [key] -> .at(key), assert() -> GGML_ASSERT (#7143) 2024-05-08 21:53:08 +02:00
json.hpp json-schema-to-grammar improvements (+ added to server) (#5978) 2024-03-21 11:50:43 +00:00
log.h Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
minja.hpp add jinja template support (#677) 2025-08-09 12:50:30 +00:00
ngram-cache.cpp Fixed lookup compilation issues on Windows (#6273) 2024-03-24 14:21:17 +01:00
ngram-cache.h Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00
regex-partial.cpp Function calling support for Kimi-K2 (#628) 2025-07-23 18:11:42 +02:00
regex-partial.h Function calling support for Kimi-K2 (#628) 2025-07-23 18:11:42 +02:00
sampling.cpp Port speculative decoding from upstream to llama-server (#645) 2025-08-16 07:26:44 +03:00
sampling.h Port speculative decoding from upstream to llama-server (#645) 2025-08-16 07:26:44 +03:00
speculative.cpp Port universal assisted decoding to llama-server (#699) 2025-08-18 09:22:23 +03:00
speculative.h Port universal assisted decoding to llama-server (#699) 2025-08-18 09:22:23 +03:00
stb_image.h examples: support LLaVA v1.5 (multimodal model) (#3436) 2023-10-12 18:23:18 +03:00
train.cpp train : change default FA argument (#7528) 2024-05-25 15:22:35 +03:00
train.h sync : ggml (backend v2) (#3912) 2023-11-13 14:16:23 +02:00