| File | Last commit | Date |
| --- | --- | --- |
| models/ | refactor : llama-model.cpp (#16252) | 2025-10-31 23:40:23 +01:00 |
| CMakeLists.txt | refactor : llama-model.cpp (#16252) | 2025-10-31 23:40:23 +01:00 |
| llama-adapter.cpp | aLoRA Support (#15327) | 2025-09-05 17:32:39 -06:00 |
| llama-adapter.h | aLoRA Support (#15327) | 2025-09-05 17:32:39 -06:00 |
| llama-arch.cpp | model : Minimax M2 (#16831) | 2025-10-31 21:20:47 +01:00 |
| llama-arch.h | model : Minimax M2 (#16831) | 2025-10-31 21:20:47 +01:00 |
| llama-batch.cpp | batch : fix consistency checks for the input positions (#16890) | 2025-10-31 13:50:33 +02:00 |
| llama-batch.h | llama: store mrope data in KV cell (#16825) | 2025-10-29 18:09:18 +01:00 |
| llama-chat.cpp | model : add BailingMoeV2 support (#16063) | 2025-10-20 21:38:20 +02:00 |
| llama-chat.h | model : add BailingMoeV2 support (#16063) | 2025-10-20 21:38:20 +02:00 |
| llama-context.cpp | server : support unified cache across slots (#16736) | 2025-11-02 18:14:04 +02:00 |
| llama-context.h | server : support unified cache across slots (#16736) | 2025-11-02 18:14:04 +02:00 |
| llama-cparams.cpp | | |
| llama-cparams.h | server : support unified cache across slots (#16736) | 2025-11-02 18:14:04 +02:00 |
| llama-grammar.cpp | | |
| llama-grammar.h | | |
| llama-graph.cpp | llama : use std::abs instead of abs (#16853) | 2025-10-30 08:30:58 +02:00 |
| llama-graph.h | graph : support cacheless embeddings with FA and iSWA (#16528) | 2025-10-13 22:42:37 +03:00 |
| llama-hparams.cpp | model: add support for qwen3vl series (#16780) | 2025-10-30 16:19:14 +01:00 |
| llama-hparams.h | model: add support for qwen3vl series (#16780) | 2025-10-30 16:19:14 +01:00 |
| llama-impl.cpp | | |
| llama-impl.h | llama: use FA + max. GPU layers by default (#15434) | 2025-08-30 16:32:10 +02:00 |
| llama-io.cpp | | |
| llama-io.h | | |
| llama-kv-cache-iswa.cpp | server : context checkpointing for hybrid and recurrent models (#16382) | 2025-10-03 21:34:51 +03:00 |
| llama-kv-cache-iswa.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-kv-cache.cpp | model: add support for qwen3vl series (#16780) | 2025-10-30 16:19:14 +01:00 |
| llama-kv-cache.h | memory : remove KV cache size padding (#16812) | 2025-10-28 20:19:44 +02:00 |
| llama-kv-cells.h | llama: store mrope data in KV cell (#16825) | 2025-10-29 18:09:18 +01:00 |
| llama-memory-hybrid.cpp | memory : use sequential equal splits for recurrent modules (#16442) | 2025-10-07 08:24:17 +03:00 |
| llama-memory-hybrid.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-memory-recurrent.cpp | llama: consistent ctx <-> buf order for KV cache (#16746) | 2025-10-28 11:23:54 +01:00 |
| llama-memory-recurrent.h | llama: consistent ctx <-> buf order for KV cache (#16746) | 2025-10-28 11:23:54 +01:00 |
| llama-memory.cpp | memory : correctly handle failure in apply() (#14438) | 2025-06-30 18:03:03 +03:00 |
| llama-memory.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-mmap.cpp | | |
| llama-mmap.h | | |
| llama-model-loader.cpp | model : Apertus model implementation (#15852) | 2025-10-02 20:43:22 +03:00 |
| llama-model-loader.h | model: support GLM 4.5 family of models (#14939) | 2025-08-04 20:29:25 +02:00 |
| llama-model-saver.cpp | | |
| llama-model-saver.h | | |
| llama-model.cpp | server : support unified cache across slots (#16736) | 2025-11-02 18:14:04 +02:00 |
| llama-model.h | model : Minimax M2 (#16831) | 2025-10-31 21:20:47 +01:00 |
| llama-quant.cpp | llama : use std::abs instead of abs (#16853) | 2025-10-30 08:30:58 +02:00 |
| llama-quant.h | | |
| llama-sampling.cpp | vocab : mark EOT token for Granite models (#16499) | 2025-10-10 17:17:31 +03:00 |
| llama-sampling.h | | |
| llama-vocab.cpp | model : Minimax M2 (#16831) | 2025-10-31 21:20:47 +01:00 |
| llama-vocab.h | model : Minimax M2 (#16831) | 2025-10-31 21:20:47 +01:00 |
| llama.cpp | llama-quant: add support for mmproj (#16592) | 2025-10-15 14:48:08 +02:00 |
| unicode-data.cpp | | |
| unicode-data.h | | |
| unicode.cpp | model : add Kimi-K2 support (#14654) | 2025-07-15 21:54:22 +02:00 |
| unicode.h | devops: add s390x & ppc64le CI (#15925) | 2025-09-27 02:03:33 +08:00 |