| File | Last commit | Date |
| --- | --- | --- |
| CMakeLists.txt | Enable and clean up compiler warnings in src (#824) | 2025-10-11 16:01:13 +03:00 |
| llama-arch.cpp | Mimo-V2-Flash support (#1096) | 2026-01-05 08:00:01 +02:00 |
| llama-arch.h | Mimo-V2-Flash support (#1096) | 2026-01-05 08:00:01 +02:00 |
| llama-build-context.cpp | Copy reduce result to other GPUs if necessary (#1156) | 2026-01-19 08:40:26 +02:00 |
| llama-build-context.h | Merge ffn_up and ffn_gate experts tensors (#1137) | 2026-01-12 18:30:53 +02:00 |
| llama-context.h | POC: CUDA tensor parallel (MoE models) (#1022) | 2025-12-01 19:25:40 +01:00 |
| llama-cparams.h | Additional graph reduce types for split mode graph (#1154) | 2026-01-18 08:02:49 +02:00 |
| llama-grammar.cpp | Update grammar (#1023) | 2025-11-30 18:45:38 +01:00 |
| llama-grammar.h | Update grammar (#1023) | 2025-11-30 18:45:38 +01:00 |
| llama-hparams.cpp | Mimo-V2-Flash support (#1096) | 2026-01-05 08:00:01 +02:00 |
| llama-hparams.h | Mimo-V2-Flash support (#1096) | 2026-01-05 08:00:01 +02:00 |
| llama-impl.h | server: stop processing the prompt when client disconnects (#1134) | 2026-01-13 07:56:59 +02:00 |
| llama-load-tensors.cpp | Fixing split mode graph with many GPUs (#1152) | 2026-01-17 08:05:24 +02:00 |
| llama-mmap.cpp | Enable CUDA graphs for MoE models + GPT-OSS support (#689) | 2025-08-15 09:18:07 +03:00 |
| llama-mmap.h | Enable CUDA graphs for MoE models + GPT-OSS support (#689) | 2025-08-15 09:18:07 +03:00 |
| llama-model-loader.cpp | Merge ffn_up and ffn_gate experts tensors (#1137) | 2026-01-12 18:30:53 +02:00 |
| llama-model-loader.h | Merge ffn_up and ffn_gate experts tensors (#1137) | 2026-01-12 18:30:53 +02:00 |
| llama-model.cpp | Mimo-V2-Flash support (#1096) | 2026-01-05 08:00:01 +02:00 |
| llama-model.h | Merge ffn_up and ffn_gate experts tensors (#1137) | 2026-01-12 18:30:53 +02:00 |
| llama-quantize.cpp | Merge ffn_up and ffn_gate experts tensors (#1137) | 2026-01-12 18:30:53 +02:00 |
| llama-sampling.cpp | Merge remote-tracking branch 'origin/main' into ik/adaptive_p_2 | 2026-01-19 13:11:04 +00:00 |
| llama-sampling.h | A hopefully more efficient adaptive_p sampling (#1161) | 2026-01-19 15:01:55 +02:00 |
| llama-vocab.cpp | Server: refactor and rename functions (#1151) | 2026-01-18 08:16:57 +02:00 |
| llama-vocab.h | Update mtmd to improve accuracy of M-RoPE (#993) | 2025-11-29 07:27:15 +01:00 |
| llama.cpp | A hopefully more efficient adaptive_p sampling (#1161) | 2026-01-19 15:01:55 +02:00 |
| unicode-data.cpp | Merge mainline llama.cpp (#3) | 2024-07-27 07:55:01 +02:00 |
| unicode-data.h | Merge mainline llama.cpp (#3) | 2024-07-27 07:55:01 +02:00 |
| unicode.cpp | Server: refactor and rename functions (#1151) | 2026-01-18 08:16:57 +02:00 |
| unicode.h | Enable CUDA graphs for MoE models + GPT-OSS support (#689) | 2025-08-15 09:18:07 +03:00 |