Commit Graph

3360 Commits

Author SHA1 Message Date
Georgi Gerganov
ea174c62bc bench : update [no ci] 2025-10-12 11:16:23 +03:00
Georgi Gerganov
ff4c1a5a53 talk-llama : sync llama.cpp 2025-10-12 11:16:23 +03:00
Georgi Gerganov
ed6a3063ec sync : ggml 2025-10-12 11:16:23 +03:00
Georgi Gerganov
d201705e71 metal : fix mul-mm condition + fix mul-mv permuted kernels (llama/16494) 2025-10-12 11:16:23 +03:00
Diego Devesa
1cc342427b cuda : avoid initializing unused devices (llama/16510) 2025-10-12 11:16:23 +03:00
Prajwal B Mehendarkar
d8f1aa4e1d cmake : Dont define XOPENSOURCE on AIX (llama/16481) 2025-10-12 11:16:23 +03:00
duduta
d83fef35df cpu : optimize the ggml NORM operation (llama/15953)
* ggml-cpu: optimize norm operation to use intrinsics or Accelerate

rename function

add endif macro comment

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Aaron Teo <taronaeo@gmail.com>

* implement s390x SIMD suggested by @taronaeo

* add TODO comment

* tidy up spaces

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Aaron Teo <taronaeo@gmail.com>
2025-10-12 11:16:23 +03:00
Chenguang Li
b9eac9419c CANN: Improve ACL graph matching (llama/16166)
* CANN: improve ACL graph matching

Record `ne` and `nb` information for src tensors and include them in the
graph matching check. This enhances the robustness of ACL graph matching
by preventing incorrect matches when src tensors share the same data
address but differ in shape or stride.

* CANN: add op_params match
2025-10-12 11:16:23 +03:00
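The shape/stride check described in the commit above can be sketched as follows. This is an illustrative model, not the actual CANN code; the struct and function names are hypothetical.

```cpp
#include <array>
#include <cstdint>

// Illustrative record of the per-src properties the commit adds to the
// graph-matching check: data address plus shape (ne) and stride (nb).
struct TensorKey {
    const void* data;
    std::array<int64_t, 4> ne; // extents per dimension
    std::array<int64_t, 4> nb; // strides per dimension, in bytes
};

// Two src tensors only match if address AND shape AND stride all agree.
// Matching on the address alone would confuse views that share storage
// but differ in shape or stride.
inline bool src_matches(const TensorKey& a, const TensorKey& b) {
    return a.data == b.data && a.ne == b.ne && a.nb == b.nb;
}
```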
Charles Xu
c8b2c56fd2 kleidiai: kernel interface refactoring (llama/16460) 2025-10-12 11:16:23 +03:00
Neo Zhang Jianyu
7df6766b63 refactor soft_max, add soft_max_back (llama/16472)
* refactor to support soft_max_ext

* fix error and support soft_max_back

* rm unused functions

* fix format issue

---------

Co-authored-by: Zhang Jianyu <zhang.jianyu@outlook.com>
2025-10-12 11:16:23 +03:00
ai-fonsi
21e6e72a2f Disable CUDA host buffers on integrated GPUs (llama/16308) 2025-10-12 11:16:23 +03:00
Georgi Gerganov
7ef78a72e1 metal : mark FA blocks (llama/16372)
* metal : better unroll in the FA kernels

* metal : index FA blocks

* tests : restore [no ci]

* metal : prevent division by zero in FA kernels

* metal : fix -INF detection logic
2025-10-12 11:16:23 +03:00
Reese Levine
4eea3efc49 ggml webgpu: profiling, CI updates, reworking of command submission (llama/16452)
* Add profiling

* More detailed profiling

* Rework command submission to avoid global locks

* Update wait handling

* try new method of waiting on futures

* Add serializing of command submission in some cases

* Add new pool for timestamp queries and clean up logging

* Serialize command submission in CI and leave a TODO note

* Update webgpu CI

* Add myself as WebGPU codeowner

* Deadlock avoidance

* Leave WebGPU/Vulkan CI serialized

* Fix divide by 0

* Fix logic in division by inflight_threads

* Update CODEOWNERS and remove serialize submit option
2025-10-12 11:16:23 +03:00
Georgi Gerganov
4bce4fa5e9 metal : add support for non-padded FA KV (llama/16148)
* metal : pad K, V and Mask when needed

* cont : simplify

* cuda : add TODO about KV padding requirement

* metal : add comments

* metal : remove mask padding requirement
2025-10-12 11:16:23 +03:00
Georgi Gerganov
6cf0c21b09 tests : add -INF blocks to the KQ mask in the FA tests (llama/16380)
* tests : add -INF blocks to the KQ mask in the FA tests

* cont : bump -INF block size to 64

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>

* ggml : prevent division by zero in FA CPU op

---------

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
2025-10-12 11:16:23 +03:00
Georgi Gerganov
1a4116f942 metal : various optimizations + refactoring (llama/16446)
* metal : ssm_scan minor opts

* metal : get_rows optimize

* metal : cpy optimize

* metal : ssm_conv opt

* metal : ssm_scan simplify

* metal : ssm_scan opt
2025-10-12 11:16:23 +03:00
Georgi Gerganov
0e431b3cea ggml : fix unaligned access in AMX code (llama/16315) 2025-10-12 11:16:23 +03:00
Daniel Bevenius
0f29d7c3fa ggml-cpu : fix leftover handling in ggml_vec_scale_f32 for SVE (llama/16443)
This commit updates the leftover handling in ggml_vec_scale_f32.

The motivation for this is that the code currently incorrectly assumes
there would be fewer than ggml_f32_epr leftover elements. However,
since the main loop processes 2*ggml_f32_epr elements per iteration,
there can be up to (2*ggml_f32_epr - 1) leftover elements.

The original single-pass leftover code could only process ggml_f32_epr
elements, leaving some elements unscaled.

Example scenario with 256-bit SVE:
```
ggml_f32_epr  = 8 (elements per register)
ggml_f32_step = 16 (two registers per iteration)
n             = 25
np            = 16
leftovers     = 9 elements (16-24)

Original    : processes only elements 16-23, misses element 24
This commit : loop processes elements 16-23, then element 24
```

Refs: https://github.com/ggml-org/llama.cpp/actions/runs/18070620247/job/51419855630
2025-10-12 11:16:23 +03:00
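The fixed leftover handling from the commit above can be illustrated with a plain scalar sketch (the real code uses SVE intrinsics; the constants match the 256-bit example in the commit message and are assumptions, not the actual implementation):

```cpp
#include <vector>

// Scalar model of ggml_vec_scale_f32 after the fix: the main loop consumes
// 2*epr elements per iteration, so up to (2*epr - 1) elements can remain,
// and they need an element-wise second pass rather than a single epr-wide one.
void vec_scale_f32(std::vector<float>& x, float s) {
    const int epr  = 8;         // elements per register (256-bit SVE example)
    const int step = 2 * epr;   // two registers per main-loop iteration
    const int n    = (int)x.size();
    const int np   = (n / step) * step; // elements covered by the main loop

    for (int i = 0; i < np; i += step) {  // "vector" main loop
        for (int j = 0; j < step; ++j) x[i + j] *= s;
    }
    for (int i = np; i < n; ++i) {        // leftover loop: up to step-1 elements
        x[i] *= s;
    }
}
```

With n = 25 this scales elements 0-15 in the main loop and 16-24 in the leftover loop, including element 24, which the original single-pass leftover code missed.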
Reese Levine
b8bdf06182 ggml webgpu: actually add softmax, fix rms_norm offset (llama/16400)
* implement soft_max

* Fix soft_max data race

* Temporary fix, wait on each submit
2025-10-12 11:16:23 +03:00
Eve
2ca8fa37fa vulkan: use a more appropriate amount of threads when generating shaders (llama/16418)
* use a more flexible amount of threads

* fix windows compile and 0 thread case

* nominmax
2025-10-12 11:16:23 +03:00
Radoslav Gerganov
93882335a8 rpc : check src buffer when copying tensor (llama/16421)
Only the dst buffer is guaranteed to be an RPC buffer. Add a check for
the src one.
2025-10-12 11:16:23 +03:00
Radoslav Gerganov
af51bbab88 rpc : add support for multiple devices (llama/16276)
* rpc : add support for multiple devices

Allow rpc-server to expose multiple devices from a single endpoint.
Change RPC protocol to include device identifier where needed.

closes: #15210

* fixes

* use ggml_backend_reg_t

* address review comments

* fix llama-bench backend report

* address review comments, change device naming

* fix cmd order
2025-10-12 11:16:23 +03:00
Acly
49e0a426f3 vulkan : incremental shader builds (llama/16341)
* vulkan (DRAFT): split shader generation by GLSL source file, to improve incremental build times

* support dep-files so shaders are recompiled if their included files change

* rename shader files which are used as "headers" to use .glsl extension
* move glslc extension detection shaders to separate folders
* the above is to prevent them from getting glob'd with the actual compute shaders that need to be compiled

* vulkan : only write embedded shader .hpp/.cpp when they change

* avoid recompiling ggml-vulkan.cpp when editing shaders
* pass single --source argument instead of --input-dir & --filter to shader gen
* check for source file match earlier

* fix hang in vulkan-shaders-gen when there are compilation errors

* early out did not decrement compile_count

* clean up

* fix glslc integer dot product test

* unconditionally write the embedded shader cpp output

* replace output filepath in generated dep-files to match output in CMakeLists

---------

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
2025-10-12 11:16:23 +03:00
Georgi Gerganov
93c1305565 metal : fix loop bound in ggml_mem_ranges (llama/16412) 2025-10-12 11:16:23 +03:00
Acly
a70144a873 ggml : fix graph reallocation with multiple chunks (llama/16396)
reallocation is needed if a single chunk grows in size,
even if total allocation size stays the same or is lower
2025-10-12 11:16:23 +03:00
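The per-chunk condition described above can be sketched as a small predicate (hypothetical names, not the actual ggml allocator API): reallocation is required as soon as any single chunk needs to grow, even when the total requested size stays the same or shrinks.

```cpp
#include <cstddef>
#include <vector>

// Sketch: compare the currently allocated chunk sizes against the newly
// required ones. A total-size comparison would miss the case where one
// chunk grows while another shrinks by the same amount.
bool needs_realloc(const std::vector<size_t>& allocated,
                   const std::vector<size_t>& required) {
    if (required.size() > allocated.size()) return true; // more chunks than before
    for (size_t i = 0; i < required.size(); ++i) {
        if (required[i] > allocated[i]) return true;     // a single chunk grew
    }
    return false;
}
```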
Jeff Bolz
2e6888089f vulkan: Replace uses of maxMemoryAllocationSize and VK_WHOLE_SIZE (llama/16354)
* vulkan: Replace uses of maxMemoryAllocationSize and VK_WHOLE_SIZE

Replace maxMemoryAllocationSize check with maxBufferSize when creating buffers.
The maxMemoryAllocationSize limit is a "soft" limit and allocations can succeed
beyond that limit. This allows > 4GB buffers to be allocated on some
implementations (e.g. NVIDIA) and tensors this large can be used for im2col
and mul_mat.

For temporary buffers (prealloc_x/y/etc) check against maxStorageBufferRange.
I'm not sure this check is ideal, but we always use these buffers as a single
full size binding and the limit may be smaller than maxMemoryAllocationSize
or maxBufferSize, so I think this is reasonable.

Replace descriptor range uses of VK_WHOLE_SIZE with a manually computed range.
The maxStorageBufferRange may be smaller than the maxBufferSize or
maxMemoryAllocationSize (and the Vulkan spec warns about this in a note) and
it's invalid usage if VK_WHOLE_SIZE computes a range larger than
maxStorageBufferRange.

With this change, it should be possible to generate videos using wan networks
in stable-diffusion.cpp.

* vulkan: Add env var GGML_VK_FORCE_MAX_BUFFER_SIZE and use stoull
2025-10-12 11:16:23 +03:00
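The manual range computation that replaces VK_WHOLE_SIZE can be sketched like this (a simplified model, not the actual ggml-vulkan code): the bound descriptor range must never exceed maxStorageBufferRange, which may be smaller than the buffer itself.

```cpp
#include <algorithm>
#include <cstdint>

// Sketch: what VK_WHOLE_SIZE would bind is the remainder of the buffer
// past the offset; clamp that against the device's maxStorageBufferRange
// limit to avoid invalid usage on implementations where the limit is
// smaller than maxBufferSize or maxMemoryAllocationSize.
uint64_t descriptor_range(uint64_t buffer_size, uint64_t offset,
                          uint64_t max_storage_buffer_range) {
    const uint64_t remaining = buffer_size - offset;
    return std::min(remaining, max_storage_buffer_range);
}
```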
Jeff Bolz
90bdcf2ef6 vulkan: Fix FA coopmat1 invalid array indexing (llama/16365)
When computing sinks, the cm1 shader was looping r from 0 to Br rather than
to rows_per_thread. I must have copied this from the scalar path (where it is
correct), and somehow it wasn't causing failures on current drivers.
2025-10-12 11:16:23 +03:00
Jeff Bolz
fd11cd97ab vulkan: in flash attention, bounds check against nem1 (don't rely on GGML_KQ_MASK_PAD) (llama/16316) 2025-10-12 11:16:23 +03:00
Reese Levine
27ebde6afd ggml webgpu: add support for soft_max, optimize rms_norm (llama/16357)
* Add inplace softmax

* Move rms_norm to split row approach

* Update debug for supports_op

* clean up debug statements

* Update tests/test-backend-ops.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-10-12 11:16:23 +03:00
Piotr Wilkin (ilintar)
33ca8355c4 model : Apertus model implementation (llama/15852)
* First attempt

* No permute during convert (fixes qk tensors), proper norm application.

* RoPE = NeoX

* Coherence!

* Migrate xielu params from tensors to hyperparameters

* Simple CUDA kernel

* Revert stupid LLM refactorings

* Chat template support

* configchecker / flake8 errors

* Reorder unary.cu

* I do conclude that LLMs are, in fact, stupid.

* Fix after merge

* Final newline

* Make xIELU an UNARY_OP

* Final newline

* Correctly account for parameter shift

* Argh.

* Update ggml/src/ggml-cpu/unary-ops.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Refactor: remove unused methods, inline and factorize softplus, add const modifiers

* Revert CUDA changes, implement xIELU as a separate OP

* Pesky newline

* Add float2half / half2float for F16 inputs/outputs

* CUDA variants, attempt 2

* Actually, attempt 3

* Update ggml/src/ggml-cuda/unary.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Missing convert header

* Proper formula and reference for xIELU in the comments.

* Modify unary-ops.cpp to add the functor-based logic besides the template system to retain optimizations

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Add tensor mappings for Apertus to global list instead

* Fix lazy on scalars

* Update ggml/src/ggml-cuda/unary.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Add comment about the constraints on positive/negative alpha

* Change `softplus` to `ggml_softplus`

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-10-12 11:16:23 +03:00
R0CKSTAR
e29508be8b musa: update compile flags (llama/16265)
Signed-off-by: Xiaodong Ye <yeahdongcn@gmail.com>
2025-10-12 11:16:23 +03:00
uvos
b73f67d3f6 HIP: Disable ROCWMMA fattn on CDNA when compiled against ROCWMMA 2.0.0 (llama/16221)
* HIP: Disable ROCWMMA fattn on CDNA when compiled against ROCWMMA 2.0.0

rocwmma 2.0.0 includes a bug in the code faking fp16 accumulation on CDNA

* CUDA: Fix volta condition in ggml_cuda_should_use_wmma_fattn
2025-10-12 11:16:23 +03:00
Eve
b0560310aa vulkan: make ggml_vk_default_dispatcher support older vulkan headers (llama/16345)
* make ggml_vk_default_dispatcher support older vulkan headers

* simplify with using
2025-10-12 11:16:23 +03:00
lhez
31bb869929 opencl: support pad_ext (llama/15888) 2025-10-12 11:16:23 +03:00
Reese Levine
8208cea829 ggml webgpu: support for rope,div,sub,glu,scale,cont operators (llama/16187)
* Work on rope

* Simplify inplace operation generation and combine mul/add generation

* Work on rope variants

* implement neox rope

* rope complete

* Add sub,div,glu operators

* implement scale op

* Update cpy shader to handle cont/more types

* formatting

* Update test vars printing for rope,rms_norm

* Avoid ROPE hardcoded constants

* Add TODO to change ROPE constants to enum

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* fix TODO comment

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-10-12 11:16:23 +03:00
lhez
199626d79e opencl: support ne3 in get_rows (llama/15866) 2025-10-12 11:16:23 +03:00
Ruben Ortlam
c3b5c4d934 whisper : Support using devices of type iGPU (#3469) 2025-10-11 17:55:16 +03:00
Andreas Lubbe
85871a9469 whisper : add support for --carry-initial-prompt (#3395)
* Add support for --carry-initial-prompt

* PR fixes for ruby and go

* Refactoring for readability

* WIP 1

* WIP 2

* PR fixes

* More PR fixes

* PR fix

* Further simplification

* d'oh

* One more logic fix

* Update src/whisper.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Truncate prompt_past0 upon initialization

* Slight simplification

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-10-10 19:51:15 +03:00
Andreas Lubbe
a0ca50f3b9 cli: Fix assignment for vad_min_silence_duration_ms (#3467)
* cli: Fix assignment for vad_min_silence_duration_ms

Found and fixed this simple copy/paste error

* server : fix vad_min_silence_duration_ms assignment

---------

Co-authored-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2025-10-10 15:21:03 +02:00
Georgi Gerganov
d3a29d7b88 minor : fix code style (#3463) 2025-10-10 11:33:01 +03:00
Silviu Caragea
85d1d3d3dc vad : free vad_segments in whisper_vad (#3463)
This commit fixes multiple issues:

* memory leak because vad_segments is never released
* segmentation fault when whisper_vad_segments_from_samples returns nullptr
* potential segmentation fault when the app fails to allocate memory for the filtered samples and the vad context is released, but then gets released again within the state itself when whisper_free_state is called
2025-10-10 06:20:21 +02:00
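The guard-and-free pattern the fix above describes can be sketched in miniature (hypothetical stand-in types and names, not the actual whisper.cpp API): check the result for nullptr before use, and release the segments exactly once.

```cpp
// Stand-in for the segments produced by VAD; the real type lives in whisper.cpp.
struct vad_segments { int n; };

// Stand-in for whisper_vad_segments_from_samples, which may return nullptr.
vad_segments* segments_from_samples(bool ok) {
    return ok ? new vad_segments{1} : nullptr;
}

bool run_vad(bool ok) {
    vad_segments* segs = segments_from_samples(ok);
    if (segs == nullptr) {
        return false;  // guard: avoids the segfault on a null return
    }
    // ... use segs ...
    delete segs;       // release exactly once: fixes the leak
    return true;
}
```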
Georgi Gerganov
98930fded1 whisper : clean-up headers 2025-10-09 10:48:52 +03:00
KITAITI Makoto
8877dfc11a [skip ci] Bump Ruby bindings' version to 1.3.4 (#3461) 2025-10-08 20:45:20 +09:00
Daniel Bevenius
c8223a8548 vad : fix memory leaks in VAD implementation (#3453)
* vad : fix memory leak by storing ggml_context in vad context struct

This commit addresses a memory leak issue in the voice activity
detection (VAD) where the ggml_context is not stored within the vad
context structure.

The motivation for this change is that the context memory stays
allocated and the tensors still point to that memory, but it is never
freed.

* vad : free memory allocated for VAD hparams

This commit frees the model hyperparameters allocated for the VAD
context in the `whisper_vad_free` function. Specifically, it deletes the
`encoder_in_channels`, `encoder_out_channels`, and `kernel_sizes` arrays
allocated with `new[]` in the `whisper_vad_init` function.

The motivation for this is to prevent memory leaks in the VAD.

* vad: free ggml buffer in whisper_vad_free

This commit frees the ggml buffer in the whisper_vad_free function to
prevent memory leaks.

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3452

* Revert "vad : fix memory leak by storing ggml_context in vad context struct"

This reverts commit aeafca437e.

* whisper : free ggml context in whisper_vad_init_context

This commit frees the ggml_context after initializing the VAD context in
the whisper_vad_init_context function.

The motivation for this is to prevent memory leaks.
2025-10-06 14:57:44 +02:00
KITAITI Makoto
7849aff7a2 ruby : Loose RegExp for test (#3448) 2025-10-01 15:33:11 +03:00
Daniel Bevenius
2a56869669 bindings-java : disable flash attention by default (#3445)
This commit disables flash-attention for the Java binding test so that
the testFullTranscribe test passes.

Without this change the test was failing because the expected output
mismatches after the flash-attention change:
```console
<And so my fellow Americans ask not what your country can do for you ask what you can do for your country.>
but was:
<and so my fellow Americans ask not what your country can do for you ask what you can do for your country>
```

An alternative would be to update the expected output, but it felt
better to keep the existing expected output and disable flash-attention
rather than change the expectation to match the new behavior.
2025-10-01 09:13:34 +02:00
Georgi Gerganov
8c0855fd6b bench : update [no ci] 2025-09-30 21:40:32 +03:00
Georgi Gerganov
47fcd7da8b scripts : add -nfa option [no ci] 2025-09-30 21:37:00 +03:00
Georgi Gerganov
8a67c55c8a wchess : fix link [no ci] 2025-09-30 21:28:03 +03:00
Georgi Gerganov
41fc9dea6a release : v1.8.0 2025-09-30 21:25:36 +03:00