Commit Graph

3341 Commits

Author SHA1 Message Date
Eve
2ca8fa37fa vulkan: use a more appropriate amount of threads when generating shaders (llama/16418)
* use a more flexible amount of threads

* fix windows compile and 0 thread case

* nominmax
2025-10-12 11:16:23 +03:00
Radoslav Gerganov
93882335a8 rpc : check src buffer when copying tensor (llama/16421)
Only the dst buffer is guaranteed to be an RPC buffer. Add a check for
the src one.
2025-10-12 11:16:23 +03:00
Radoslav Gerganov
af51bbab88 rpc : add support for multiple devices (llama/16276)
* rpc : add support for multiple devices

Allow rpc-server to expose multiple devices from a single endpoint.
Change RPC protocol to include device identifier where needed.

closes: #15210

* fixes

* use ggml_backend_reg_t

* address review comments

* fix llama-bench backend report

* address review comments, change device naming

* fix cmd order
2025-10-12 11:16:23 +03:00
Acly
49e0a426f3 vulkan : incremental shader builds (llama/16341)
* vulkan (DRAFT): split shader generation by GLSL source file, to improve incremental build times

* support dep-files so shaders are recompiled if their included files change

* rename shader files which are used as "headers" to use .glsl extension
* move glslc extension detection shaders to separate folders
* the above is to prevent them from getting glob'd with the actual compute shaders that need to be compiled

* vulkan : only write embedded shader .hpp/.cpp when they change

* avoid recompiling ggml-vulkan.cpp when editing shaders
* pass single --source argument instead of --input-dir & --filter to shader gen
* check for source file match earlier

* fix hang in vulkan-shaders-gen when there are compilation errors

* early out did not decrement compile_count

* clean up

* fix glslc integer dot product test

* unconditionally write the embedded shader cpp output

* replace output filepath in generated dep-files to match output in CMakeLists

---------

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
2025-10-12 11:16:23 +03:00
Georgi Gerganov
93c1305565 metal : fix loop bound in ggml_mem_ranges (llama/16412) 2025-10-12 11:16:23 +03:00
Acly
a70144a873 ggml : fix graph reallocation with multiple chunks (llama/16396)
Reallocation is needed if a single chunk grows in size,
even if the total allocation size stays the same or decreases.
2025-10-12 11:16:23 +03:00
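The per-chunk check described in this commit message can be sketched as follows (an illustrative C++ sketch, not ggml's actual allocator code; `needs_realloc` is a hypothetical helper — comparing only the total size would miss the case where one chunk grows while another shrinks):

```cpp
#include <cstddef>
#include <vector>

// Decide whether a graph reallocation is needed. The check must be per
// chunk: a single chunk growing forces a realloc even when the total
// allocation size stays the same or goes down.
bool needs_realloc(const std::vector<size_t> & old_chunks,
                   const std::vector<size_t> & new_chunks) {
    if (new_chunks.size() > old_chunks.size()) {
        return true; // more chunks than previously allocated
    }
    for (size_t i = 0; i < new_chunks.size(); ++i) {
        if (new_chunks[i] > old_chunks[i]) {
            return true; // this chunk grew, even if the total shrank
        }
    }
    return false;
}
```

For example, going from chunks `{10, 10}` to `{5, 12}` lowers the total from 20 to 17, yet still requires reallocation because the second chunk grew.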
Jeff Bolz
2e6888089f vulkan: Replace uses of maxMemoryAllocationSize and VK_WHOLE_SIZE (llama/16354)
* vulkan: Replace uses of maxMemoryAllocationSize and VK_WHOLE_SIZE

Replace maxMemoryAllocationSize check with maxBufferSize when creating buffers.
The maxMemoryAllocationSize limit is a "soft" limit and allocations can succeed
beyond that limit. This allows > 4GB buffers to be allocated on some
implementations (e.g. NVIDIA) and tensors this large can be used for im2col
and mul_mat.

For temporary buffers (prealloc_x/y/etc) check against maxStorageBufferRange.
I'm not sure this check is ideal, but we always use these buffers as a single
full size binding and the limit may be smaller than maxMemoryAllocationSize
or maxBufferSize, so I think this is reasonable.

Replace descriptor range uses of VK_WHOLE_SIZE with a manually computed range.
The maxStorageBufferRange may be smaller than the maxBufferSize or
maxMemoryAllocationSize (and the Vulkan spec warns about this in a note) and
it's invalid usage if VK_WHOLE_SIZE computes a range larger than
maxStorageBufferRange.

With this change, it should be possible to generate videos using wan networks
in stable-diffusion.cpp.

* vulkan: Add env var GGML_VK_FORCE_MAX_BUFFER_SIZE and use stoull
2025-10-12 11:16:23 +03:00
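The range-clamping idea this commit message describes — computing an explicit descriptor range instead of passing VK_WHOLE_SIZE, since a storage-buffer binding may not be allowed to span the whole buffer — can be sketched as (illustrative C++ only, not the actual ggml-vulkan code; `descriptor_range` is a hypothetical helper):

```cpp
#include <algorithm>
#include <cstdint>

// Compute the byte range for a storage-buffer descriptor binding.
// VK_WHOLE_SIZE would resolve to (buffer_size - offset), which is
// invalid usage when that exceeds maxStorageBufferRange, so the range
// is computed manually and clamped to the device limit.
uint64_t descriptor_range(uint64_t buffer_size, uint64_t offset,
                          uint64_t max_storage_buffer_range) {
    const uint64_t remaining = buffer_size - offset;
    return std::min(remaining, max_storage_buffer_range);
}
```

With an 8 GiB buffer and a ~4 GiB `maxStorageBufferRange`, the clamped value is what gets written into the descriptor instead of the whole remaining buffer.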
Jeff Bolz
90bdcf2ef6 vulkan: Fix FA coopmat1 invalid array indexing (llama/16365)
When computing sinks, the cm1 shader was looping r from 0 to Br rather than
to rows_per_thread. I must have copied this from the scalar path (where it is
correct), and somehow it wasn't causing failures on current drivers.
2025-10-12 11:16:23 +03:00
Jeff Bolz
fd11cd97ab vulkan: in flash attention, bounds check against nem1 (don't rely on GGML_KQ_MASK_PAD) (llama/16316) 2025-10-12 11:16:23 +03:00
Reese Levine
27ebde6afd ggml webgpu: add support for soft_max, optimize rms_norm (llama/16357)
* Add inplace softmax

* Move rms_norm to split row approach

* Update debug for supports_op

* clean up debug statements

* Update tests/test-backend-ops.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-10-12 11:16:23 +03:00
Piotr Wilkin (ilintar)
33ca8355c4 model : Apertus model implementation (llama/15852)
* First attempt

* No permute during convert (fixes qk tensors), proper norm application.

* RoPE = NeoX

* Coherence!

* Migrate xielu params from tensors to hyperparameters

* Simple CUDA kernel

* Revert stupid LLM refactorings

* Chat template support

* configchecker / flake8 errors

* Reorder unary.cu

* I do conclude that LLMs are, in fact, stupid.

* Fix after merge

* Final newline

* Make xIELU an UNARY_OP

* Final newline

* Correctly account for parameter shift

* Argh.

* Update ggml/src/ggml-cpu/unary-ops.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Refactor: remove unused methods, inline and factorize softplus, add const modifiers

* Revert CUDA changes, implement xIELU as a separate OP

* Pesky newline

* Add float2half / half2float for F16 inputs/outputs

* CUDA variants, attempt 2

* Actually, attempt 3

* Update ggml/src/ggml-cuda/unary.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Missing convert header

* Proper formula and reference for xIELU in the comments.

* Modify unary-ops.cpp to add the functor-based logic besides the template system to retain optimizations

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Add tensor mappings for Apertus to global list instead

* Fix lazy on scalars

* Update ggml/src/ggml-cuda/unary.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Add comment about the constraints on positive/negative alpha

* Change `softplus` to `ggml_softplus`

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-10-12 11:16:23 +03:00
R0CKSTAR
e29508be8b musa: update compile flags (llama/16265)
Signed-off-by: Xiaodong Ye <yeahdongcn@gmail.com>
2025-10-12 11:16:23 +03:00
uvos
b73f67d3f6 HIP: Disable ROCWMMA fattn on CDNA when compiled against ROCWMMA 2.0.0 (llama/16221)
* HIP: Disable ROCWMMA fattn on CDNA when compiled against ROCWMMA 2.0.0

rocwmma 2.0.0 includes a bug in the code faking fp16 accumulation on CDNA

* CUDA: Fix volta condition in ggml_cuda_should_use_wmma_fattn
2025-10-12 11:16:23 +03:00
Eve
b0560310aa vulkan: make ggml_vk_default_dispatcher support older vulkan headers (llama/16345)
* make ggml_vk_default_dispatcher support older vulkan headers

* simplify with using
2025-10-12 11:16:23 +03:00
lhez
31bb869929 opencl: support pad_ext (llama/15888) 2025-10-12 11:16:23 +03:00
Reese Levine
8208cea829 ggml webgpu: support for rope,div,sub,glu,scale,cont operators (llama/16187)
* Work on rope

* Simplify inplace operation generation and combine mul/add generation

* Work on rope variants

* implement neox rope

* rope complete

* Add sub,div,glu operators

* implement scale op

* Update cpy shader to handle cont/more types

* formatting

* Update test vars printing for rope,rms_norm

* Avoid ROPE hardcoded constants

* Add TODO to change ROPE constants to enum

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* fix TODO comment

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-10-12 11:16:23 +03:00
lhez
199626d79e opencl: support ne3 in get_rows (llama/15866) 2025-10-12 11:16:23 +03:00
Ruben Ortlam
c3b5c4d934
whisper : Support using devices of type iGPU (#3469) 2025-10-11 17:55:16 +03:00
Andreas Lubbe
85871a9469
whisper : add support for --carry-initial-prompt (#3395)
* Add support for --carry-initial-prompt

* PR fixes for ruby and go

* Refactoring for readability

* WIP 1

* WIP 2

* PR fixes

* More PR fixes

* PR fix

* Further simplification

* d'oh

* One more logic fix

* Update src/whisper.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Truncate prompt_past0 upon initialization

* Slight simplification

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-10-10 19:51:15 +03:00
Andreas Lubbe
a0ca50f3b9
cli: Fix assignment for vad_min_silence_duration_ms (#3467)
* cli: Fix assignment for vad_min_silence_duration_ms

Found and fixed this simple copy/paste error

* server : fix vad_min_silence_duration_ms assignment

---------

Co-authored-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2025-10-10 15:21:03 +02:00
Georgi Gerganov
d3a29d7b88
minor : fix code style (#3463) 2025-10-10 11:33:01 +03:00
Silviu Caragea
85d1d3d3dc
vad : free vad_segments in whisper_vad (#3463)
This commit fixes multiple issues:

* memory leak because vad_segments is never released
* segmentation fault when whisper_vad_segments_from_samples returns nullptr
* potential double free when the app fails to allocate memory for the filtered samples: the vad context is released at that point, but also released again within the state itself when whisper_free_state is called
2025-10-10 06:20:21 +02:00
Georgi Gerganov
98930fded1
whisper : clean-up headers 2025-10-09 10:48:52 +03:00
KITAITI Makoto
8877dfc11a
[skip ci] Bump Ruby bindings' version to 1.3.4 (#3461) 2025-10-08 20:45:20 +09:00
Daniel Bevenius
c8223a8548
vad : fix memory leaks in VAD implementation (#3453)
* vad : fix memory leak by storing ggml_context in vad context struct

This commit addresses a memory leak issue in the voice activity
detection (VAD) where the ggml_context is not stored within the vad
context structure.

The motivation for this change is that the context memory stays
allocated and the tensors still point to that memory, but it is never
freed.

* vad : free memory allocated for VAD hparams

This commit frees the model hyperparameters allocated for the VAD
context in the `whisper_vad_free` function. Specifically, it deletes the
`encoder_in_channels`, `encoder_out_channels`, and `kernel_sizes` arrays
allocated with `new[]` in the `whisper_vad_init` function.

The motivation for this is to prevent memory leaks when the VAD is used.

* vad: free ggml buffer in whisper_vad_free

This commit frees the ggml buffer in the whisper_vad_free function to
prevent memory leaks.

Resolves: https://github.com/ggml-org/whisper.cpp/issues/3452

* Revert "vad : fix memory leak by storing ggml_context in vad context struct"

This reverts commit aeafca437e.

* whisper : free ggml context in whisper_vad_init_context

This commit frees the ggml_context after initializing the VAD context in
the whisper_vad_init_context function.

The motivation for this is to prevent memory leaks.
2025-10-06 14:57:44 +02:00
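The allocation/free pairing this commit message describes — arrays allocated with `new[]` at init time must be released with `delete[]` in the matching free function — can be sketched as (illustrative C++ only; the struct and function names are hypothetical, not whisper.cpp's actual VAD types):

```cpp
#include <cstdint>

// Hyperparameter arrays owned by the VAD context; allocated at init,
// so they must be freed in the matching free function or they leak.
struct vad_hparams {
    int32_t * encoder_in_channels  = nullptr;
    int32_t * encoder_out_channels = nullptr;
    int32_t * kernel_sizes         = nullptr;
};

void vad_hparams_init(vad_hparams & hp, int n_layers) {
    hp.encoder_in_channels  = new int32_t[n_layers];
    hp.encoder_out_channels = new int32_t[n_layers];
    hp.kernel_sizes         = new int32_t[n_layers];
}

void vad_hparams_free(vad_hparams & hp) {
    delete [] hp.encoder_in_channels;
    delete [] hp.encoder_out_channels;
    delete [] hp.kernel_sizes;
    // null the pointers so a double free is harmless
    hp.encoder_in_channels = hp.encoder_out_channels = hp.kernel_sizes = nullptr;
}
```

Nulling the pointers after `delete []` makes the free function idempotent, which matters given the double-release path fixed in the earlier vad commits.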
KITAITI Makoto
7849aff7a2
ruby : Loose RegExp for test (#3448) 2025-10-01 15:33:11 +03:00
Daniel Bevenius
2a56869669
bindings-java : disable flash attention by default (#3445)
This commit disables flash-attention for the Java binding test so that
the testFullTranscribe test passes.

Without this change the test was failing because the expected output
mismatches after the flash-attention change:
```console
<And so my fellow Americans ask not what your country can do for you ask what you can do for your country.>
but was:
<and so my fellow Americans ask not what your country can do for you ask what you can do for your country>
```

An alternative would be to update the expected output, but it felt
better to keep the same expected output and disable flash-attention
rather than change the expectation to match the new behavior.
2025-10-01 09:13:34 +02:00
Georgi Gerganov
8c0855fd6b
bench : update [no ci] 2025-09-30 21:40:32 +03:00
Georgi Gerganov
47fcd7da8b
scripts : add -nfa option [no ci] 2025-09-30 21:37:00 +03:00
Georgi Gerganov
8a67c55c8a
wchess : fix link [no ci] 2025-09-30 21:28:03 +03:00
Georgi Gerganov
41fc9dea6a
release : v1.8.0 2025-09-30 21:25:36 +03:00
Daniel Bevenius
5904d00dbb
examples : add wchess.wasm to wasm examples build (#3443)
* examples : add wchess.wasm to wasm examples build

This commit adds the wchess.wasm example to the wasm examples that are
deployed to https://ggml.ai/whisper.cpp.

Refs: https://github.com/ggml-org/whisper.cpp/issues/3434#issuecomment-3346980420
2025-09-30 16:23:01 +02:00
Georgi Gerganov
0b3587acdd
whisper : enable flash attention by default (#3441) 2025-09-30 15:47:20 +03:00
Georgi Gerganov
1e5ad50f8f
bench : add rtx 5090 [no ci] 2025-09-30 13:58:15 +03:00
Georgi Gerganov
527ff158d0 ggml : bump version to 0.9.4 (ggml/1363) 2025-09-30 13:54:08 +03:00
Georgi Gerganov
e4bf87b0e9
bench : update [no ci] 2025-09-30 12:51:25 +03:00
Georgi Gerganov
b57b9d3a27
sync : ggml 2025-09-30 12:31:08 +03:00
anavp-nvidia
62b3b86e3f
cuda : Enable CUDA Graph usage for Nemotron Nano v2 (NemotronH) (llama/16328)
* Fix Nemotron Nano v2 9B not executing as CUDA Graph on NVIDIA GPUs

* fix to ensure test-backend-ops check passes
2025-09-30 12:31:04 +03:00
Georgi Gerganov
78f85f2b92
metal : dynamic simdgroups for MV kernels (llama/16340)
* metal : dynamic simdgroups for MV kernels

* cont : minor
2025-09-30 12:31:04 +03:00
Charles Xu
01e86b69ab
kleidiai : fix work size and threads sync for fp16 (llama/16246) 2025-09-30 12:31:04 +03:00
alex-spacemit
35ebdf7304
ggml: riscv: add riscv spacemit backend (llama/15288)
* ggml: add spacemit backend

Change-Id: I249bdc043485d815a9c351867137bc1e27cc2e23

* add new line at end of file

Change-Id: I889ed1c85fb45e62350ecde0c06f70450cadfbe2

* add riscv zba extension limit

Change-Id: I321eb200f859751727afe5cae13074dfce2bb0ce

* fixed for review comments, file renamed and format

Change-Id: Ia20b6ec24a36638e62e0fe07cf100916a7cce3ce

* fixed for code format, after clang-format

Change-Id: I5dc33a0412da3d3f2d77075d8939185d3009eca2

* use _Float16 instead of __fp16

Change-Id: I039fb02bb95270e641bc4442204e658735859d43

* add ci for riscv64-spacemit-ime-native

Change-Id: I711c1033061df1a289ea77891b2997599dfe8279

* update debian-13-riscv64-spacemit-ime-native ci label

Change-Id: Ifb2b891e2fca57b5da604fce2ac255f27731179a

* remove license comment for spacemit ime

Change-Id: If0dc3ca30a958631ccca0a28b62e0b825f9fb0c3

* upgrade binutils for gcc ime

Change-Id: Ibf2fa74c1064408974cb5b45f044d40987e5fb45

* add spacemit ime cross jobs

Change-Id: I80d74909941d41cb9cd09e51d8baf01c985cbfc6

* remove native compile for riscv64-spacemit-ime

Change-Id: I01920afafdc73fa7424014fd648d243f8ec9e25e

* ci : add caching for spacemit ime cross toolchain

Change-Id: Ic54a192019a2fd982bbd58225ce3bbc38f4053de

* ci: bug fixed for cache path and env

Change-Id: I28c42e10b6fff053bb6580926ca2353448cb042a

* Update .github/workflows/build-linux-cross.yml for cache path

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* bug fix for build-linux-cross.yml syntax error

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: cailinxi <linxi.cai@spacemit.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-09-30 12:31:03 +03:00
Rafal Lewczuk
94fe9bbe2b
ggml-backend : add root cause in error message if loading backend library fails (llama/16172)
This PR adds additional information to the error message emitted when loading a backend library via ld_load_library() fails. This helps spot why a backend library did not load (missing library, missing dependency, unresolved symbol, etc.).
2025-09-30 12:31:00 +03:00
Georgi Gerganov
32be14f8eb
bench : update [no ci] (#3439) 2025-09-29 17:42:38 +03:00
Georgi Gerganov
a77d11d91e
bench : warm-up all kernels (#3438) 2025-09-29 17:27:53 +03:00
Georgi Gerganov
22c12ee86d
ggml : remove oboslete files (#0) 2025-09-29 16:47:30 +03:00
Georgi Gerganov
d8cdcce884
ci : add self-hosted workflows (#3437)
* ci : add self-hosted workflows

* cont : fail workflow if there is an error
2025-09-29 16:42:39 +03:00
Georgi Gerganov
b4909a6c78
whisper : remove ggml_mul_mat padding (#3436) 2025-09-29 16:42:08 +03:00
Georgi Gerganov
fcf0181ee2
talk-llama : sync llama.cpp 2025-09-29 15:18:41 +03:00
Georgi Gerganov
404a93114c
sync : ggml 2025-09-29 15:18:18 +03:00
Georgi Gerganov
3201382792
cmake : remove metal flag (llama/0) 2025-09-29 15:18:13 +03:00