ggml

Roadmap / Manifesto

Tensor library for machine learning

Note that this project is under active development.
Some of the development is currently happening in the llama.cpp and whisper.cpp repos.

Features

  • Low-level cross-platform implementation
  • Integer quantization support
  • Broad hardware support
  • Automatic differentiation
  • ADAM and L-BFGS optimizers
  • No third-party dependencies
  • Zero memory allocations during runtime
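
At its core, ggml exposes a plain C graph-building API: tensors are defined inside a preallocated context, composed into a compute graph, and then evaluated, with no heap allocations during the compute itself. Below is a minimal CPU-only sketch of that flow. It is an illustration rather than repo-verified code: the exact header layout (e.g. whether ggml_graph_compute_with_ctx lives in ggml.h or ggml-cpu.h) varies between versions.

#include <stdio.h>
#include "ggml.h"
#include "ggml-cpu.h" // location of ggml_graph_compute_with_ctx varies by version

int main(void) {
    // one preallocated pool for all tensors: no allocations during compute
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16*1024*1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // declare the computation: z = x*y + y
    struct ggml_tensor * x = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * y = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * z = ggml_add(ctx, ggml_mul(ctx, x, y), y);

    // record the forward graph
    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, z);

    // fill the inputs by writing into the context-owned buffers
    for (int i = 0; i < 4; ++i) {
        ((float *) x->data)[i] = (float) i; // x = [0, 1, 2, 3]
        ((float *) y->data)[i] = 2.0f;      // y = [2, 2, 2, 2]
    }

    ggml_graph_compute_with_ctx(ctx, gf, /*n_threads =*/ 1);

    // expect z = [2, 4, 6, 8]
    for (int i = 0; i < 4; ++i) {
        printf("z[%d] = %.1f\n", i, ((float *) z->data)[i]);
    }

    ggml_free(ctx);
    return 0;
}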

Build

git clone https://github.com/ggml-org/ggml
cd ggml

# install python dependencies in a virtual environment
python3.10 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# build the examples
mkdir build && cd build
cmake ..
cmake --build . --config Release -j 8

GPT inference (example)

# run the GPT-2 small 117M model
../examples/gpt-2/download-ggml-model.sh 117M
./bin/gpt-2-backend -m models/gpt-2-117M/ggml-model.bin -p "This is an example"

For more information, check out the corresponding programs in the examples folder.

Using CUDA

# fix the path to point to your CUDA compiler
cmake -DGGML_CUDA=ON -DCMAKE_CUDA_COMPILER=/usr/local/cuda-12.1/bin/nvcc ..

Using hipBLAS

cmake -DCMAKE_C_COMPILER="$(hipconfig -l)/clang" -DCMAKE_CXX_COMPILER="$(hipconfig -l)/clang++" -DGGML_HIP=ON ..

Using SYCL

# linux
source /opt/intel/oneapi/setvars.sh
cmake -G "Ninja" -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DGGML_SYCL=ON ..

# windows
"C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
cmake -G "Ninja" -DCMAKE_C_COMPILER=cl -DCMAKE_CXX_COMPILER=icx -DGGML_SYCL=ON ..

Compiling for Android

Download and unzip the NDK from the official NDK download page. Set the NDK_ROOT_PATH environment variable, or pass the NDK's absolute path directly to CMAKE_ANDROID_NDK in the command below.

cmake .. \
   -DCMAKE_SYSTEM_NAME=Android \
   -DCMAKE_SYSTEM_VERSION=33 \
   -DCMAKE_ANDROID_ARCH_ABI=arm64-v8a \
   -DCMAKE_ANDROID_NDK=$NDK_ROOT_PATH \
   -DCMAKE_ANDROID_STL_TYPE=c++_shared

# build the binaries
cmake --build . --config Release -j 8

# create directories
adb shell 'mkdir /data/local/tmp/bin'
adb shell 'mkdir /data/local/tmp/models'

# push the compiled binaries to the folder
adb push bin/* /data/local/tmp/bin/

# push the ggml library
adb push src/libggml.so /data/local/tmp/

# push model files
adb push models/gpt-2-117M/ggml-model.bin /data/local/tmp/models/

adb shell
cd /data/local/tmp
export LD_LIBRARY_PATH=/data/local/tmp
./bin/gpt-2-backend -m models/ggml-model.bin -p "this is an example"

Resources