
ggml

Roadmap / Manifesto

Tensor library for machine learning

Note that this project is under active development.
Some of the development is currently happening in the llama.cpp and whisper.cpp repos.

Features

  • Low-level cross-platform implementation
  • Integer quantization support
  • Broad hardware support
  • Automatic differentiation
  • ADAM and L-BFGS optimizers
  • No third-party dependencies
  • Zero memory allocations during runtime

Build

git clone https://github.com/ggml-org/ggml
cd ggml

# install python dependencies in a virtual environment
python3.10 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# build the examples
mkdir build && cd build
cmake ..
cmake --build . --config Release -j 8

GPT inference (example)

# run the GPT-2 small 117M model
../examples/gpt-2/download-ggml-model.sh 117M
./bin/gpt-2-backend -m models/gpt-2-117M/ggml-model.bin -p "This is an example"

For more information, check out the corresponding programs in the examples folder.

Using CUDA

# fix the path to point to your CUDA compiler
cmake -DGGML_CUDA=ON -DCMAKE_CUDA_COMPILER=/usr/local/cuda-12.1/bin/nvcc ..

Using hipBLAS

cmake -DCMAKE_C_COMPILER="$(hipconfig -l)/clang" -DCMAKE_CXX_COMPILER="$(hipconfig -l)/clang++" -DGGML_HIP=ON ..

Using SYCL

# linux
source /opt/intel/oneapi/setvars.sh
cmake -G "Ninja" -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DGGML_SYCL=ON ..

# windows
"C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
cmake -G "Ninja" -DCMAKE_C_COMPILER=cl -DCMAKE_CXX_COMPILER=icx -DGGML_SYCL=ON ..

Compiling for Android

Download and unzip the Android NDK from the official download page. Set the NDK_ROOT_PATH environment variable or pass the absolute path to CMAKE_ANDROID_NDK in the command below.

cmake .. \
   -DCMAKE_SYSTEM_NAME=Android \
   -DCMAKE_SYSTEM_VERSION=33 \
   -DCMAKE_ANDROID_ARCH_ABI=arm64-v8a \
   -DCMAKE_ANDROID_NDK=$NDK_ROOT_PATH \
   -DCMAKE_ANDROID_STL_TYPE=c++_shared

# build the examples for the target device
cmake --build . --config Release -j 8

# create directories
adb shell 'mkdir /data/local/tmp/bin'
adb shell 'mkdir /data/local/tmp/models'

# push the compiled binaries to the folder
adb push bin/* /data/local/tmp/bin/

# push the ggml library
adb push src/libggml.so /data/local/tmp/

# push model files
adb push models/gpt-2-117M/ggml-model.bin /data/local/tmp/models/

adb shell
cd /data/local/tmp
export LD_LIBRARY_PATH=/data/local/tmp
./bin/gpt-2-backend -m models/ggml-model.bin -p "this is an example"

Resources