llama.cpp/include
copilot-swe-agent[bot] 51b679a5d6
semver: revert llama_export.h, fix ABI baseline to track full signatures
- Revert include/llama.h to use the original manual LLAMA_API visibility
  macro block (LLAMA_SHARED / LLAMA_BUILD)
- Revert src/CMakeLists.txt: remove GenerateExportHeader, restore
  LLAMA_BUILD/LLAMA_SHARED compile definitions and original
  target_include_directories
- Revert CMakeLists.txt: remove llama_export.h from LLAMA_PUBLIC_HEADERS
- Add scripts/gen-libllama-abi.py: Python parser that reads include/llama.h
  and extracts normalized full LLAMA_API function signatures (return type +
  name + parameter list), handling both plain declarations and declarations
  wrapped in the DEPRECATED() macro
- Regenerate scripts/libllama.abi with full signatures (233 entries)
- Update .github/workflows/libllama-abi-check.yml to use the header parser
  script instead of building the library and running nm; the check now runs
  in seconds with no compiler dependency
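
The header-parsing approach described above can be sketched roughly as follows. This is a minimal illustration only, not the actual scripts/gen-libllama-abi.py: the regexes, helper names, and normalization rules are assumptions, and the sample declarations are abbreviated.

```python
import re

# DEPRECATED(LLAMA_API <decl>, "msg");  ->  LLAMA_API <decl>;
# (illustrative unwrapping; the real script's handling may differ)
def unwrap_deprecated(header: str) -> str:
    return re.sub(r'DEPRECATED\((.*?),\s*"[^"]*"\s*\)', r'\1;',
                  header, flags=re.DOTALL)

# Matches: LLAMA_API <return type> <name>(<params>);
DECL_RE = re.compile(
    r'LLAMA_API\s+'
    r'(?P<ret>[\w\s\*]+?)\s*'       # return type, possibly "struct foo *"
    r'(?P<name>\w+)\s*'             # function name
    r'\((?P<params>[^;]*?)\)\s*;',  # parameter list (no ';' inside)
)

def normalize(text: str) -> str:
    # Collapse runs of whitespace so multi-line declarations compare stably.
    return ' '.join(text.split())

def extract_signatures(header: str) -> list[str]:
    header = unwrap_deprecated(header)
    sigs = set()
    for m in DECL_RE.finditer(header):
        sigs.add(f"{normalize(m.group('ret'))} "
                 f"{m.group('name')}({normalize(m.group('params'))})")
    return sorted(sigs)

# Abbreviated sample in the style of llama.h declarations:
sample = '''
LLAMA_API struct llama_context * llama_init_from_model(
                 struct llama_model * model,
           struct llama_context_params params);

DEPRECATED(LLAMA_API int32_t llama_n_ctx_train(const struct llama_model * model),
        "use llama_model_n_ctx_train instead");
'''

sigs = extract_signatures(sample)
for s in sigs:
    print(s)
```

Comparing such a sorted, whitespace-normalized signature list against a checked-in baseline catches both removed symbols and changed parameter or return types, which a symbol-name-only `nm` dump cannot.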

Agent-Logs-Url: https://github.com/ggml-org/llama.cpp/sessions/cd21903e-afd2-477a-8285-0a2d46e1398c

Co-authored-by: ggerganov <1991296+ggerganov@users.noreply.github.com>
2026-04-15 12:02:36 +00:00
llama-cpp.h llama : re-enable manual LoRA adapter free (#19983) 2026-03-18 12:03:26 +02:00
llama.h semver: revert llama_export.h, fix ABI baseline to track full signatures 2026-04-15 12:02:36 +00:00