mirror of
https://github.com/ggerganov/llama.cpp
synced 2026-04-30 19:21:18 +02:00
- Revert include/llama.h to use the original manual LLAMA_API visibility macro block (LLAMA_SHARED / LLAMA_BUILD)
- Revert src/CMakeLists.txt: remove GenerateExportHeader, restore the LLAMA_BUILD/LLAMA_SHARED compile definitions and the original target_include_directories
- Revert CMakeLists.txt: remove llama_export.h from LLAMA_PUBLIC_HEADERS
- Add scripts/gen-libllama-abi.py: a Python parser that reads include/llama.h and extracts normalized full LLAMA_API function signatures (return type + name + parameter list), handling both plain and DEPRECATED() patterns
- Regenerate scripts/libllama.abi with full signatures (233 entries)
- Update .github/workflows/libllama-abi-check.yml to use the header parser script instead of building the library and running nm; the check now runs in seconds with no compiler dependency

Agent-Logs-Url: https://github.com/ggml-org/llama.cpp/sessions/cd21903e-afd2-477a-8285-0a2d46e1398c
Co-authored-by: ggerganov <1991296+ggerganov@users.noreply.github.com>
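As a rough illustration of what a header-based ABI extractor like the one this commit adds might look like, the sketch below pulls LLAMA_API declarations out of a header string with a regex, handling both plain declarations and ones wrapped in a DEPRECATED(decl, "message") macro, and normalizes whitespace into one-line signatures. The sample header and the exact patterns are assumptions for illustration; the real scripts/gen-libllama-abi.py may work differently.

```python
import re

# Illustrative header snippet: one plain LLAMA_API declaration spanning
# multiple lines, and one wrapped in a DEPRECATED(decl, "msg") macro.
HEADER = """
LLAMA_API struct llama_model * llama_model_load_from_file(
                         const char * path_model,
              struct llama_model_params params);

DEPRECATED(LLAMA_API int32_t llama_n_ctx_train(const struct llama_model * model),
           "use llama_model_n_ctx_train instead");
"""

def extract_api_signatures(header_text: str) -> list[str]:
    """Return normalized 'return-type name(params)' strings for each LLAMA_API decl."""
    # Non-greedily capture from the return type up to the closing paren of the
    # parameter list; optionally consume the trailing , "msg") of DEPRECATED().
    pattern = re.compile(
        r"LLAMA_API\s+(?P<decl>[^;]+?\))\s*(?:,\s*\"[^\"]*\"\s*\))?\s*;",
        re.DOTALL,
    )
    sigs = []
    for m in pattern.finditer(header_text):
        decl = m.group("decl")
        # Collapse runs of whitespace so multi-line declarations become one line,
        # then tidy spacing just inside the parentheses.
        decl = re.sub(r"\s+", " ", decl).strip()
        decl = re.sub(r"\(\s+", "(", decl)
        sigs.append(decl)
    return sigs

if __name__ == "__main__":
    for sig in extract_api_signatures(HEADER):
        print(sig)
```

A sorted list of such signatures, committed as a text file, makes the ABI check a plain diff against the regenerated output, which is why the workflow no longer needs a compiler or nm.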