ik_llama.cpp/cmake
Kawrakow 0ceeb11721 Merge mainline llama.cpp (#3)
* Merging mainline - WIP

* Merging mainline - WIP

AVX2 and CUDA appear to work.
CUDA performance seems slightly (~1-2%) lower, as is so often
the case with llama.cpp/ggml after some "improvements" have been made.

* Merging mainline - fix Metal

* Remove check

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-07-27 07:55:01 +02:00
arm64-windows-llvm.cmake   ggml : prevent builds with -ffinite-math-only (#7726)                                       2024-06-04 17:01:09 +10:00
arm64-windows-msvc.cmake   Add support for properly optimized Windows ARM64 builds with LLVM and MSVC (#7191)         2024-05-16 12:47:36 +10:00
build-info.cmake           Merge mainline llama.cpp (#3)                                                               2024-07-27 07:55:01 +02:00
git-vars.cmake             Merge mainline llama.cpp (#3)                                                               2024-07-27 07:55:01 +02:00
llama-config.cmake.in      Merge mainline llama.cpp (#3)                                                               2024-07-27 07:55:01 +02:00
llama.pc.in                cmake : add pkg-config spec file for llama.cpp (#7702)                                      2024-06-03 11:06:24 +03:00
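The llama.pc.in entry above is the pkg-config template added in #7702; at install time CMake substitutes its variables to produce a llama.pc file that downstream builds can query. A minimal sketch of the kind of file such a template generates — the prefix, description, and version shown here are illustrative assumptions, not the actual generated values:

```
# llama.pc -- illustrative sketch of a pkg-config file generated from llama.pc.in
# (prefix, Description, and Version below are assumed placeholder values)
prefix=/usr/local
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: llama
Description: C/C++ inference library for LLaMA-family models
Version: 0.0.1
Libs: -L${libdir} -lllama
Cflags: -I${includedir}
```

Once installed, a consumer can compile against the library with, e.g., `cc main.c $(pkg-config --cflags --libs llama)`.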