ik_llama.cpp/requirements
Kawrakow 154e0d75fc
Merge mainline llama.cpp (#3)
* Merging mainline - WIP

* Merging mainline - WIP

AVX2 and CUDA appear to work.
CUDA performance seems slightly (~1-2%) lower, as is so often
the case with llama.cpp/ggml after some "improvements" have been made.

* Merging mainline - fix Metal

* Remove check

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-07-27 07:55:01 +02:00
requirements-all.txt
requirements-compare-llama-bench.txt
requirements-convert_hf_to_gguf_update.txt
requirements-convert_hf_to_gguf.txt
requirements-convert_legacy_llama.txt
requirements-convert_llama_ggml_to_gguf.txt
requirements-convert_lora_to_gguf.txt
requirements-pydantic.txt
requirements-test-tokenizer-random.txt