ik_llama.cpp/.github/ISSUE_TEMPLATE
Kawrakow 0ceeb11721 Merge mainline llama.cpp (#3)
* Merging mainline - WIP

* Merging mainline - WIP

AVX2 and CUDA appear to work.
CUDA performance seems slightly (~1-2%) lower, as is so often
the case with llama.cpp/ggml after some "improvements" have been made.

* Merging mainline - fix Metal

* Remove check

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-07-27 07:55:01 +02:00
01-bug-low.yml build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
02-bug-medium.yml build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
03-bug-high.yml build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
04-bug-critical.yml build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
05-enhancement.yml github: add refactor to issue template (#7561) 2024-05-28 20:27:27 +10:00
06-research.yml github: add contact links to issues and convert question into research [no ci] (#7612) 2024-05-30 21:55:36 +10:00
07-refactor.yml github: add refactor to issue template (#7561) 2024-05-28 20:27:27 +10:00
config.yml Merge mainline llama.cpp (#3) 2024-07-27 07:55:01 +02:00