Mirror of https://github.com/ggerganov/llama.cpp (synced 2026-04-03 13:56:07 +02:00)
Latest commit:

- Add profiling
- More detailed profiling
- Rework command submission to avoid global locks
- Update wait handling
- Try a new method of waiting on futures
- Add serializing of command submission in some cases (see the sketch after this list)
- Add a new pool for timestamp queries and clean up logging
- Serialize command submission in CI and leave a TODO note
- Update WebGPU CI
- Add myself as WebGPU codeowner
- Deadlock avoidance
- Leave WebGPU/Vulkan CI serialized
- Fix divide-by-zero
- Fix logic in division by inflight_threads
- Update CODEOWNERS and remove the serialize-submit option
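The commit history above adds, and per the final item later removes, an option to serialize command submission. As a rough illustration of what such a toggle can look like, here is a minimal, self-contained C++ sketch: a mutex-guarded submit path enabled by an environment variable. Everything here is hypothetical (the `GGML_WEBGPU_SERIALIZE_SUBMIT` flag, the `submit` and `queue_submit_stub` names), not the actual ggml WebGPU backend code.

```cpp
#include <cstdio>
#include <cstdlib>
#include <mutex>

static std::mutex g_submit_mutex;  // guards the serialized submit path

// Hypothetical stand-in for handing a command buffer to the GPU queue.
static void queue_submit_stub(int cmd_id) {
    std::printf("submitting command buffer %d\n", cmd_id);
}

// Submit one command buffer, serializing when the (hypothetical)
// environment toggle is set -- the kind of switch that is useful when
// concurrent submissions trigger flaky behavior on CI runners.
void submit(int cmd_id) {
    static const bool serialize =
        std::getenv("GGML_WEBGPU_SERIALIZE_SUBMIT") != nullptr;
    if (serialize) {
        // One submission at a time, trading throughput for determinism.
        std::lock_guard<std::mutex> lock(g_submit_mutex);
        queue_submit_stub(cmd_id);
    } else {
        queue_submit_stub(cmd_id);  // concurrent submissions allowed
    }
}

int main() {
    submit(0);
    submit(1);
    return 0;
}
```

Serializing only under a runtime flag keeps the fast concurrent path as the default while making CI failures reproducible, which matches the pattern the commit messages describe.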
| Workflow file |
|---|
| bench.yml.disabled |
| build-amd.yml |
| build-cache.yml |
| build-cmake-pkg.yml |
| build-linux-cross.yml |
| build-riscv-native.yml |
| build.yml |
| close-issue.yml |
| copilot-setup-steps.yml |
| docker.yml |
| editorconfig.yml |
| gguf-publish.yml |
| labeler.yml |
| pre-tokenizer-hashes.yml |
| python-check-requirements.yml |
| python-lint.yml |
| python-type-check.yml |
| release.yml |
| server.yml |
| update-ops-docs.yml |
| winget.yml |