Mirror of https://github.com/ggerganov/llama.cpp, synced 2026-04-23
Two bugs in `server_models::load()` that affect router-mode reliability:

**Bug 1: Deadlock when the child process crashes**

When a child process is killed (e.g., SIGKILL from OS code-signature validation), the monitoring thread deadlocks on `stopping_thread.join()`: the stopping thread's wait predicate (`is_stopping`) is never satisfied because the model name was never inserted into `stopping_models`. `update_status()` is never reached and the model stays stuck in the LOADING state permanently.

Fix: extend the stopping thread's wait predicate to also wake when the child process is no longer alive (`!subprocess_alive()`). When woken by a dead child, the thread skips the shutdown sequence and returns immediately. The original `stopping_models.erase()` logic is preserved for normal unloads.

**Bug 2: TOCTOU race bypasses `--models-max` (ref #20137)**

`unload_lru()` is called outside the mutex, and `load()` acquires the lock only afterward. Under concurrent requests, multiple threads can observe free capacity and all proceed to load, exceeding the limit.

Fix: re-check capacity under the lock after `unload_lru()` returns. If another thread filled the slot in the window between `unload_lru()` and the lock acquisition, reject the load with an error instead of silently exceeding the limit.