Mirror of https://github.com/invoke-ai/InvokeAI
- `ldm.generate.Generator()` now takes an argument named `max_load_models`. This is an integer that limits the model cache size; when the cache reaches the limit, older models are purged from it (see the usage sketch below).
- The CLI takes an argument `--max_load_models`, which defaults to 2. This keeps one model in the GPU and the other in the CPU, switching back and forth between them quickly.
- To disable model caching entirely, pass `--max_load_models=1`.
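A minimal usage sketch of the new argument, assuming the `Generator` constructor can otherwise be called with its defaults (in a real checkout it will likely need model and config paths as well; those are omitted here):

```python
# Sketch based on the note above; constructor signature beyond
# max_load_models is assumed, not taken from the source.
from ldm.generate import Generator

# Cache at most two models: the active one on the GPU, the previous
# one parked on the CPU, so switching back and forth is fast.
gr = Generator(max_load_models=2)

# Disable caching entirely: every model switch becomes a full reload.
# gr = Generator(max_load_models=1)

# CLI equivalent (per the note above): invoke.py --max_load_models=2
```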
| Name |
|---|
| orig_scripts |
| dream.py |
| images2prompt.py |
| invoke.py |
| legacy_api.py |
| merge_embeddings.py |
| preload_models.py |
| sd-metadata.py |