Mirror of https://github.com/invoke-ai/InvokeAI (synced 2026-04-07 15:35:07 +02:00)
- `ldm.generate.Generator()` now takes an argument named `max_load_models`, an integer that limits the model cache size. When the cache reaches the limit, it begins purging older models from the cache.
- The CLI takes an argument `--max_load_models`, defaulting to 2. This keeps one model in the GPU and another in the CPU, switching back and forth between them quickly.
- To disable model caching entirely, pass `--max_load_models=1`.
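The eviction behavior described above can be sketched as a small LRU-style cache. This is a minimal illustration, not the actual `model_cache.py` implementation; the `ModelCache` class, `get` method, and `loader` callback here are hypothetical names chosen for the example.

```python
from collections import OrderedDict

class ModelCache:
    """Minimal LRU-style sketch: once more than max_load_models models
    are held, the least recently used one is purged from the cache."""

    def __init__(self, max_load_models=2):
        self.max_load_models = max_load_models
        self._models = OrderedDict()  # name -> loaded model object

    def get(self, name, loader):
        # Cache hit: mark the model as most recently used and return it.
        if name in self._models:
            self._models.move_to_end(name)
            return self._models[name]
        # Cache miss: load the model, then evict the oldest entries
        # until the cache is back within its size limit.
        model = loader(name)
        self._models[name] = model
        while len(self._models) > self.max_load_models:
            self._models.popitem(last=False)  # purge least recently used
        return model
```

With `max_load_models=1` the cache holds only the model currently in use, so every switch reloads from scratch, matching the no-caching behavior described above.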
| Name |
|---|
| generator |
| restoration |
| args.py |
| conditioning.py |
| devices.py |
| image_util.py |
| log.py |
| model_cache.py |
| pngwriter.py |
| prompt_parser.py |
| readline.py |
| seamless.py |
| server_legacy.py |
| server.py |
| txt2mask.py |