Mirror of https://github.com/invoke-ai/InvokeAI, synced 2026-03-02 13:09:06 +01:00
* Add FLUX.2 LOKR model support (detection and loading) (#88)
  * Fix BFL LOKR models being misidentified as AIToolkit format
  * Fix alpha key warning in LOKR QKV split layers

  Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
  Co-authored-by: lstein <111189+lstein@users.noreply.github.com>

* Fix BFL→diffusers key mapping for non-block layers in FLUX.2 LoRA/LoKR

  BFL's FLUX.2 model uses different names than diffusers' Flux2Transformer2DModel for top-level modules (embedders, modulations, output layers). The existing conversion only handled block-level renames (double_blocks→transformer_blocks), causing "Failed to find module" warnings for non-block LoRA keys like img_in, txt_in, modulation.lin, time_in, and final_layer.

  Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
  Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
  Co-authored-by: Alexander Eichhorn <alex@eichhorn.dev>
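The fix described above boils down to prefix-based key remapping on the LoRA state dict: keys beginning with a BFL module name are rewritten to the corresponding diffusers module name, and the table must cover non-block prefixes (img_in, txt_in, etc.) in addition to the block rename. The sketch below illustrates the general shape only; the specific diffusers prefixes shown are hypothetical placeholders, and the real mapping lives in InvokeAI's lora_conversions module.

```python
# Minimal sketch of BFL -> diffusers key remapping for a LoRA state dict.
# Prefix pairs marked "placeholder" are assumptions for illustration; the
# actual FLUX.2 table is defined in InvokeAI's lora_conversions package.

def convert_bfl_keys(state_dict: dict, prefix_map: dict[str, str]) -> dict:
    """Rewrite keys whose prefix appears in prefix_map; pass others through."""
    converted = {}
    for key, value in state_dict.items():
        for bfl_prefix, diffusers_prefix in prefix_map.items():
            if key.startswith(bfl_prefix):
                key = diffusers_prefix + key[len(bfl_prefix):]
                break  # apply at most one rename per key
        converted[key] = value
    return converted


# Old behavior: only the block-level rename was handled, so non-block
# keys like "img_in.*" fell through unconverted and failed module lookup.
BLOCK_ONLY = {"double_blocks.": "transformer_blocks."}

# Fixed behavior: non-block prefixes are mapped as well (names below are
# placeholders, not the verified Flux2Transformer2DModel module names).
WITH_NON_BLOCK = {
    "double_blocks.": "transformer_blocks.",
    "img_in.": "x_embedder.",          # placeholder diffusers name
    "txt_in.": "context_embedder.",    # placeholder diffusers name
}
```

With the extended table, a key such as `img_in.lora_A.weight` is rewritten before module lookup instead of triggering a "Failed to find module" warning.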
Contents of this directory:

- layers
- lora_conversions
- __init__.py
- layer_patcher.py
- model_patch_raw.py
- pad_with_zeros.py