Mirror of https://github.com/invoke-ai/InvokeAI (synced 2026-03-02 04:59:06 +01:00)
Add FLUX.2 LOKR model support (detection and loading) (#88)

* Fix BFL LOKR models being misidentified as AIToolkit format
* Fix alpha key warning in LOKR QKV split layers
* Fix BFL→diffusers key mapping for non-block layers in FLUX.2 LoRA/LoKR

BFL's FLUX.2 model uses different names than diffusers' Flux2Transformer2DModel for top-level modules (embedders, modulations, and output layers). The existing conversion handled only block-level renames (double_blocks→transformer_blocks), causing "Failed to find module" warnings for non-block LoRA keys such as img_in, txt_in, modulation.lin, time_in, and final_layer.

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
Co-authored-by: Alexander Eichhorn <alex@eichhorn.dev>
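The commit describes extending a key-rename pass so it covers top-level modules as well as block-level ones. A minimal sketch of that idea is below; the mapping tables are assumptions for illustration (the diffusers-side names such as `x_embedder` and `context_embedder` are guesses, not the repository's actual conversion tables).

```python
# Illustrative sketch of BFL -> diffusers state-dict key renaming for FLUX.2
# LoRA/LoKR keys. The target names below are ASSUMED for illustration; the
# real mapping lives in the repository's LoRA conversion utilities.

# Hypothetical top-level (non-block) renames, the kind this commit adds:
NON_BLOCK_RENAMES = {
    "img_in": "x_embedder",        # assumed diffusers name
    "txt_in": "context_embedder",  # assumed
    "time_in": "time_embed",       # assumed
    "final_layer": "proj_out",     # assumed
}

# Block-level renames that the pre-existing conversion already handled:
BLOCK_RENAMES = {
    "double_blocks": "transformer_blocks",
}


def convert_key(bfl_key: str) -> str:
    """Rename the leading module of a BFL-format key to its diffusers name.

    Keys whose head is not in either table pass through unchanged.
    """
    parts = bfl_key.split(".")
    head = parts[0]
    if head in BLOCK_RENAMES:
        parts[0] = BLOCK_RENAMES[head]
    elif head in NON_BLOCK_RENAMES:
        parts[0] = NON_BLOCK_RENAMES[head]
    return ".".join(parts)
```

With only `BLOCK_RENAMES` in place, a key like `img_in.weight` would pass through untranslated and fail module lookup on the diffusers model, which is the "Failed to find module" symptom the commit fixes.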
| File |
|---|
| __init__.py |
| base.py |
| clip_embed.py |
| clip_vision.py |
| controlnet.py |
| external_api.py |
| factory.py |
| flux_redux.py |
| identification_utils.py |
| ip_adapter.py |
| llava_onevision.py |
| lora.py |
| main.py |
| qwen3_encoder.py |
| siglip.py |
| spandrel.py |
| t2i_adapter.py |
| t5_encoder.py |
| textual_inversion.py |
| unknown.py |
| vae.py |