- Support all quantized linear layers via automatic detection
- Support Flux in kohya-ss/sd-scripts
- Support wildcard matching for selecting layers in presets
- Support Flux
- Support any quantized linear layer, such as those from torchao
- Refined the Functional API to support drop-in replacement between different algorithms
- Support wildcards for name matching in presets (see the sketch after this list)
- Fix bugs in the loading function of BOFT/OFT
- Fix bugs in the loading function of LoKr
- Fix wrong behaviour of weight decomposition when multiplier != 1
- Improve unit-test coverage
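For illustration, here is a minimal sketch of wildcard-based layer selection through a preset, using the standalone `LycorisNetwork.apply_preset` entry point from the README; the `target_name` key and the fnmatch-style patterns are assumptions based on the preset documentation, not a verified schema.

```python
# A minimal sketch of wildcard layer selection via a preset (assumed schema).
from lycoris import LycorisNetwork

# Hypothetical preset: apply LyCORIS only to modules whose names match
# these fnmatch-style wildcard patterns.
LycorisNetwork.apply_preset({"target_name": ["*attn*", "*mlp*"]})
```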
We rebuilt the whole library with new class definitions and a brand-new Functional API system, and removed many redundant/unused modules. Since the whole library has changed significantly, we decided to call this release 3.0.0, a new major version.
- New Module API
- Add Parametrize API
- Add Functional API
- LoCon/LoHa/LoKr/Diag-OFT/BOFT only.
- Remove optional deps from install_requires
- Remove many redundant/deprecated modules
- Better testing
- HunYuan DiT Support (PR in kohya-ss/sd-scripts)
- LyCORIS now has a consistent API across algorithms, such as the `bypass_forward_diff` and `get_diff_weight` methods. Developers of other projects can utilize these APIs to do more tricks or to integrate LyCORIS into their frameworks more easily (a usage sketch appears after this list).
- LyCORIS now has a parametrize API, which utilizes `torch.nn.utils.parametrize.register_parametrization` to directly patch individual parameters. This can be useful for MHA layers or other tricky modules (a sketch of the mechanism follows below).
  - Currently only 2~5D tensors are supported. LyCORIS will pretend these weights belong to a Linear/Conv1,2,3d layer and then send them into the corresponding LyCORIS modules.
  - More native implementations or more detailed control will be added in the future.
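To illustrate the underlying mechanism (not LyCORIS's actual module), here is a minimal sketch of patching a single parameter with `torch.nn.utils.parametrize.register_parametrization`; `LowRankDelta` is a hypothetical stand-in for a LyCORIS module.

```python
import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

class LowRankDelta(nn.Module):
    """Hypothetical stand-in for a LyCORIS module: adds a low-rank update."""
    def __init__(self, out_features, in_features, rank=4):
        super().__init__()
        self.down = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.up = nn.Parameter(torch.zeros(out_features, rank))

    def forward(self, weight):  # receives the original parameter tensor
        return weight + self.up @ self.down

layer = nn.Linear(16, 16)
parametrize.register_parametrization(layer, "weight", LowRankDelta(16, 16))
y = layer(torch.randn(2, 16))  # the forward pass now sees the patched weight
```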
- LyCORIS now has a functional API. Developers who prefer a functional style over Module-based usage can utilize this feature.
  - The functional API also suits developers who don't want to introduce new dependencies: just copy-paste the source code and use it (under the Apache-2.0 license, direct copy-paste is totally allowed).
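Here is a self-contained sketch of the API contract described above, built around a hypothetical `TinyAdapter` class; only the method names `get_diff_weight` and `bypass_forward_diff` come from these notes, everything else is an assumption for illustration.

```python
import torch
import torch.nn as nn

class TinyAdapter(nn.Module):
    """Hypothetical module showing the consistent-API contract."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        self.down = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.up = nn.Parameter(torch.zeros(base.out_features, rank))

    def get_diff_weight(self) -> torch.Tensor:
        # Materialize the weight delta dW = up @ down (merge-mode view).
        return self.up @ self.down

    def bypass_forward_diff(self, x: torch.Tensor) -> torch.Tensor:
        # Adapter-style branch: (up @ down) x without touching base.weight.
        return x @ self.down.T @ self.up.T

base = nn.Linear(8, 8)
adapter = TinyAdapter(base)
x = torch.randn(2, 8)
y = base(x) + adapter.bypass_forward_diff(x)        # bypass-mode forward
w_merged = base.weight + adapter.get_diff_weight()  # merge-mode weight
```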
- Add support for Conv1d and Conv3d modules in LoCon/LoHa/LoKr/Full/OFT/BOFT/GLoRA (not all algorithms in LyCORIS support them; you may receive an error when applying an unsupported algorithm). Inherited modules are supported as well (for example, `LoRACompatibleConv` or `LoRACompatibleLinear` from huggingface/diffusers).
- HunYuan DiT support.
- Drop dependencies related to kohya-ss/sd-scripts:
  - We now treat kohya-ss/sd-scripts as an optional dependency, which means `transformers`, `diffusers`, and anything related to kohya are all optional deps now.
- The definitions of dropout and rank_dropout have changed for each algorithm. Since some concepts of the original rank_dropout in kohya-ss/sd-scripts' LoRA are hard to apply to other algorithms, we can only design the dropout for each module separately.
- `apply_max_norm` issues are all fixed.
- DyLoRA, (IA)^3, and GLoRA have been rewritten and now support Linear/Conv1,2,3d.
- (IA)^3, GLoRA, Diag-OFT, and BOFT are now supported in `create_lycoris_from_weights`; `lycoris.kohya.create_network_from_weights` supports them as well.
- Fix wrong implementation of BOFT.
- `create_lycoris_from_weights` and `create_network_from_weights` now have correct logging info.
- `get_module` and `make_module` are moved into the modules' API.
- HCP modules are dropped. We will wait until HCP has a better wrapper API.
- HyperNetwork-related modules like `hypernet/`, `attention.py`, and `lilora.py` are removed.
- The incomplete GLoKr is removed.
- Code copied from kohya-ss/sd-scripts is removed. The original sd-scripts repo is now an optional dependency.
- DoRA (weight-decomposed LoRA)
- Weight decomposition for LoHa and LoKr (a.k.a. DoHa/DoKr)
  - DoRA/DoHa/DoKr will require a smaller learning rate! (A conceptual sketch follows this list.)
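To make the "weight decomposition" idea concrete, here is a conceptual sketch of a DoRA-style update, following the DoRA paper with the per-output-row norm convention used by common implementations; this is not LyCORIS's exact code. The updated weight is renormalized to a learned magnitude, which changes the effective gradient scale and is one intuition for why a smaller learning rate helps.

```python
import torch

def dora_style_weight(W: torch.Tensor, dW: torch.Tensor, m: torch.Tensor):
    """Conceptual DoRA-style recomposition: magnitude * direction.

    W:  frozen base weight, shape (out, in)
    dW: learned low-rank delta, shape (out, in)
    m:  learned magnitude vector, shape (out, 1)
    """
    V = W + dW                            # updated direction (unnormalized)
    V_norm = V.norm(dim=1, keepdim=True)  # per-output-row norm (convention varies)
    return m * V / V_norm                 # renormalize, then rescale by m

W = torch.randn(8, 8)
dW = torch.zeros(8, 8)
m = W.norm(dim=1, keepdim=True)          # typical init: magnitude of the base weight
W_new = dora_style_weight(W, dW, m)      # equals W when dW == 0
```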
- Support "bypass" (a.k.a. adapter) mode for LoHa/LoKr/OFT/BOFT
- LoHa will require 2xFLOPs since we rebuild full diff weight and then do one more forward.
- LoKr, OFT, BOFT should be more efficient than LoHa in bypass mode.
- Support bnb 8bit/4bit Linear layer (a.k.a. QLyCORIS) with LoHa/LoKr/OFT/BOFT.
- This will force module to enable bypass mode.
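To make the cost argument concrete, here is a conceptual sketch of bypass mode for a LoHa-style delta (my illustration, not the library's actual code): the frozen, possibly quantized base layer is never modified; instead the full diff weight is rebuilt from the Hadamard-product factors and applied in one extra forward pass, which is where the 2x FLOPs for LoHa come from.

```python
import torch
import torch.nn.functional as F

def loha_diff_weight(w1a, w1b, w2a, w2b):
    # LoHa delta: (w1a @ w1b) * (w2a @ w2b), a Hadamard product
    # of two low-rank matrices.
    return (w1a @ w1b) * (w2a @ w2b)

def bypass_forward(x, base_layer, w1a, w1b, w2a, w2b):
    dW = loha_diff_weight(w1a, w1b, w2a, w2b)  # rebuild the full diff weight...
    return base_layer(x) + F.linear(x, dW)     # ...then one extra forward with it

base = torch.nn.Linear(8, 8)  # could be a bnb 8/4-bit layer; it is never touched
w1a, w1b = torch.randn(8, 4), torch.randn(4, 8)
w2a, w2b = torch.randn(8, 4), torch.randn(4, 8)
y = bypass_forward(torch.randn(2, 8), base, w1a, w1b, w2a, w2b)
```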
- Refine some details about code quality, based on the report from GitRoll. (Thank you, GitRoll!)
- Remove redundant calculation in BOFT
- rank_dropout has been temporarily removed from OFT/BOFT until we figure out how to apply it.
- Fix bugs in LoKr when `lokr_w1_a` does not exist.
- Fix bugs in the conversion scripts.
- Faster, better extraction script
- Support image generation in kohya-ss/sd-scripts
- Support regex names in kohya-ss/sd-scripts
- Support resume on:
  - full
  - loha
  - oft
  - boft
- Add a logger to LyCORIS
- Update HCP convert for the case where only UNet or TE is trained.
- Change arg names for conversion scripts.
- Fix wrong TE prefix in merge scripts.
- Fix warnings and confusing logging.
- Fix bugs in the full module.
- Related: fix bugs in `stable-diffusion-webui/extensions-builtin/Lora`
  - The PR
- Support merging SDXL LoRAs which were trained on plain diffusers with Kohya's LoRA implementation.
  - Such LoRAs can be found in LECO and other similar projects.
- Refactor the batch convert scripts for pivotal bundle and hcp.
- Rename the class `lycoris.kohya.LycorisNetwork` to `lycoris.kohya.LycorisNetworkKohya` to avoid confusion.
- Fix bugs in the merge scripts for the Norm and LoKr modules.
- Fix bugs in the scaled weight norms of OFT.
- Fix bugs in the extraction scripts for SDXL.
- Fix a bug in the full module which consumed 2x VRAM.
- Fix bugs in `create_network_from_weights` which broke the "resume" feature for SDXL.
- Start supporting HCP-Diffusion (the reason this version is named "2.0.0")
  - LyCORIS now supports the LoHa/LoKr/Diag-OFT algorithms in HCP-Diffusion
  - Add pivotal tuning utilities
  - Add HCP convert utilities
  - We have no plan at this time to support full/lora and train_norms, since HCP can do them natively
- Add Diag-OFT modules
- Add standalone usage support (see the sketch after this list)
  - Can wrap any PyTorch module which contains Linear/Conv2d/LayerNorm/GroupNorm modules
  - Will support more modules in the future
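A minimal sketch of what standalone wrapping might look like, assuming the `create_lycoris` entry point and `apply_to` method behave as in the project README; the exact signature and argument names are assumptions here.

```python
import torch
import torch.nn as nn
from lycoris import create_lycoris

# Any plain PyTorch module containing Linear/Conv2d/LayerNorm/GroupNorm layers.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.GroupNorm(4, 16),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 10),
)

net = create_lycoris(model, 1.0, linear_dim=16, linear_alpha=2.0, algo="lokr")
net.apply_to()                   # inject LyCORIS modules into `model`
out = model(torch.randn(1, 3, 8, 8))
params = list(net.parameters())  # pass these to your optimizer
```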
- Add SDXL support to the merge script
- Add SDXL support to extract-locon
- More efficient (speed/VRAM) implementation of the full module
- Better implementation of custom state_dict
- Fix errors in dropouts
- Fix errors in apply_max_norms
- Fix errors in resume
- Add norm modules (for training LayerNorm and GroupNorm, which should be good for styles)
- Add full modules (so you can "natively finetune" with LyCORIS now; this should make it convenient to try different weights)
- Add preset config system
- Add custom config system
- Support resuming from models
- The merge script supports norm and full modules
- Fix errors with optional requirements
- Fix errors with unnecessary imports
- Fix wrong factorization behaviours
- Update utils in kohya-ss/sd-scripts
- Add config/preset system
- Improve the project structure
- Reimplement the weight initialization methods
- Implement HyperDreamBooth in LyCORIS
- Better file structure
- Rearrange the version format; the previous 0.1.7 should be 1.7.0
- Fix the bug in scale weight norm
- Add support for rank_dropout and module_dropout on LoCon/LoHa/LoKr
- Add support for scale_weight_norms on LoCon/LoHa/LoKr
- SDXL will be supported in 0.1.8 (you can follow the dev branch)
- Add the DyLoRA and (IA)^3 algorithms
- CP decomposition is now disabled by default
- Add 4 more layers to train (conv_in/out, time_embedding)
- Add a CP-decomposition implementation for convolution layers (see the sketch below)
  - Both LoRA (LoCon) and LoHa can use this more parameter-efficient decomposition
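To illustrate the idea with a conceptual sketch (not the exact LyCORIS decomposition): the k×k convolution is factored through a small channel rank, so most parameters live in cheap 1×1 projections.

```python
import torch.nn as nn

def cp_style_conv(in_ch: int, out_ch: int, k: int, rank: int) -> nn.Sequential:
    """Factor a k*k conv update through a small channel rank."""
    return nn.Sequential(
        nn.Conv2d(in_ch, rank, 1, bias=False),                 # 1x1 down-projection
        nn.Conv2d(rank, rank, k, padding=k // 2, bias=False),  # small k x k core
        nn.Conv2d(rank, out_ch, 1, bias=False),                # 1x1 up-projection
    )
```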
- Add sparse bias for extracted LoRA
  - Will be added to training in the future (maybe)
- Change the weight initialization method in LoHa
  - Use a lower std to avoid the loss going high or NaN when using a normal lr (like 0.5 in D-Adaptation)