
v3.3.0 - TwinFlow, LayerSync, and Flux.2 edit training


Released by @bghira on 16 Dec 21:51 · 2e018b5

Features

  • TwinFlow, a distillation method that works on most flow-matching architectures and converges in far less time than typical distillation approaches
  • LayerSync, a self-regularisation method for practically all transformer models supported in SimpleTuner
  • CREPA can now combine with LayerSync to self-regularise instead of relying on DINO features
  • Flux.2 can now accept conditioning datasets
  • Custom flow-matching timesteps can now be provided for training, allowing configuration of "Glance"-style training runs (see the invocation sketch after this list)
  • WebUI: better path handling for datasets; sensible defaults are now set instead of requiring the user to figure them out
  • CLI: when configuring dataset cache directories, you can now use {id} and {output_dir} in addition to {model_family} to build dynamic paths that adjust automatically based on these attributes (see the config sketch after this list)
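
The custom-timestep feature from #2160 can be sketched as a training invocation. This is hypothetical: the --flow_custom_timesteps flag itself is confirmed by the PR, but the entry point and the value format shown (a comma-separated list of timesteps in [0, 1]) are assumptions for illustration; consult the Glance example shipped with the PR for the exact syntax.

```bash
# Hypothetical invocation. --flow_custom_timesteps comes from PR #2160;
# the value format (comma-separated timesteps in [0, 1]) and the
# train.py entry point are assumptions for illustration only.
python train.py \
  --model_family=flux \
  --flow_custom_timesteps="0.25,0.5,0.75,1.0"
```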

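The cache-directory placeholders can be combined in a dataloader config along these lines. This is a minimal sketch assuming SimpleTuner's multidatabackend.json layout; the dataset entries and values are illustrative, and only the {id}, {output_dir}, and {model_family} placeholders are the feature described above.

```json
[
  {
    "id": "portraits",
    "type": "local",
    "instance_data_dir": "/data/portraits",
    "cache_dir_vae": "{output_dir}/cache/vae/{model_family}/{id}"
  },
  {
    "id": "text-embeds",
    "dataset_type": "text_embeds",
    "type": "local",
    "default": true,
    "cache_dir": "{output_dir}/cache/text/{model_family}"
  }
]
```
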
Bugfixes

  • WebUI: resolved a search-box race condition that prevented items from highlighting or subsections from expanding

What's Changed

  • TwinFlow self-directed distillation by @bghira in #2159
  • (#2136) add --flow_custom_timesteps with Glance "distillation" example by @bghira in #2160
  • flux2: adjust ComfyUI LoRA export format to use their custom keys instead of the generic LoRA layout by @bghira in #2162
  • [webUI] refactoring validation and default paths for text embed and VAE caches by @bghira in #2163
  • flux2: support conditioning datasets by @bghira in #2164
  • fix search box race condition that prevented expanding subsection or highlighting results by @bghira in #2165
  • LayerSync + CREPA adaptation by @bghira in #2161
  • merge by @bghira in #2166

Full Changelog: v3.2.3...v3.3.0