Releases: damian0815/compel

v2.3.0

05 Oct 18:25

PR #120 Tokenization info, CLS token handling options for longer prompts, bugfixes

Full Changelog: v2.2.1...v2.3.0

v2.2.1

01 Oct 14:44

What's Changed

Fix for #116

v2.2.0

01 Oct 07:58

What's Changed

  • Add support for Flux by adding internal support for T5TokenizerFast and T5EncoderModel. This probably also enables support for other T5-based models; I have only tested Flux.
  • Add CompelForSD, CompelForSDXL, and CompelForFlux classes to help wrangle the different embeddings and prompt intentions (inputs: main, style, negative main, negative style; outputs: embeds, negative embeds, pooled embeds, pooled negative embeds) - see the sketch after this list.
  • Performance improvement when no weights are applied, by bypassing Compel's weighting entirely. This causes slightly different output with FP16; disable it with compel.disable_no_weights_bypass().
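
A hypothetical sketch of the new CompelForFlux helper. The class name and compel.disable_no_weights_bypass() come from this release; the constructor signature and return layout below are guesses modelled on compel's usual pipeline-component pattern, so check the README for the real API.

```python
# Hypothetical sketch only: CompelForFlux is real (new in v2.2.0), but the
# constructor arguments and return layout here are assumptions, not the
# documented API.
import torch
from diffusers import FluxPipeline
from compel import CompelForFlux

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Assumption: the helper pulls the CLIP and T5 tokenizers/encoders from the
# pipeline itself.
compel = CompelForFlux(pipe)

# Optional: force the pre-2.2.0 code path for bit-identical FP16 output when
# a prompt carries no weights (this method is named in the release note).
# compel.disable_no_weights_bypass()

# Weighted prompt syntax as in core compel: "(mountains)1.3" upweights
# that term by 30%.
prompt_embeds, pooled_prompt_embeds = compel("a photo of (mountains)1.3 at dawn")

image = pipe(
    prompt_embeds=prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    num_inference_steps=28,
).images[0]
```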

2.1.1

17 May 21:48

Fix "RuntimeError: token sequence mismatch"; default to SENTENCE splitting

Full Changelog: 2.1.0...2.1.1

v2.1.0

11 May 15:14

Full Changelog: v2.0.3...2.1.0

v2.0.2

20 Aug 15:27

i'm going to try to do git releases. here's the "first".

compel 2.0.2 is now available on pypi:

pip install compel==2.0.2

it fixes an issue with SDXL when you have called enable_sequential_cpu_offload() on your pipeline: you'll need to pass e.g. device='cuda' to compel's __init__.

i also cleaned up the SDXL demo notebook and extended it to demonstrate positive+negative prompts and long-prompt support with the __call__ interface (i.e. compel([positive_prompt, negative_prompt])) - a sketch of that pattern follows.
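
here's a hedged sketch of that pattern - the Compel arguments follow the documented SDXL setup, and pipeline is assumed to be a loaded StableDiffusionXLPipeline with enable_sequential_cpu_offload() enabled:

```python
# Sketch of the SDXL pattern above; assumes `pipeline` is a loaded
# StableDiffusionXLPipeline with enable_sequential_cpu_offload() active.
from compel import Compel, ReturnedEmbeddingsType

compel = Compel(
    tokenizer=[pipeline.tokenizer, pipeline.tokenizer_2],
    text_encoder=[pipeline.text_encoder, pipeline.text_encoder_2],
    returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED,
    requires_pooled=[False, True],
    device="cuda",  # needed when the pipeline uses sequential CPU offload
)

positive_prompt = "a majestic castle, (detailed)1.2"
negative_prompt = "blurry, low quality"

# __call__ with a list batches both prompts; with requires_pooled set,
# it returns (conditioning, pooled) tensors.
conditioning, pooled = compel([positive_prompt, negative_prompt])

image = pipeline(
    prompt_embeds=conditioning[0:1],
    pooled_prompt_embeds=pooled[0:1],
    negative_prompt_embeds=conditioning[1:2],
    negative_pooled_prompt_embeds=pooled[1:2],
).images[0]
```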