Releases · damian0815/compel
v2.3.0
PR #120: Tokenization info, CLS token handling options for longer prompts, bugfixes
Full Changelog: v2.2.1...v2.3.0
v2.2.1
v2.2.0
What's Changed
- Add support for Flux by adding internal support for `T5TokenizerFast` & `T5EncoderModel`. This probably also enables support for other T5-based models, but I have only tested Flux.
- Add `CompelForSD`, `CompelForSDXL`, and `CompelForFlux` classes to help wrangle the different embeddings and prompt intentions (inputs: main, style, negative main, negative style; outputs: embeds, negative embeds, pooled embeds, pooled negative embeds). See the sketch after this list.
- Performance improvement when no weights are applied, by bypassing Compel's weighting. This causes slightly different output with FP16; disable it with `compel.disable_no_weights_bypass()`.
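The snippet below is a hypothetical sketch of the new wrapper classes. Only the `CompelForFlux` class name and the input/output categories above come from these notes; every constructor argument and attribute name in the sketch is an illustrative assumption, not the actual API.

```python
# HYPOTHETICAL sketch -- only the CompelForFlux class name and the
# inputs/outputs listed above come from these notes; the constructor
# arguments and attribute names are illustrative assumptions.
import torch
from diffusers import FluxPipeline
from compel import CompelForFlux

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)

# assumed: the wrapper reads the CLIP and T5 tokenizers/encoders off the pipeline
compel = CompelForFlux(pipe)

# assumed call shape, mirroring the "main" / "negative main" inputs above
out = compel("an astronaut riding a horse", negative_prompt="blurry")

# assumed attribute names, mirroring the outputs listed above
image = pipe(
    prompt_embeds=out.embeds,
    pooled_prompt_embeds=out.pooled_embeds,
    num_inference_steps=4,
).images[0]
```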
2.1.1
- Fix `RuntimeError: token sequence mismatch`
- Default to SENTENCE splitting for long prompts (see the sketch below)
Full Changelog: 2.1.0...2.1.1
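For context, here is a minimal sketch of long-prompt handling using options documented in compel's README (`truncate_long_prompts` and `pad_conditioning_tensors_to_same_length`). The SENTENCE splitting default mentioned above is applied internally; its option name isn't shown in these notes, so it isn't named here.

```python
# a minimal sketch of long-prompt handling, per compel's README;
# assumes an existing diffusers `pipeline` object
from compel import Compel

compel = Compel(
    tokenizer=pipeline.tokenizer,
    text_encoder=pipeline.text_encoder,
    truncate_long_prompts=False,  # split prompts past 77 tokens instead of truncating
)

conditioning = compel("a very long, detailed prompt that runs past the 77-token limit ...")
negative = compel("a short negative prompt")

# chunked prompts can come back with different lengths; pad them to match
[conditioning, negative] = compel.pad_conditioning_tensors_to_same_length(
    [conditioning, negative]
)
```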
v2.1.0
What's Changed
- Update README.md by @GerbertBless in #101
- Add better options for splitting long prompts by @damian0815 in #112
New Contributors
- @GerbertBless made their first contribution in #101
Full Changelog: v2.0.3...2.1.0
v2.0.2
i'm going to try to do git releases. here's the "first".

compel 2.0.2 is now available on pypi: `pip install compel==2.0.2`

it fixes an issue with SDXL if you have called `enable_sequential_cpu_offload()` on your pipeline. you'll need to pass e.g. `device='cuda'` to compel's `__init__`.

i also cleaned up the SDXL demo notebook and extended it to demonstrate using positive+negative prompts and long prompt support with the `__call__` interface (i.e. `compel([positive_prompt, negative_prompt])`). see the sketch below.
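A sketch of the workaround described above, following compel's documented SDXL usage; the model id and prompts are just examples.

```python
# SDXL + sequential CPU offload: pass device='cuda' to Compel explicitly,
# since offloading moves the text encoders off the GPU between calls
import torch
from diffusers import StableDiffusionXLPipeline
from compel import Compel, ReturnedEmbeddingsType

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipeline.enable_sequential_cpu_offload()

compel = Compel(
    tokenizer=[pipeline.tokenizer, pipeline.tokenizer_2],
    text_encoder=[pipeline.text_encoder, pipeline.text_encoder_2],
    returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED,
    requires_pooled=[False, True],
    device="cuda",  # the fix this release describes
)

# __call__ with [positive, negative] returns batched embeddings:
# index 0 is the positive prompt, index 1 the negative
conditioning, pooled = compel(["a cat playing in a sunlit+ forest", "blurry, distorted"])
image = pipeline(
    prompt_embeds=conditioning[0:1],
    pooled_prompt_embeds=pooled[0:1],
    negative_prompt_embeds=conditioning[1:2],
    negative_pooled_prompt_embeds=pooled[1:2],
    num_inference_steps=30,
).images[0]
```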