Flux new control models #59


Merged
merged 21 commits into main from flux-new-control-models on Dec 4, 2024

Conversation

andreasjansson (Member)

No description provided.

self.redux_up = nn.Linear(redux_dim, txt_in_features * 3, dtype=dtype)
self.redux_down = nn.Linear(txt_in_features * 3, txt_in_features, dtype=dtype)

sd = load_sft(redux_path, device=str(device))
Contributor:

Maybe wrap this in `if redux_path is not None`?

Contributor:

Good thought. I'd rather remove the `| None = os.getenv("FLUX_REDUX")` part of this instead: the way we're using this model, we always pass in `redux_path`, and we should fail without it.
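
For illustration, a minimal sketch of the stricter setup being described here, assuming the surrounding class shape (the class name, constructor arguments, and the `load_state_dict` call are assumptions; `load_sft` is taken to be the usual `safetensors.torch.load_file` alias):

```python
# Hypothetical sketch only: require redux_path explicitly instead of
# defaulting it from os.getenv("FLUX_REDUX"), and fail fast when it is missing.
import torch
from torch import nn
from safetensors.torch import load_file as load_sft  # alias assumed to match the codebase


class ReduxImageEncoder(nn.Module):  # class name assumed for illustration
    def __init__(
        self,
        redux_path: str,
        redux_dim: int,
        txt_in_features: int,
        device: str = "cuda",
        dtype: torch.dtype = torch.bfloat16,
    ) -> None:
        super().__init__()
        if not redux_path:
            # Fail loudly rather than silently skipping the weight load.
            raise ValueError("redux_path is required to load the Redux weights")
        self.redux_up = nn.Linear(redux_dim, txt_in_features * 3, dtype=dtype)
        self.redux_down = nn.Linear(txt_in_features * 3, txt_in_features, dtype=dtype)
        sd = load_sft(redux_path, device=str(device))
        self.load_state_dict(sd, strict=False, assign=True)
```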

torch._dynamo.mark_dynamic(img_ids, 1, min=256, max=8100)
torch._dynamo.mark_dynamic(img_cond, 1, min=256, max=8100)
Contributor:

This is only needed if we're compiling the bf16 controlnets, which I don't think we're doing (unless I'm wrong?). Not a huge deal either way, more of a thought.
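
For context, a hedged sketch of why these hints only matter under `torch.compile`: `mark_dynamic` tells the compiler that a dimension varies, so it does not specialize on one sequence length and recompile for each new resolution. The helper name below is an assumption for illustration.

```python
import torch


def mark_cond_dims_dynamic(img_ids: torch.Tensor, img_cond: torch.Tensor) -> None:
    # Only meaningful when the model consuming these tensors is wrapped in
    # torch.compile; in eager mode the hints are harmless but do nothing useful.
    torch._dynamo.mark_dynamic(img_ids, 1, min=256, max=8100)
    torch._dynamo.mark_dynamic(img_cond, 1, min=256, max=8100)
```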

@@ -386,12 +386,12 @@ def into_bytes(self, x: torch.Tensor, jpeg_quality: int = 99) -> io.BytesIO:
return im

@torch.inference_mode()
def as_img_tensor(self, x: torch.Tensor) -> io.BytesIO:
"""Converts the image tensor to bytes."""
Contributor:

Extremely nitpicky, but it might be worth renaming this method to `as_pil_and_np_image` or something.
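
A rough sketch of what the renamed helper could look like; the method body isn't visible in this diff, so the conversion below (and the return type, which the current `-> io.BytesIO` annotation and docstring would no longer match) is an assumption:

```python
# Illustrative only: a standalone version of the renamed helper. The real
# method's body and return type are not shown in the diff.
import numpy as np
import torch
from PIL import Image


@torch.inference_mode()
def as_pil_and_np_image(x: torch.Tensor) -> tuple[Image.Image, np.ndarray]:
    """Convert a [-1, 1] image tensor of shape (B, C, H, W) to a PIL image and a uint8 array."""
    arr = (
        (x[0].clamp(-1, 1) + 1.0)
        .mul(127.5)
        .to(torch.uint8)
        .permute(1, 2, 0)
        .cpu()
        .numpy()
    )
    return Image.fromarray(arr), arr
```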

@@ -378,64 +440,62 @@ def handle_loras(
self.bf16_lora = lora_weights
self.bf16_lora_scale = lora_scale

def preprocess(self, aspect_ratio: str, megapixels: str = "1") -> Tuple[int, int]:
Contributor:

Need to propagate this name change to the `HotswapPredictor` as well.

predict.py (outdated diff)

  timesteps = get_schedule(
-     num_inference_steps, (x.shape[-1] * x.shape[-2]) // 4, shift=self.shift
+     num_inference_steps,
+     # TODO: this has changed in upstream flux to x.shape[1]
Contributor:

Can probably delete this comment.
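
For reference, a hedged sketch of the two ways of computing the image sequence length passed to `get_schedule`; the import path and the packed/unpacked latent shapes are assumptions based on upstream flux:

```python
from flux.sampling import get_schedule  # upstream flux helper, path assumed


def schedule_for(x, num_inference_steps: int, shift: bool) -> list[float]:
    # For an unpacked latent (B, C, H/8, W/8) the image sequence length is
    # (H/8 * W/8) // 4, i.e. the number of 2x2 latent patches; once the latent
    # has been packed into (B, seq, dim), the same number is simply x.shape[1].
    if x.ndim == 4:
        image_seq_len = (x.shape[-1] * x.shape[-2]) // 4
    else:
        image_seq_len = x.shape[1]
    return get_schedule(num_inference_steps, image_seq_len, shift=shift)
```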

@daanelson (Contributor) left a comment:

Generally great. One update is needed to `HotswapPredictor`; the rest is cosmetic. I should be able to push that in a bit.

@daanelson self-requested a review on December 2, 2024 at 22:07
@daanelson merged commit 02f32be into main on Dec 4, 2024
1 check passed
@daanelson deleted the flux-new-control-models branch on February 6, 2025 at 19:15