@imxuebi imxuebi commented Dec 29, 2025

fix: unify HomoLR predict logic and handle batch_size

Changes:

  1. Refactor HomoLRClient.predict in python/fate/ml/glm/homo/lr/client.py:

    • Always initialize a new FedAVGClient instance with local_mode=True for prediction.
    • This ensures consistent prediction behavior whether predict is called during training (for validation) or as a standalone prediction task, and avoids side effects from leftover training-trainer state.
  2. Update predict component in python/fate/components/components/homo_lr.py:

    • Add handling for batch_size <= 0.
    • A non-positive batch_size is now converted to None (meaning full batch), preventing errors when initializing HomoLRClient.
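The first change follows a common pattern: instead of reusing whatever trainer is left over from fitting, prediction constructs a fresh aggregation client in local mode. The sketch below illustrates that pattern only; the class shape, constructor parameters, and method names are simplified stand-ins, not FATE's actual API.

```python
class FedAVGClient:
    """Illustrative stand-in for an aggregation client (not FATE's real API)."""

    def __init__(self, local_mode=False):
        self.local_mode = local_mode
        self.rounds_trained = 0  # mutable state accumulated during training


class HomoLRClient:
    """Simplified model client showing the 'fresh client per predict' pattern."""

    def __init__(self):
        self.trainer = None

    def fit(self, n_rounds=3):
        # Training uses a federated (non-local) client and mutates its state.
        self.trainer = FedAVGClient(local_mode=False)
        for _ in range(n_rounds):
            self.trainer.rounds_trained += 1

    def predict(self, data):
        # Always build a brand-new client in local mode, so prediction does
        # not depend on whether fit() ran, or on the trainer's internal state.
        predictor = FedAVGClient(local_mode=True)
        assert predictor.local_mode and predictor.rounds_trained == 0
        return [0.0 for _ in data]  # placeholder scores
```

With this structure, `predict` behaves identically before training, mid-training (validation), and in a standalone prediction task, which is the invariant the refactor is after.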
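The batch_size handling in the second change amounts to a small normalization step before the client is constructed. A minimal sketch, assuming None is the sentinel for full-batch (the helper name is hypothetical):

```python
def normalize_batch_size(batch_size):
    """Map any non-positive batch_size to None, meaning 'use the full batch'.

    Positive values and an explicit None pass through unchanged.
    """
    if batch_size is not None and batch_size <= 0:
        return None
    return batch_size
```

For example, a component parameter of `batch_size=-1` would be passed on as `None`, while `batch_size=32` is forwarded as-is.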

Signed-off-by: imxuebi <[email protected]>
