As part of the Kubeflow Training V2 work, we should design and implement a custom Trainer for fine-tuning the LLMs that we plan to support via TrainingRuntimes in Kubeflow upstream.
We should discuss whether the LLM Trainer implementation should use native PyTorch APIs or HuggingFace Transformers (rough sketches of both paths are below).
The Trainer should allow users to configure LoRA, QLoRA, FSDP, and other important fine-tuning options.
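For reference, a rough sketch of the native-PyTorch path: wrapping a causal LM with torch's FSDP directly, assuming `torchrun` (or the PyTorchJob launcher) sets up the process group. The model name and decoder-layer class here are illustrative placeholders, not proposed defaults.

```python
# Native-PyTorch sketch: shard a HuggingFace causal LM with torch FSDP.
# Assumes torchrun provides RANK/WORLD_SIZE/MASTER_ADDR env vars.
import functools

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from transformers import AutoModelForCausalLM
from transformers.models.llama.modeling_llama import LlamaDecoderLayer

dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # placeholder

# Shard at the decoder-layer boundary so each rank holds a slice of the
# parameters, gradients, and optimizer state.
wrap_policy = functools.partial(
    transformer_auto_wrap_policy,
    transformer_layer_cls={LlamaDecoderLayer},
)
model = FSDP(
    model,
    auto_wrap_policy=wrap_policy,
    device_id=torch.cuda.current_device(),
)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
# ...standard PyTorch training loop over a distributed DataLoader...
```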
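And a minimal sketch of the HuggingFace-based path, showing the three knobs mentioned above: LoRA via `peft`, QLoRA via `bitsandbytes` 4-bit loading, and FSDP via `TrainingArguments`. The model name and hyperparameter values are illustrative only, not the Trainer's actual defaults.

```python
# HuggingFace-based sketch: expose LoRA, QLoRA, and FSDP as Trainer config.
import torch
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    BitsAndBytesConfig,
    TrainingArguments,
)

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder model

# QLoRA: load the frozen base weights in 4-bit NF4 to cut GPU memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config
)

# LoRA: train small low-rank adapters instead of the full weights.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# FSDP: shard parameters and optimizer state across the PyTorchJob ranks.
training_args = TrainingArguments(
    output_dir="/workspace/output",  # placeholder path
    per_device_train_batch_size=1,
    fsdp="full_shard auto_wrap",
)
```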
Useful resources:
Part of: #2170
cc @saileshd1402 @deepanker13 @kubeflow/wg-training-leads
Love this feature?
Give it a 👍. We prioritize the features with the most 👍.