Add bandwidth limit feature for HTTP and S3 datasources #36
base: main
Conversation
…--bwlimit

Co-authored-by: kebe7jun <[email protected]>
Do you have any solutions for other types of Datasets?
```go
// - PVC:
// - NFS:
// - CONDA: requirements.txt, environment.yaml
// - REFERENCE:
// - HUGGING_FACE: repo, repoType, endpoint, include, exclude, revision
// - MODEL_SCOPE: repo, repoType, include, exclude, revision
// bandwidthLimit: Bandwidth limit for downloads in KiB/s, or use suffix B|K|M|G|T|P (e.g. "1M" for 1 MiB/s, "10M" for 10 MiB/s)
```
Do you have any solutions for other types of Datasets?
Same as above: bandwidth limiting can be extended to other dataset types beyond HTTP and S3, though implementation complexity varies. The current approach leverages rclone's built-in `--bwlimit` flag, which is why it is straightforward for HTTP and S3. Other types would require external bandwidth-limiting tools such as `trickle` or traffic control.
```yaml
region: "us-east-1"
provider: "AWS"
bandwidthLimit: "5M" # Limit download bandwidth to 5 MiB/s
secretRef: s3-credentials
```
Yes, bandwidth limiting can be extended to other dataset types, though the implementation complexity varies:

- Currently supported: HTTP and S3 (leveraging rclone's built-in `--bwlimit` flag)
- Possible future extensions: Git, HuggingFace, and ModelScope, using an external tool such as `trickle` or traffic control
- Not applicable: PVC, NFS, REFERENCE (no network downloads involved)

The HTTP/S3 implementation is straightforward because rclone has built-in bandwidth limiting. For other types, we'd need external tools, which adds complexity and dependencies. Would you like me to implement any specific dataset type?
Co-authored-by: nekomeowww <[email protected]>
Can you help me add the feature to limit bandwidth using `trickle`?
…ope datasets

- Add bandwidth limiting support using `trickle` for Git, HuggingFace, and ModelScope dataset types
- Create bandwidth conversion utility to convert rclone format to trickle format
- Wrap network commands with trickle when the bandwidthLimit option is specified
- Update API documentation to include bandwidthLimit support for the new dataset types
- Add comprehensive tests for bandwidth limiting functionality
- Create sample configuration files showing bandwidth limiting usage

The implementation leverages trickle's ability to wrap any command with bandwidth throttling, providing a consistent approach across dataset types that, unlike HTTP/S3 with rclone, have no native bandwidth limiting capabilities.

Co-authored-by: kebe7jun <[email protected]>
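The bandwidth conversion utility mentioned in the commit could look roughly like the sketch below. This is a hypothetical illustration, not the project's actual code: the function name `convertToTrickleKBps` and the simplification of treating KiB/s and trickle's KB/s as equivalent are assumptions.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// convertToTrickleKBps converts an rclone-style bandwidth limit (KiB/s by
// default, or with a B|K|M|G|T|P suffix) into trickle's integer KB/s format.
// Hypothetical sketch of the conversion utility described above.
func convertToTrickleKBps(limit string) (int, error) {
	limit = strings.TrimSpace(limit)
	if limit == "" {
		return 0, fmt.Errorf("empty bandwidth limit")
	}
	multipliers := map[byte]float64{
		'B': 1.0 / 1024, // bytes/s expressed in KiB/s
		'K': 1,
		'M': 1024,
		'G': 1024 * 1024,
		'T': 1024 * 1024 * 1024,
		'P': 1024 * 1024 * 1024 * 1024,
	}
	mult := 1.0 // no suffix: rclone interprets the value as KiB/s
	last := limit[len(limit)-1]
	if m, ok := multipliers[last]; ok {
		mult = m
		limit = limit[:len(limit)-1]
	}
	value, err := strconv.ParseFloat(limit, 64)
	if err != nil {
		return 0, fmt.Errorf("invalid bandwidth limit %q: %w", limit, err)
	}
	// Treat KiB/s and trickle's KB/s as equivalent for simplicity.
	return int(value * mult), nil
}

func main() {
	kbps, _ := convertToTrickleKBps("5M")
	fmt.Println(kbps)
}
```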
I've implemented bandwidth limiting using `trickle` for Git, HuggingFace, and ModelScope datasets.

The bandwidth limit format is consistent with the existing HTTP/S3 implementation (KiB/s with B|K|M|G|T|P suffixes) and is automatically converted to trickle's format.

Usage example:

```yaml
spec:
  source:
    type: GIT
    uri: https://github.com/user/repo.git
    options:
      bandwidthLimit: "5M" # Limit to 5 MiB/s
```

The feature is fully tested and includes sample configuration files. Commit: ebdb19d
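The wrapping step described above (prepending trickle to network commands when a limit is set) can be sketched as follows. The helper name is hypothetical; trickle's `-s` flag runs it in standalone mode and `-d` sets the download limit in KB/s.

```go
package main

import (
	"fmt"
	"strconv"
)

// wrapWithTrickle prepends a trickle invocation to a command's argv when a
// positive download limit (in KB/s) is configured; otherwise the command is
// returned unchanged. Hypothetical sketch, not the project's actual helper.
func wrapWithTrickle(argv []string, downloadKBps int) []string {
	if downloadKBps <= 0 {
		return argv
	}
	// trickle -s runs standalone (no trickled daemon); -d throttles downloads.
	prefix := []string{"trickle", "-s", "-d", strconv.Itoa(downloadKBps)}
	return append(prefix, argv...)
}

func main() {
	fmt.Println(wrapWithTrickle([]string{"git", "clone", "https://github.com/user/repo.git"}, 5120))
}
```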
Is it possible to integrate trickle directly into the entrypoint of the data-loader Docker image? That way, there would be no need to wrap each command when executing it, and the speed limit value could be passed through an environment variable.
Co-authored-by: kebe7jun <[email protected]>
Yes! I've integrated trickle directly into the data-loader Docker entrypoint.

The bandwidth limit value is automatically converted from rclone format (KiB/s with B|K|M|G|T|P suffixes) to trickle's KB/s format. Users still specify bandwidth limits the same way in their Dataset configurations; the change is transparent.

Commit: 333b1ef
This PR adds bandwidth limiting capability to prevent data downloads from consuming all available bandwidth and affecting other services in production environments.
Changes
Core Implementation
- HTTP datasource: `bandwidthLimit` option support that adds the `--bwlimit` flag to rclone commands
- S3 datasource: `bandwidthLimit` option support that adds the `--bwlimit` flag to rclone commands

Usage
Users can now specify a bandwidth limit in their Dataset configurations:
Bandwidth Limit Format
The bandwidth limit follows rclone's format: `"1M"` (1 MiB/s), `"500K"` (500 KiB/s), `"2G"` (2 GiB/s)

Testing
- The `--bwlimit` flag is correctly added to rclone commands

Implementation Details
The feature leverages rclone's built-in `--bwlimit` flag for bandwidth throttling:

- Reads the `bandwidthLimit` option from the Dataset configuration
- Appends `--bwlimit <value>` to the rclone command arguments when present

This approach is minimal, surgical, and leverages existing rclone capabilities for reliability.
Fixes #3.