
Update prefetcher to increment client read window linearly with each read #1546


Draft
wants to merge 2 commits into main

Conversation

mansi153
Contributor

Rebased version of #1453, opened as a PR to run CI benchmarks and analyse performance/behaviour.

Mountpoint's S3 client has a backpressure mechanism that controls how much data is fetched ahead of the reader. This change updates how Mountpoint's prefetcher signals to Mountpoint's S3 client (which uses the AWS CRT internally) to fetch more data ahead of where a consuming application is reading, in order to accelerate throughput.

Before this change, Mountpoint would wait for 50% of the existing window to be consumed before extending it. For example, with a window of 2GiB, 1GiB must be read by the kernel before Mountpoint informs the S3 client that it may fetch data up to 2GiB ahead of the current read position.

After this change, Mountpoint sends this signal with every read by the kernel. For example, with a readahead window of 2GiB, a 128KiB read by the kernel to fill a page in the page cache results in the CRT read window being advanced to 2GiB beyond the end offset of that 128KiB read.
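
A minimal sketch of the two signalling policies, using simplified, hypothetical types and names for illustration (this is not Mountpoint's actual prefetcher API):

```rust
// Simplified sketch of the read-window signalling change (hypothetical names,
// not Mountpoint's actual types).

/// Tracks how far ahead of the application's reads the S3 client may fetch.
struct ReadWindow {
    /// Offset the application has read up to so far.
    read_offset: u64,
    /// Offset the S3 client is currently allowed to fetch up to.
    window_end: u64,
    /// Desired distance to keep between `read_offset` and `window_end` (e.g. 2 GiB).
    window_size: u64,
}

impl ReadWindow {
    /// Old policy: only signal the client once at least half of the window
    /// has been consumed, then top the window back up.
    #[allow(dead_code)]
    fn on_read_old(&mut self, read_len: u64) -> Option<u64> {
        self.read_offset += read_len;
        let remaining = self.window_end.saturating_sub(self.read_offset);
        if remaining <= self.window_size / 2 {
            self.window_end = self.read_offset + self.window_size;
            Some(self.window_end) // tell the client it may fetch up to here
        } else {
            None // no signal; the client keeps its previous window
        }
    }

    /// New policy: signal the client on every read, keeping the window a full
    /// `window_size` ahead of the latest read offset.
    fn on_read_new(&mut self, read_len: u64) -> u64 {
        self.read_offset += read_len;
        self.window_end = self.read_offset + self.window_size;
        self.window_end
    }
}

fn main() {
    const GIB: u64 = 1 << 30;
    const KIB: u64 = 1 << 10;

    let mut window = ReadWindow {
        read_offset: 0,
        window_end: 2 * GIB,
        window_size: 2 * GIB,
    };

    // A single 128 KiB kernel read immediately advances the window end to
    // 128 KiB + 2 GiB, rather than waiting for 1 GiB to be consumed.
    let new_end = window.on_read_new(128 * KIB);
    assert_eq!(new_end, 128 * KIB + 2 * GIB);
    println!("window end after one 128 KiB read: {new_end} bytes");
}
```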

We observe improved throughput for sequential reads through a single file handle with this approach. For random reads and for reads across multiple file handles, we see no observable change in throughput. We expect this change may be a prerequisite for driving higher throughput with multiple file handles, where this was potentially one bottleneck among others.

Does this change impact existing behavior?

This improves the way Mountpoint signals progress to its S3 client. We expect improvements in throughput, but the end-user behavior hasn't changed in a meaningful way.

Does this change need a changelog entry? Does it require a version change?

A changelog entry has been added to note the change in algorithm, alongside a minor version bump. This communicates the change in case any issue is raised around the new behavior; however, we do not expect any regressions given the benchmarking performed.


By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license and I agree to the terms of the Developer Certificate of Origin (DCO).

@mansi153 mansi153 added the performance PRs to run benchmarks on label Jul 29, 2025
@mansi153 mansi153 marked this pull request as draft July 29, 2025 09:30
@@ -5,6 +5,7 @@
* Adopt a unified memory pool to reduce overall memory usage. ([#1511](https://github.com/awslabs/mountpoint-s3/pull/1511))
* Replace `S3Uri` with `S3Path` and consolidate related types like `Bucket` and `Prefix` into the `s3` module.
([#1535](https://github.com/awslabs/mountpoint-s3/pull/1535))
* `PrefetchGetObject` now has an updated backpressure algorithm advancing the read window with each call to `PrefetchGetObject::read`, with the aim of higher sequential-read throughput. ([#1453](https://github.com/awslabs/mountpoint-s3/pull/1453))
Contributor Author

I'll fix this and add the appropriate changelog entry and version updates
