
update umbra results #295

Open

toschmidt wants to merge 1 commit into main
Conversation

toschmidt
Contributor

The latest results from Umbra.

Measurements were done on c6a.metal and c6a.4xlarge.

@chhetripradeep
Collaborator

So far I have only tested the c6a.4xlarge instance type, but the results I obtained deviate significantly from those reported in the PR. Here are the logs for your reference: https://pastila.nl/?004f715e/b8abaf8f576db05761d58384f793885e#8fprgaf2QyvhCjaZW8EOgA==

@chhetripradeep
Collaborator

chhetripradeep commented Jan 24, 2025

Unrelated to this particular PR, but related to Umbra's benchmark.sh script:

Why does it enable direct I/O rather than the default buffered I/O (as the other tests do)?
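For context, here is a minimal sketch of what the direct I/O setting changes at the OS level. This is a generic Linux illustration, not Umbra's actual code or the contents of benchmark.sh, and the file name is hypothetical:

```python
# Generic illustration (not Umbra's implementation): O_DIRECT bypasses the OS
# page cache, so every read is served by the device; the default buffered I/O
# goes through the page cache and can be satisfied from memory on repeat runs.
import mmap
import os

PATH = "data.bin"   # hypothetical file standing in for the database files
BLOCK = 4096        # O_DIRECT needs block-aligned, block-sized buffers

fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)   # Linux-only flag
try:
    buf = mmap.mmap(-1, BLOCK)        # anonymous mmap is page-aligned
    n = os.readv(fd, [buf])           # read comes straight from the device
    print(f"read {n} bytes with O_DIRECT")
finally:
    os.close(fd)
```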

@toschmidt
Contributor Author

@chhetripradeep thanks for validating the results.
Actually, for the hot run your results look better than ours. Interestingly, for the cold run your results are roughly 2x slower, which is also consistent with the Q23 result from the hot run and with the load time, which also doubles. In all these cases the data must be read from the EBS device. For the other queries in the hot run the working set is cached in memory, hence the results are the same.

My assumption is that you used a smaller EBS volume. Check out this AWS document: https://docs.aws.amazon.com/ebs/latest/userguide/general-purpose.html#gp2-performance

Throughput performance

gp2 volumes deliver throughput between 128 MiB/s and 250 MiB/s, depending on the volume size. Throughput performance is provisioned as follows:

Volumes that are 170 GiB and smaller deliver a maximum throughput of 128 MiB/s.

Volumes larger than 170 GiB but smaller than 334 GiB can burst to a maximum throughput of 250 MiB/s.

Volumes that are 334 GiB and larger deliver 250 MiB/s.

What EBS size did you use?
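To make the quoted thresholds concrete, here is a tiny sketch (the helper name is mine, not an AWS API) that maps a gp2 volume size to the throughput tier described above:

```python
# Maps a gp2 volume size to its maximum throughput tier, using only the
# thresholds quoted from the AWS documentation above.
def gp2_max_throughput(volume_gib: int) -> str:
    if volume_gib <= 170:
        return "128 MiB/s maximum"
    if volume_gib < 334:
        return "can burst up to 250 MiB/s"
    return "250 MiB/s"

for size_gib in (170, 250, 500):
    print(f"{size_gib} GiB -> {gp2_max_throughput(size_gib)}")
```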

@chhetripradeep
Collaborator

What EBS size did you use?

Indeed, that might be the reason. I resized the disk from 250G to 500G, and the change may not have fully taken effect. I will create a fresh instance to eliminate this source of noise.

@chhetripradeep (Collaborator) left a comment

With a 500G disk, the results are reproducible.

@chhetripradeep
Collaborator

chhetripradeep commented Jan 24, 2025

If you are interested in the difference between direct I/O and buffered I/O results: the cold run with buffered I/O is a bit slower than the cold run with direct I/O: https://pastila.nl/?0028f31a/899da606e28144e6b024a9050c5f6fdc#RESfw30rg8hih9TPjPcqjA==
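As a side note on reproducing the buffered-I/O cold run: the usual generic way to get a cold page cache on Linux is to sync and drop the caches between runs. This is a general technique, not necessarily what benchmark.sh does:

```python
# Generic Linux technique for a cold buffered-I/O run: flush dirty pages and
# drop the page cache so the next query run has to read from the EBS device.
# Requires root; equivalent to `sync && echo 3 > /proc/sys/vm/drop_caches`.
import os
import subprocess

def drop_page_cache() -> None:
    os.sync()
    subprocess.run(
        ["sudo", "tee", "/proc/sys/vm/drop_caches"],
        input="3", text=True, check=True, capture_output=True,
    )

# With direct I/O the page cache is bypassed anyway, so dropping it mainly
# matters for the buffered-I/O cold run being compared here.
drop_page_cache()
```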

@toschmidt
Contributor Author

If you are interested in the difference between direct I/O and buffered I/O results: the cold run with buffered I/O is a bit slower than the cold run with direct I/O: https://pastila.nl/?0028f31a/899da606e28144e6b024a9050c5f6fdc#RESfw30rg8hih9TPjPcqjA==

Thanks for evaluating this. We use direct I/O in all our deployments and will change the default for this setting in the next release.
