
liboqs benchmarking still running 0.9.0-rc1 #110

Open
SWilson4 opened this issue Apr 2, 2024 · 5 comments
SWilson4 commented Apr 2, 2024

See https://openquantumsafe.org/benchmarking/visualization/2024-04-02/speed_kem.json.

I think I fixed this by reordering the Docker cleanup commands in the scripts run by cron jobs on various platforms. We'll see when the tests run next, I guess.
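The reordering described above might look something like the following sketch of a cron-driven trigger script. This is purely illustrative: the image name, tag, and run command are assumptions, not the actual (unversioned) scripts on the test machines. The idea is to prune leftovers and pull a fresh image *before* the run, so a failed cleanup at the end of a previous run cannot leave a stale image in place.

```shell
#!/bin/sh
# Hypothetical profiling trigger script (names and paths are assumptions).
# Cleanup runs first, then a fresh pull, so each run uses the current image.
set -e

# 1. Clean up anything left over from the previous run
docker container prune -f
docker image prune -f

# 2. Pull the current benchmarking image (never rely on a local cached copy)
docker pull openquantumsafe/oqs-perf:latest

# 3. Run the benchmarks
docker run --rm openquantumsafe/oqs-perf:latest
```

Checking the scripts into GitHub, as suggested above, would also make ordering bugs like this reviewable.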

TODO in future profiling updates/fixes: add the trigger scripts to GitHub somewhere so we can have proper version control for them.

SWilson4 commented Apr 4, 2024

It turns out that this was not caused by the cron jobs; we simply haven't pushed the oqs-perf image since liboqs main was at 0.9.0-rc1. Oops...

This could be temporarily fixed by pushing a new image to Docker Hub, but at this point we've had so many issues with profiling that I would rather rewrite the infrastructure setup to (attempt to) prevent this from popping up again in the future.
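The temporary fix would amount to rebuilding the image against current liboqs main and pushing it, roughly as sketched below. The repository name and tag are assumptions for illustration; the longer-term rewrite would presumably automate this in CI so the published image can never silently fall behind main.

```shell
# Hypothetical sketch of the stopgap fix (repo/tag names are assumptions):
# rebuild oqs-perf against current liboqs main and publish it.
docker build -t openquantumsafe/oqs-perf:latest .
docker push openquantumsafe/oqs-perf:latest
```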

SWilson4 self-assigned this Apr 4, 2024
baentsch commented Apr 9, 2024

> would rather rewrite the infrastructure setup

ACK. I'm turning off the AWS VMs running profiling then.

baentsch commented Apr 9, 2024

One more thought: OK to close all the open profiling issues, since they're no longer relevant now that a new system is being built? Or do you want to keep them as "bad examples" of what to avoid, @SWilson4? Makes for shorter OQS meeting invites :)

baentsch commented:

@SWilson4 I was doing some cleanup and came across this issue: How are you coming along with the new baentschmarking ? Shall we close this issue without action or should I re-activate something now that there are new algs available for testing?

SWilson4 commented:

> @SWilson4 I was doing some cleanup and came across this issue: How are you coming along with the new baentschmarking ? Shall we close this issue without action or should I re-activate something now that there are new algs available for testing?

I've taken the lack of activity on #112 to indicate that there currently isn't enough interest in b(a)en(ts)chmarking to justify the time required to revitalize the subproject.

It would be nice to have something, though, especially as we now have new algorithms that haven't yet been benchmarked in liboqs. Perhaps I should interpret the lack of discussion to indicate that the status quo for benchmarking was OK (barring bugs) and we should just update the oqs-perf image and fire up the VMs again, being careful to avoid old issues.
