Optimize einsum contraction order #30


Merged
merged 3 commits into main on Jul 19, 2025

Conversation

@AntonOresten (Member) commented on Jul 17, 2025

Since einsum isn't a generated function (and the OMEinsum backend is mostly not type stable anyway, see under-Peter/OMEinsum.jl#97), array sizes are known at runtime. This PR is meant to allow einsum contraction order optimization. I don't know exactly what sets TreeSA apart from the other optimizers, and I'm not sure whether I'd want them to be user-facing here. Related is #26.
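
A minimal sketch of the idea (the exact hookup in this PR may differ; GreedyMethod here is just one of the optimizers OMEinsum exposes): with concrete arrays in hand, the label sizes are known, so the contraction order can be optimized before the expression is evaluated.

using OMEinsum

A, B, v = randn(30, 40), randn(40, 20), randn(40)

code  = ein"ij,jk,j->"                          # unoptimized EinCode
sizes = Dict('i' => 30, 'j' => 40, 'k' => 20)   # known from the arrays

# GreedyMethod is a cheap heuristic; TreeSA searches harder but is slower to run.
optcode = optimize_code(code, sizes, OMEinsum.GreedyMethod())

optcode(A, B, v)   # contract using the optimized order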

Closes #24

@samuelsonric commented on Jul 17, 2025

Helpful? The image is from this page.

@AntonOresten (Member, Author)

Thanks! I suppose that's logarithmic?

I feel like einsum is such a rabbit hole, and the time-to-optimize vs. time-to-contract trade-off is tricky; I obviously want both to be fast. I don't really know how optimized einsum expressions are found and used in practice. Is the contraction order resolved at runtime on every call, or are tensors mostly of fixed size, so the order can be optimized ahead of time once the sizes are known? Would something like Reactant.jl be able to remove duplicate optimization work, since it compiles functions while keeping track of sizes?
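
If sizes really are fixed, I imagine the optimization could be hoisted out of the hot loop and paid for once, roughly like this (a sketch, assuming the optimized code object is callable just like the original einsum code):

using OMEinsum

code  = ein"ij,jk,j->"
sizes = Dict('i' => 30, 'j' => 40, 'k' => 20)
optcode = optimize_code(code, sizes, OMEinsum.TreeSA())   # one-off optimization cost

for _ in 1:1000
    A, B, v = randn(30, 40), randn(40, 20), randn(40)
    optcode(A, B, v)   # reuses the precomputed contraction order
end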

I was surprised by how slow e.g. TreeSA is for a seemingly simple task:

julia> @be optimize_code(ein"ij,jk,j->", Dict('i' => 30, 'j' => 40, 'k' => 20), OMEinsum.TreeSA())
Benchmark: 15 samples with 1 evaluation
 min    6.055 ms (605052 allocs: 30.000 MiB, 8.83% gc time)
 median 6.772 ms (605052 allocs: 30.000 MiB, 12.98% gc time)
 mean   6.857 ms (605052 allocs: 30.000 MiB, 13.93% gc time)
 max    7.934 ms (605052 allocs: 30.000 MiB, 19.68% gc time)
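
For comparison, the same benchmark with the cheaper GreedyMethod heuristic (assuming the @be above comes from Chairmarks.jl; no numbers claimed here) would look like:

using Chairmarks, OMEinsum

@be optimize_code(ein"ij,jk,j->", Dict('i' => 30, 'j' => 40, 'k' => 20), OMEinsum.GreedyMethod())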

I'm inclined to drop einsum in a breaking release, since it would be so difficult to get right, and my main contribution is really the zero-overhead rearrange and friends.

@AntonOresten mentioned this pull request on Jul 19, 2025
@AntonOresten reopened this on Jul 19, 2025
codecov bot commented on Jul 19, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 100.00%. Comparing base (dd3f296) to head (d6dbaaa).
Report is 3 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff            @@
##              main       #30   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files           12        12           
  Lines          296       313   +17     
=========================================
+ Hits           296       313   +17     


@AntonOresten merged commit 43fc9be into main on Jul 19, 2025
5 of 6 checks passed