
Add inline and inbounds annotations to unsafe_dot #639


Merged: 4 commits into JuliaDSP:master on Jul 10, 2025

Conversation

@jacobga1998 (Contributor) commented Jul 9, 2025

Hello,

I was using a FIRInterpolation filter and found it was a bottleneck in my code. While profiling I noticed that the compiler wasn't inlining the unsafe_dot function, and also that the first multiplication was missing an @inbounds. In the benchmark below, the execution time after adding @inline and @inbounds is roughly halved. I also tried longer filters and decimation kernels and saw no difference in execution time there, but I added the annotations to the other functions that didn't already have them, in case there is a scenario where it matters.

Let me know if you think this is worth merging.

using DSP, BenchmarkTools

kernel = rand(10);                  # 10-tap FIR kernel
interp_fil = FIRFilter(kernel, 8);  # interpolate by a factor of 8
data = rand(ComplexF64, 10^6);
output = similar(data, 8 * 10^6);   # preallocated output for the interpolated signal

@benchmark filt!(output, interp_fil, data)

Before PR(julia 1.11)

julia> @benchmark filt!(output, interp_fil, data)
BenchmarkTools.Trial: 124 samples with 1 evaluation per sample.
 Range (min … max):  35.798 ms … 45.617 ms  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     40.198 ms              ┊ GC (median):    0.00%
 Time  (mean ± σ):   40.403 ms ±  1.944 ms  ┊ GC (mean ± σ):  0.00% ± 0.00%

                     ▂ ▂▆▂  ▆▅▂█    ▂
  ▄▁▁▄▄▅▁▁▄▄▁▁▄▅▄▄▅▄██▇███▅██████▇▅███▄▅▁▅▁▁▅▄▁▄▁▁▅▁▄▄▄▄▄▁▅▅▄ ▄
  35.8 ms         Histogram: frequency by time        45.3 ms <

 Memory estimate: 16 bytes, allocs estimate: 1.

After PR

julia> @benchmark filt!(output, interp_fil, data)
BenchmarkTools.Trial: 228 samples with 1 evaluation per sample.
 Range (min … max):  19.426 ms … 27.220 ms  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     21.760 ms              ┊ GC (median):    0.00%
 Time  (mean ± σ):   21.913 ms ±  1.515 ms  ┊ GC (mean ± σ):  0.00% ± 0.00%

  ▃  ▂  ▃ ▂▄  ▄█▃▅ ▇▃▄▃▅▃▃▃▃   ▃ ▃
  █▅▃█▇▆██████████▇██████████▇▇█▆█▆▆▆▁▅▇▃▃▃▃▁▁▁▃▅▁▃▃▃▃▁▁▁▁▃▁▅ ▅
  19.4 ms         Histogram: frequency by time        26.7 ms <

 Memory estimate: 16 bytes, allocs estimate: 1.
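
For reference, the change boils down to an @inline on unsafe_dot plus an @inbounds covering the first multiplication. A minimal sketch of the pattern (illustrative only; unsafe_dot_sketch is a hypothetical stand-in, and the real unsafe_dot in DSP.jl works on the filter history and input buffers with a different signature):

# Sketch of the annotation pattern, not the actual DSP.jl code.
@inline function unsafe_dot_sketch(h::AbstractVector, x::AbstractVector, n::Integer)
    @inbounds a = h[1] * x[1]   # the first multiplication is the one that was missing @inbounds
    @inbounds for i in 2:n
        a += h[i] * x[i]        # no bounds checks inside the hot loop
    end
    return a
end

With @inline the call disappears into the caller's loop, and with @inbounds the bounds checks are elided, which together account for the roughly halved runtime above.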


codecov bot commented Jul 9, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 98.13%. Comparing base (0660d55) to head (8253510).
Report is 1 commit behind head on master.

Additional details and impacted files
@@           Coverage Diff           @@
##           master     #639   +/-   ##
=======================================
  Coverage   98.13%   98.13%           
=======================================
  Files          19       19           
  Lines        3277     3277           
=======================================
  Hits         3216     3216           
  Misses         61       61           


@martinholters (Member)

Any idea how much difference the @inline and the @inbounds each make on their own?

@jacobga1998 (Contributor, Author)

It is mostly the @inline. Rerunning the benchmark:

Without the @inbounds:
julia> @benchmark filt!(output, interp_fil, data)
BenchmarkTools.Trial: 190 samples with 1 evaluation per sample.
 Range (min … max):  21.867 ms … 44.836 ms  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     24.969 ms              ┊ GC (median):    0.00%
 Time  (mean ± σ):   26.243 ms ±  3.886 ms  ┊ GC (mean ± σ):  0.00% ± 0.00%

   ▇ █▇▄▄▄ ▄▃▁
  ▅▅▆█▆█████▇████▇▅▁▃▃▁▄▆▁▁▄▃▆▃▅▅▅▅▄▄▃▄▁▅▁▁▁▁▄▁▁▁▁▁▁▁▁▁▁▃▁▁▃▃ ▃
  21.9 ms         Histogram: frequency by time        39.7 ms <

 Memory estimate: 16 bytes, allocs estimate: 1.

Without the @inline:

julia> @benchmark filt!(output, interp_fil, data)
BenchmarkTools.Trial: 124 samples with 1 evaluation per sample.
 Range (min … max):  33.064 ms … 54.623 ms  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     39.826 ms              ┊ GC (median):    0.00%
 Time  (mean ± σ):   40.553 ms ±  4.594 ms  ┊ GC (mean ± σ):  0.00% ± 0.00%

         ▁  ▅   █  ▂▄  ▄
  ▃▁▅█▁▅▅▆██▆▅█▆█▅█▅███▆▆██▃▆▃▁▃▃▁▆▃▃▅▁▃▁▅▅▆▃▁▅▃▁▃▁▃▁▅▁▁▁▁▁▁▅ ▃
  33.1 ms         Histogram: frequency by time        53.6 ms <

 Memory estimate: 16 bytes, allocs estimate: 1.

@martinholters (Member)

Thanks. In my opinion, overriding the compiler heuristics needs good justification, and those benchmarks surely qualify. The additional @inbounds seem about as (un)safe as the existing ones, so it seems fair to add them, too.

@wheeheee (Member) commented Jul 9, 2025

But if this helps only interpolation, maybe we can use call-site inlining for just that?
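
Call-site inlining (available since Julia 1.8) puts the @inline on the call rather than on the definition, so only the hot path pays for it. A hedged sketch with hypothetical stand-in names:

# dotprod and apply_kernel are placeholders, not the real DSP.jl functions.
dotprod(h, x) = sum(h[i] * x[i] for i in eachindex(h, x))

function apply_kernel(h, x)
    # Only this call is forced to inline (Julia >= 1.8);
    # other callers of dotprod keep the compiler's default heuristics.
    return @inline dotprod(h, x)
end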

@martinholters (Member)

But if this helps only interpolation, maybe we can use call-site inlining for just that?

Good point. Can you try that, @jacobga1998?

@jacobga1998 (Contributor, Author)

Call-site inlining seems to give the same benchmark results; I uploaded that version.

@wheeheee (Member) left a review comment

Just a style preference: I like the @inline macro right beside the function being called. Other than that, LGTM. Also, a little benchmarking suggests to me that BLAS.dot is slower than the Julia version for small kernel lengths, but that's a separate issue and there's no urgent need to change it.
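
For the record, a quick way to probe that side note (a sketch only; results depend on the machine, BLAS backend, and element type):

using BenchmarkTools, LinearAlgebra

h = rand(10); x = rand(10)                  # small, kernel-sized vectors
naive_dot(a, b) = sum(a[i] * b[i] for i in eachindex(a, b))

@btime LinearAlgebra.BLAS.dot($h, $x)       # pays the BLAS call overhead
@btime naive_dot($h, $x)                    # plain loop the compiler can inline and SIMD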

@jacobga1998 (Contributor, Author)

Agreed, that looks nicer. Added the suggestions.

@wheeheee merged commit 68341c4 into JuliaDSP:master on Jul 10, 2025
10 checks passed