
Conversation


@WAcry WAcry commented Nov 17, 2025

Thank you for your great work for maintaining such a high-quality open-source library. I've used it for a while, really appreciate all the effort that has gone into it.

In our scenario, we evaluate a small set of identical expressions under very high concurrency, on the order of 100,000 invocations per second.

To support this, we already cache the Lambda instance and reuse it across calls. However, we found that the current Lambda.Invoke path leaves room for optimization, especially in extremely hot paths: every invocation goes through DynamicInvoke and allocates via repeated LINQ queries.

This PR removes a hot-path LINQ query (to cut allocations that put pressure on the GC), introduces a fast invoker path for Lambda that replaces DynamicInvoke, and adds a "prefer interpretation" option for Eval, reducing allocations and improving performance in high-frequency scenarios.


Benchmark

BenchmarkDotNet v0.14.0, Windows 11 (10.0.26200.7171)
11th Gen Intel Core i7-11800H 2.30GHz, 1 CPU, 16 logical and 8 physical cores
.NET SDK 9.0.307
[Host] : .NET 8.0.0 (8.0.23.53103), X64 RyuJIT AVX-512F+CD+BW+DQ+VL+VBMI
DefaultJob : .NET 8.0.0 (8.0.23.53103), X64 RyuJIT AVX-512F+CD+BW+DQ+VL+VBMI

After:

| Method | Mean | Error | StdDev | Gen0 | Gen1 | Allocated |
|--------|-----:|------:|-------:|-----:|-----:|----------:|
| 'Invoke cached lambda (object[])' | 2.497 ms | 0.0179 ms | 0.0167 ms | 187.5000 | - | 2.29 MB |
| 'Invoke cached lambda (IEnumerable&lt;Parameter&gt;)' | 90.553 ms | 0.3137 ms | 0.2620 ms | 3000.0000 | - | 37.38 MB |
| 'Eval (IEnumerable&lt;Parameter&gt;)' | 16,550.422 ms | 237.4673 ms | 222.1270 ms | 1436000.0000 | 27000.0000 | 17184.89 MB |

Before:

| Method | Mean | Error | StdDev | Gen0 | Gen1 | Gen2 | Allocated |
|--------|-----:|------:|-------:|-----:|-----:|-----:|----------:|
| 'Invoke cached lambda (object[])' | 216.8 ms | 1.53 ms | 1.43 ms | 21666.6667 | - | - | 260.93 MB |
| 'Invoke cached lambda (IEnumerable&lt;Parameter&gt;)' | 179.8 ms | 1.36 ms | 1.27 ms | 16000.0000 | - | - | 191.5 MB |
| 'Eval (IEnumerable&lt;Parameter&gt;)' | 23,721.9 ms | 147.18 ms | 122.90 ms | 1436000.0000 | 373000.0000 | 7000.0000 | 17176.89 MB |

After the fast-invoker optimization we see a dramatic reduction in both latency (~70×) and allocations (-99%) for this hot-path scenario.

Eval() is also about 1.7× faster using interpretation instead of compilation. This matters less for us since we cache the Lambda, but it was simple to add. I see there was an earlier discussion about this in #362.


What This PR Changes

1. Optimize Lambda invocation path and reduce allocations

Goal: Avoid repeated LINQ allocations and heavy DynamicInvoke usage on every call, especially when the same lambda is invoked extremely frequently with consistent argument shapes.

Concretely:

Pre-snapshot and cache parameter metadata in Lambda (see the sketch below):

• Convert DeclaredParameters / UsedParameters to arrays and cache the corresponding ParameterExpression instances.
• Precompute the mapping "used parameter index → declared parameter index" so we don't have to enumerate and look up parameters on each invocation.
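A minimal sketch of this precomputation (field names follow the PR; the surrounding constructor code is a simplifying assumption, not the exact implementation):

```csharp
// Inside the Lambda constructor (illustrative sketch, not the exact PR code).
_declaredParameters = DeclaredParameters.ToArray();
_usedParameters = UsedParameters.ToArray();

// For each used parameter, remember its position in the declared list,
// so Invoke never has to search by name on the hot path again.
_usedToDeclaredIndex = new int[_usedParameters.Length];
for (var i = 0; i < _usedParameters.Length; i++)
{
	var name = _usedParameters[i].Name;
	_usedToDeclaredIndex[i] = Array.FindIndex(
		_declaredParameters,
		d => string.Equals(d.Name, name, _parserArguments.Settings.KeyComparison));
}
```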

Introduce a fast path for invocation in declared-parameter order:

• Add a fast invocation delegate (e.g. _fastInvokerFromDeclared) built from an expression tree that takes an object[] and performs strongly typed invocation logic (sketched below).
• When the number and types of arguments exactly match the expected parameters, we go through this fast path, avoiding:
  • DynamicInvoke
  • repeated boxing/unboxing
  • extra allocations
• If the arguments do not match (wrong count or incompatible types), we safely fall back to the original DynamicInvoke path to preserve behavior and exception semantics.
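As an illustration of the technique (a sketch, not the PR's exact code), a Func&lt;object[], object&gt; invoker can be built once from the lambda's expression tree, casting each array element to the declared parameter type:

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

static Func<object[], object> BuildFastInvoker(LambdaExpression lambda)
{
	// Single object[] parameter carrying the arguments in declared order.
	var args = Expression.Parameter(typeof(object[]), "args");

	// Cast each array element to the corresponding declared parameter type.
	var typedArgs = lambda.Parameters
		.Select((p, i) => (Expression)Expression.Convert(
			Expression.ArrayIndex(args, Expression.Constant(i)), p.Type))
		.ToArray();

	// Invoke the original lambda with the typed arguments and box the result
	// (assumes a non-void return type).
	var body = Expression.Convert(Expression.Invoke(lambda, typedArgs), typeof(object));

	return Expression.Lambda<Func<object[], object>>(body, args).Compile();
}
```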

Optimize Invoke overloads:

• Invoke(IEnumerable&lt;Parameter&gt;): replace LINQ-based matching with an implementation based on the cached _usedParameters mapping. When parameters fully match, route to the fast path; otherwise, fall back to the existing logic.
• Invoke(object[] args): build the invocation argument array directly in declared-parameter order and reuse the fast path. Only fall back when argument types or counts do not match.

Overall, this significantly reduces per-call allocations and improves performance in high-frequency, cached-lambda scenarios.
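From the caller's perspective nothing changes; a cached lambda is still invoked the same way (a hypothetical usage sketch):

```csharp
using DynamicExpresso;

var interpreter = new Interpreter();
var lambda = interpreter.Parse("a + b",
	new Parameter("a", typeof(int)),
	new Parameter("b", typeof(int)));

// Positional arguments in declared order now hit the fast path when
// counts and types match; otherwise the old DynamicInvoke fallback
// preserves the previous behavior.
var result = lambda.Invoke(3, 4); // 7
```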


2. Adjust Eval default behavior to favor interpretation

Goal: Improve performance for typical Eval scenarios, which are often one-off evaluations where compilation overhead dominates.

Changes:

Interpreter.Eval(string, Type, params Parameter[]) is updated to:

• call ParseAsLambda(..., preferInterpretation: true), and
• execute the resulting Lambda via lambda.Invoke(parameters).

From a library user's perspective, the public API stays the same, but:

• the default evaluation strategy for Eval becomes interpretation-first;
• this reduces IL generation and JIT overhead, which is especially beneficial when Eval is used frequently in hot paths or in environments where startup latency and memory pressure matter (see the standalone illustration below).
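The interpretation-first strategy builds on the expression interpreter that ships with modern .NET; a minimal standalone illustration of the trade-off (plain BCL code, not DynamicExpresso internals):

```csharp
using System;
using System.Linq.Expressions;

Expression<Func<int, int>> expr = x => x * 2;

// JIT-compiles IL: fastest per call, but pays compilation cost up front.
var compiled = expr.Compile();

// Interprets the tree: slower per call, but skips IL generation and JIT
// entirely, which suits one-off evaluations.
var interpreted = expr.Compile(preferInterpretation: true);

Console.WriteLine(compiled(21));    // 42
Console.WriteLine(interpreted(21)); // 42
```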


Compatibility

• All changes are limited to internal constructors, private helpers, and invocation internals.
• There are "almost" no breaking changes to the public API surface, unless I missed anything.
• When the fast path cannot be used (e.g., argument count/type mismatch), the code falls back to the original DynamicInvoke logic, preserving:
  • exception types,
  • observable behavior, and
  • compatibility with existing code.

Thank you again for providing and maintaining this project. I hope these optimizations are useful, and I'm happy to adjust the implementation if you have any suggestions or style preferences.

Enhance Eval and Lambda classes: introduce preferInterpretation flag for optimized expression evaluation
@davideicardi
Member

Thank you @WAcry ! Super optimization!

I will study the code a little further, but for now I don't see problems; it's just a bit more complex 😄.

Just a curiosity: you cannot use the compiled delegate in your real scenario?

If you want, we could include the benchmark code somewhere, maybe in sample/benchmark?

@WAcry
Author

WAcry commented Nov 22, 2025

> Just a curiosity: you cannot use the compiled delegate in your real scenario?

In our real usage we unfortunately can’t meaningfully use Compile<TDelegate>():

  • At the call site we don’t know either the number or the CLR types of the parameters at compile time.
  • The expression text itself comes from runtime configuration, not from code.
  • The set of variables in the expression is discovered via DetectIdentifiers.
  • The parameter types are inferred from the first runtime values, which we fetch from data sources as a Dictionary<string, object> and treat as (value?.GetType() ?? typeof(object)).

Because of that, we don’t have a static TDelegate that we can write in our own code which would match all these dynamically shaped cases. We’d still end up with a Delegate instance and have to invoke it in a general way, which is exactly the path this PR tries to optimize (removing DynamicInvoke, avoiding LINQ allocations, etc.).

So in short: Lambda.Compile<TDelegate>() is great if we know the signature up front, but in our scenario the shape is only known at runtime, so we rely on Lambda.Invoke(...) as the generic entry point.
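For context, a hypothetical sketch of this runtime-shaped pattern (the expression text, values, and variable names are illustrative; DetectIdentifiers and Parse are the DynamicExpresso entry points we rely on):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using DynamicExpresso;

// Hypothetical inputs: expression from configuration, values from a data source.
var expressionText = "price * quantity";
var values = new Dictionary<string, object> { ["price"] = 9.99m, ["quantity"] = 3 };

var interpreter = new Interpreter();

// Discover the variables the expression actually uses.
var identifiers = interpreter.DetectIdentifiers(expressionText);

// Infer each parameter's type from the first runtime value we see.
var declared = identifiers.UnknownIdentifiers
	.Select(name => new Parameter(name, values[name]?.GetType() ?? typeof(object)))
	.ToArray();

var lambda = interpreter.Parse(expressionText, declared);

// No static TDelegate matches this runtime-discovered shape,
// so Lambda.Invoke(...) is the generic entry point.
var result = lambda.Invoke(
	declared.Select(p => new Parameter(p.Name, values[p.Name])).ToArray());
```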

On the benchmark side: I’ve just pushed a small BenchmarkDotNet project under benchmark/DynamicExpresso.Benchmarks.


Copilot AI left a comment


Pull request overview

This PR introduces significant performance optimizations for the Lambda invocation path, achieving a 70× speedup and 99% reduction in allocations for high-frequency cached lambda scenarios. The changes also make Eval() prefer interpretation over compilation for one-off expressions, providing a 1.7× performance improvement.

Key Changes:

  • Introduced fast invoker path using compiled expression trees to replace DynamicInvoke in hot paths
  • Pre-computed and cached parameter metadata in Lambda constructor to eliminate repeated LINQ allocations
  • Modified Eval() to use interpretation by default instead of compilation for better one-off expression performance

Reviewed changes

Copilot reviewed 7 out of 8 changed files in this pull request and generated 5 comments.

| File | Description |
| --- | --- |
| src/DynamicExpresso.Core/Lambda.cs | Core optimization: adds fast invoker path, parameter caching, and type-checking infrastructure for high-performance invocation |
| src/DynamicExpresso.Core/Interpreter.cs | Updates Eval() to prefer interpretation over compilation for one-off expressions |
| benchmark/DynamicExpresso.Benchmarks/Program.cs | New benchmark harness using BenchmarkDotNet |
| benchmark/DynamicExpresso.Benchmarks/LambdaBenchmarks.cs | Benchmark implementations for measuring Lambda invocation performance |
| benchmark/DynamicExpresso.Benchmarks/DynamicExpresso.Benchmarks.csproj | Benchmark project configuration |
| README.md | Adds documentation for running benchmarks |
| DynamicExpresso.sln | Integrates benchmark project into solution |
| .gitignore | Excludes BenchmarkDotNet artifacts from version control |


Comment on lines 192 to 195
```csharp
if (_usedCount == 0)
{
	return _fastInvokerFromDeclared.Value(Array.Empty<object>());
}
```

Copilot AI Nov 24, 2025


[nitpick] When _usedCount == 0, the method returns early at line 194 without checking whether matchedValues.Count == 0. This is correct behavior, but the structure is confusing: the if (_usedCount == 0) check comes after the matching loop, which will naturally produce an empty matchedValues list when _usedCount == 0.

Consider restructuring to check _usedCount == 0 before the matching loop, avoiding the unnecessary loop execution and list allocation.
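A sketch of the suggested restructuring (hypothetical method shape; the field and helper names follow the quoted hunks):

```csharp
private object InvokeWithMatching(IEnumerable<Parameter> parameters)
{
	// Empty case first: skip the matching loop and the list allocation entirely.
	if (_usedCount == 0)
		return _fastInvokerFromDeclared.Value(Array.Empty<object>());

	var paramList = parameters as IList<Parameter> ?? parameters.ToList();
	var matchedValues = new List<object>(_usedCount);
	foreach (var used in _usedParameters)
	{
		foreach (var actual in paramList)
		{
			if (actual != null &&
				used.Name.Equals(actual.Name, _parserArguments.Settings.KeyComparison))
			{
				matchedValues.Add(actual.Value);
			}
		}
	}

	return InvokeWithUsedParameters(matchedValues.ToArray());
}
```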

Member


@WAcry It seems to be a valid suggestion.

Author


Good point.

Comment on lines 180 to 190
```csharp
foreach (var used in _usedParameters)
{
	foreach (var actual in paramList)
	{
		if (actual != null &&
			used.Name.Equals(actual.Name, _parserArguments.Settings.KeyComparison))
		{
			matchedValues.Add(actual.Value);
		}
	}
}
```

Copilot AI Nov 24, 2025


[nitpick] The parameter matching logic mirrors the original LINQ-based implementation, but both allow duplicate parameter names in the input collection, which can lead to unexpected behavior. If parameters contains duplicate names (e.g., [Parameter("a", 1), Parameter("a", 2)]) and the expression uses "a", both values will be added to matchedValues, resulting in a count mismatch and a fallback to InvokeWithUsedParameters with an incorrect array.

Consider adding validation to reject duplicate parameter names in the input, or documenting this behavior clearly.
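If validation were preferred, a minimal sketch of such a guard (hypothetical code, reusing the settings comparison from the quoted hunk):

```csharp
// Reject duplicate parameter names up front instead of letting
// DynamicInvoke fail later with an argument-count mismatch.
var seen = new HashSet<string>(
	StringComparer.FromComparison(_parserArguments.Settings.KeyComparison));
foreach (var p in paramList)
{
	if (p != null && !seen.Add(p.Name))
		throw new ArgumentException(
			$"Duplicate parameter name '{p.Name}'.", nameof(parameters));
}
```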

Member


@WAcry This seems to be a correct suggestion. What do you think?

Author


As Copilot suggests, the refactored code still mirrors the original behavior here: if the caller passes duplicate names (e.g. [Parameter("a", 1), Parameter("a", 2)]), we end up with extra entries in the values array and ultimately let DynamicInvoke throw (e.g. due to an argument count mismatch), just as before. Let me know if you'd prefer to throw an exception directly instead.


@davideicardi davideicardi left a comment


Again thank you for the PR and the performance improvement! Really appreciated.

I'm a bit hesitant because the code seems quite a bit more complex than before.
I know that this is common when applying optimizations... but if you can get similar improvements (maybe not all of them) with more maintainable code, I think that would be better.

What do you think? Could it be possible?

P.S. I ran a Copilot review; a couple of its comments seem to make sense to me. I resolved the others, as they were irrelevant to me.

Comment on lines 25 to 38
```csharp
private readonly Parameter[] _declaredParameters;
private readonly Parameter[] _usedParameters;
private readonly ParameterExpression[] _declaredParameterExpressions;

// For each used parameter index, which declared parameter index it corresponds to.
private readonly int[] _usedToDeclaredIndex;
private readonly bool _allUsedAndInDeclaredOrder;
private readonly Type[] _effectiveUsedTypes;
private readonly bool[] _usedAllowsNull;
private readonly int _declaredCount;
private readonly int _usedCount;

// Fast path: declared-order object[] -> result.
private readonly Lazy<Func<object[], object>> _fastInvokerFromDeclared;
```
Member


Do you think we can consolidate these various arrays/variables into one or more simpler aggregated objects (e.g. a LambdaInvocationContext?) so we keep most of the performance while making the hot path easier to read and maintain?

The performance gains are great and the direction makes sense; this is just about reducing branching and scattered variables.
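One possible shape for such a consolidation (a hypothetical sketch of the suggested aggregate, not the code the PR ended up with):

```csharp
using System;

// Groups the precomputed invocation state so Lambda holds one field
// instead of many scattered arrays and flags.
internal sealed class LambdaInvocationContext
{
	public Parameter[] DeclaredParameters { get; init; }
	public Parameter[] UsedParameters { get; init; }
	public int[] UsedToDeclaredIndex { get; init; }
	public Lazy<Func<object[], object>> FastInvokerFromDeclared { get; init; }

	public int DeclaredCount => DeclaredParameters.Length;
	public int UsedCount => UsedParameters.Length;
}
```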


@WAcry WAcry Nov 25, 2025


Good point! I removed two of the less important arrays and moved all invocation-related state and logic into the InvocationContext class to better encapsulate the complexity.

@davideicardi
Member

@metoule What do you think? Suggestions or ideas?

@metoule
Contributor

metoule commented Dec 1, 2025

Thanks for the PR! There was indeed a need for improvement.

I would prefer to split the PR in two: one that keeps the current DynamicInvoke behavior along with the rest of the improvements (preferInterpretation, _usedToDeclaredIndex, etc.) but without the new _fastInvokerFromDeclared. I think that alone will already bring major benefits, while being safer to release.

I also find it surprising that building a new delegate that calls the Invoke method of the first one helps that much. Can't we call the Invoke method directly? If a new delegate really is worth having, we may be able to build it without compiling two expression trees (the old delegate and the new one).
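For example (a hedged sketch of the single-compilation idea; source stands for the LambdaExpression the Lambda already holds, and a non-void body is assumed), the parameters of the original tree can be substituted with casts from the object[] argument, so only one Compile() ever runs:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

static Func<object[], object> BuildInvoker(LambdaExpression source)
{
	var args = Expression.Parameter(typeof(object[]), "args");

	// Map each original parameter to a typed read from the object[] argument.
	var map = source.Parameters
		.Select((p, i) => (p, cast: (Expression)Expression.Convert(
			Expression.ArrayIndex(args, Expression.Constant(i)), p.Type)))
		.ToDictionary(x => x.p, x => x.cast);

	// Inline the original body directly into the wrapper: one expression
	// tree, one Compile() call, no inner delegate.
	var body = new ReplaceParametersVisitor(map).Visit(source.Body);
	return Expression.Lambda<Func<object[], object>>(
		Expression.Convert(body, typeof(object)), args).Compile();
}

sealed class ReplaceParametersVisitor : ExpressionVisitor
{
	private readonly IReadOnlyDictionary<ParameterExpression, Expression> _map;
	public ReplaceParametersVisitor(IReadOnlyDictionary<ParameterExpression, Expression> map)
		=> _map = map;
	protected override Expression VisitParameter(ParameterExpression node)
		=> _map.TryGetValue(node, out var replacement) ? replacement : node;
}
```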
