Hi, I was recently optimising some hashmap-heavy code. We got a big speedup from moving from HashMap<Object, Double> to Object2DoubleOpenHashMap, so first let me say thank you for fastutil, it's great.
We noticed an optimisation opportunity in the profiling afterwards, though. I expected mergeDouble to hash the key once, find its position in the table, and then update the value in that position, like HashMap.compute does.
But in the profile I see that mergeDouble is only implemented as a default method on the Object2DoubleMap interface, which knows nothing about hashing, and is written in terms of calling double getDouble(Object) and then put(Object, double). This ends up hashing the Object twice and probing the map twice (including calling .equals to check that a candidate slot holds the right key).
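To illustrate the single-probe idea, here's a minimal sketch, not fastutil's actual code: a toy open-addressing Object-to-double map (the class name TinyObject2DoubleMap and its fixed capacity are made up for the example) whose mergeDouble hashes the key once and reuses the probed slot for both the read and the write.

```java
import java.util.function.DoubleBinaryOperator;

// Illustrative sketch only: fixed capacity, no rehashing, no removal.
final class TinyObject2DoubleMap {
    private final Object[] keys = new Object[64];   // power-of-two capacity
    private final double[] values = new double[64];
    private static final int MASK = 63;

    // Hash once, then linear-probe to the key's slot (or the first empty one).
    private int slot(Object key) {
        int pos = key.hashCode() & MASK;
        while (keys[pos] != null && !keys[pos].equals(key)) {
            pos = (pos + 1) & MASK;
        }
        return pos;
    }

    double getOrDefault(Object key, double defaultValue) {
        int pos = slot(key);
        return keys[pos] == null ? defaultValue : values[pos];
    }

    // Single probe: the slot found once serves both the lookup and the store,
    // instead of a getDouble(...) probe followed by a separate put(...) probe.
    double mergeDouble(Object key, double value, DoubleBinaryOperator f) {
        int pos = slot(key);
        if (keys[pos] == null) {
            keys[pos] = key;
            values[pos] = value;
        } else {
            values[pos] = f.applyAsDouble(values[pos], value);
        }
        return values[pos];
    }
}

public class MergeSketch {
    public static void main(String[] args) {
        TinyObject2DoubleMap m = new TinyObject2DoubleMap();
        m.mergeDouble("a", 1.0, Double::sum);
        m.mergeDouble("a", 2.0, Double::sum);
        System.out.println(m.getOrDefault("a", 0.0)); // prints 3.0
    }
}
```

The point is just that the key is hashed and compared once per merge; a real implementation inside Object2DoubleOpenHashMap would of course also have to handle rehashing, null keys, and removal.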
It's not a big deal or anything; we're more than happy with the performance improvements we've already seen. But I thought I'd report it in case it's an idea you like. Here's a flamegraph: