simplifier: Account for triangle filtering when finalizing grid size #907
Conversation
Without this it's hard to understand which grid was chosen after the search.
Binary search for the target grid size is performed using an approximate calculation for triangle count which overestimates the result as it does not filter out duplicate triangles. This often creates cases where a slightly larger grid would still fit in the triangle budget.

This could be solved by using the post-filtered triangle count during the binary search, but that's very expensive. Most often, the excess is small enough that we can only bump the grid size by 1, so instead we perform a single probe pass to check if increasing the grid size is possible.

To do this we use the same code that we use for triangle filtering, but use vertex ids instead of vertex cells; it is still correct because there's a 1:1 relationship between these, and while the destination buffer becomes nonsensical briefly, it's only used as temporary storage during hashing.

To avoid reallocating the hash table and keep the flow simple, we only do this if the hash table is sufficiently large to perform the probing.
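For illustration, here is a minimal sketch of the search-then-probe flow in C++ (not the actual meshoptimizer code); `count_approx` and `count_filtered` are hypothetical callbacks standing in for the approximate (unfiltered, overestimating) and exact (post-filter) triangle counts at a given grid size:

```cpp
#include <cstddef>
#include <functional>

// Sketch only: pick the largest grid whose approximate triangle count fits the
// budget, then probe one size up with the exact, post-filter count.
// Assumes max_grid >= 1.
size_t chooseGridSize(size_t target_triangles, size_t max_grid,
                      const std::function<size_t(size_t)>& count_approx,
                      const std::function<size_t(size_t)>& count_filtered)
{
	size_t lo = 1, hi = max_grid, best = 1;

	while (lo <= hi)
	{
		size_t mid = lo + (hi - lo) / 2;

		if (count_approx(mid) <= target_triangles)
		{
			best = mid;   // fits even by the pessimistic (unfiltered) estimate
			lo = mid + 1; // try a larger grid
		}
		else
		{
			hi = mid - 1;
		}
	}

	// Probe: the approximate count ignores duplicate-triangle filtering, so grid
	// size best+1 may still fit once duplicates are removed; a single exact check
	// is enough because the overshoot is almost always within one grid step.
	if (best + 1 <= max_grid && count_filtered(best + 1) <= target_triangles)
		best++;

	return best;
}
```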
Instead of allocating enough storage for min_triangles, we allocate enough for the target triangle count; this reduces the chance of skipping the probing pass - before this change it would happen ~10% of the time, and now it happens ~1%. The extra cost of the adjustment is insignificant. Also fix tracing of the probe grid size.
Instead of trying to predict the triangle filter hash table size ahead of time, we can size it to accommodate both the current grid and a slightly larger grid for probing. This requires us to run countTriangles to get the worst-case bound, but we need to do this anyhow to do filtering safely. This makes sure that we can filter a probed grid even on small grid sizes, where significant gains from probing are most likely.
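A hedged sketch of that sizing policy, assuming hypothetical worst-case counts for the chosen grid and the probe grid (grid+1) obtained from a countTriangles-style pass; the exact rounding and load factor used by meshoptimizer may differ:

```cpp
#include <cstddef>

// Sketch only: size one filter hash table so it can accommodate whichever of the
// two candidate grids (current or probe) produces more triangles.
size_t filterHashSize(size_t triangles_grid, size_t triangles_probe)
{
	size_t worst = triangles_grid > triangles_probe ? triangles_grid : triangles_probe;

	// round up to a power of two with some slack to keep the load factor low
	size_t buckets = 1;
	while (buckets < worst + worst / 4)
		buckets *= 2;

	return buckets;
}
```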
We are mostly interested in probing as a way to improve quality; on larger grids the quality impact is typically much smaller, so for now we conservatively limit this to grids under 64 vertices, where grid+1 may yield more significant quality gains.
Need to rethink this after further testing…
We can't clobber the indices buffer as it's used later... so we need to allocate a scratch buffer, but it can be smaller than the original as we have an upper bound on the triangle count.
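For context, here is a minimal illustration of filtering into a separate scratch buffer instead of the caller's index buffer; it uses std::set for brevity, whereas the actual simplifier uses its own hash table, and the canonicalization below is only meant to show the idea:

```cpp
#include <array>
#include <cstddef>
#include <set>

// Sketch only: write deduplicated, non-degenerate triangles into `scratch`, which
// must hold at least worst_case_triangles * 3 indices; the input buffer is left
// untouched. Returns the number of triangles written.
size_t filterTrianglesToScratch(const unsigned int* indices, size_t index_count,
                                unsigned int* scratch)
{
	std::set<std::array<unsigned int, 3> > seen;
	size_t result = 0;

	for (size_t i = 0; i < index_count; i += 3)
	{
		unsigned int a = indices[i + 0], b = indices[i + 1], c = indices[i + 2];

		// skip triangles that became degenerate after snapping vertices to cells
		if (a == b || a == c || b == c)
			continue;

		// rotate so the smallest index comes first; rotated duplicates then
		// compare equal while winding is preserved
		while (!(a < b && a < c))
		{
			unsigned int t = a;
			a = b;
			b = c;
			c = t;
		}

		if (!seen.insert({a, b, c}).second)
			continue; // duplicate triangle, already emitted

		scratch[result * 3 + 0] = a;
		scratch[result * 3 + 1] = b;
		scratch[result * 3 + 2] = c;
		result++;
	}

	return result;
}
```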
In this case it does not change the output of the simplifier, but does run and does adjust the grid size up by 1.
Alright, after further testing I've decided it's better to leave this as is and close the linked issue as "by design/wontfix". The code in this PR technically does work, but I'm just not happy with this.

The sloppy simplifier will be stabilized in the "bare minimum functionality" mode; if this change universally improved appearance I would merge it, but the problem is that the code here is somewhat fragile, and it often introduces extra cost with no particular benefit (~half the time on my test set the probe ends up keeping the existing grid size); when it does change the grid size, the impact on quality is often inconclusive.

There are rare cases where this does visibly improve quality, but it's just not a very stable improvement, and it is too reliant on the particular triangle target. On the flip side, any approximations here end up underestimating the triangle count, so anything short of precise filtering ends up having a chance to overshoot the target count instead, which is also unsatisfying.

For example, in the process of development I tried to graph various metrics as a function of grid size on the test set, including the quality improvement as a result of this change; all the negative values are due to a larger grid being accidentally worse aligned wrt the target mesh vertices (while that's a weakness of the overall algorithm, for this adjustment it means the net effect is difficult to distinguish from noise).

The focus going forward is going to be on the default simplifier, so the sloppy simplifier will be left in more or less its current state, minus a couple of tweaks that I'll submit in a separate PR.
Fixes #133.
This contribution is sponsored by Valve.