Make predicate implementations generic with respect to any suitable floating point type#38
tinko92 wants to merge 2 commits into georust:main
Conversation
4f390b2 to baa91b8
Because genericity has been addressed, an addendum to address some comments raised in #3 and #9:
I do not think that this follows from either the full paper, which makes all proofs for generic mantissa lengths and with an assumed infinite exponent range, or the reference implementation, which has a macro switch between `float` and `double`.
I think this implementation is very useful for integers on any architecture on which hardware floats are available. Integer coordinates can overflow very easily: for the incircle predicate, integer overflows can be produced with coordinates in the ten thousands when computed in i64. I don't think it can safely cover the range of i16, and insphere is worse. This approach deals with the issue with high performance for anything up to i32. With f128 that could be extended to i64, for which I could not find any existing robust implementation in Rust. There are some minor differences for integers, e.g. slightly lower error bounds, and the computation always terminates at stage B, because stage B is always correct for integer input. But I don't think it is worthwhile to bother with that; the idea of this library is that stage B will almost never be reached.
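To illustrate the overflow claim, the sketch below (not the crate's code; the degree-4 product is a hypothetical worst-case term of the incircle determinant) shows that coordinate differences around 60,000 already overflow i64, while the same product is still computed exactly in f64:

```rust
fn main() {
    // Hypothetical worst-case term of the incircle determinant: a product of
    // four coordinate differences. With coordinates around +/-30_000 the
    // differences reach ~60_000, and 60_000^4 = 1.296e19 > i64::MAX ~ 9.2e18.
    let diff: i64 = 60_000;
    let product = diff
        .checked_mul(diff)
        .and_then(|p| p.checked_mul(diff))
        .and_then(|p| p.checked_mul(diff));
    // The degree-4 term already overflows i64.
    assert_eq!(product, None);

    // In f64 every intermediate product here happens to be exactly
    // representable (each mantissa fits in 53 bits), so the staged
    // floating-point predicate evaluates it without error.
    let f = 60_000f64;
    assert_eq!((f * f * f * f) as u64, 12_960_000_000_000_000_000u64);
}
```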
and
While there are certainly classes of cases where stage A passes with f64 but not with f32, f64 does not generally prevent falling into the slower paths even when its precision is sufficient. Even something as benign as orient2d of (1.0, 1.0), (2.0, 2.0), (3.0, 3.0) will fail the stage A test in f64, even though the result would obviously be correct and the coordinates could be expressed in f16 or i8. The philosophy of the approach, in my understanding, is based on the notion that actual or near-degeneracies are so rare for non-adversarial input (negligible for general position from a uniform float distribution, still rare for e.g. integer grids) that the amortized cost of the higher stages (~1-2 orders of magnitude more expensive than stage A) is negligible. This is shown in Tables 2, 4, 6-8 in section 4 of the original paper. It holds for both f32 and f64 in common scenarios: a uniformly chosen point can be uncertain for stage A only if it is within about 10^-8 (f32) or 10^-16 (f64) of a unit-length line, and both likelihoods are negligible; actually collinear points on an integer grid are equally likely for f32 and f64.
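The (1,1), (2,2), (3,3) example can be checked directly. The sketch below follows the structure of Shewchuk's stage-A ("fast") orient2d test (constant names follow the paper; this is an illustration, not the crate's exact code): the determinant is exactly 0, which never clears the positive error bound, so the implementation must fall through to the slower stages:

```rust
fn main() {
    // Stage-A orient2d in the style of Shewchuk's predicates.c (sketch).
    let eps = f64::EPSILON / 2.0; // Shewchuk's epsilon = 2^-53
    let ccwerrbound_a = (3.0 + 16.0 * eps) * eps;

    let (ax, ay, bx, by, cx, cy) = (1.0f64, 1.0, 2.0, 2.0, 3.0, 3.0);
    let detleft = (ax - cx) * (by - cy);
    let detright = (ay - cy) * (bx - cx);
    let det = detleft - detright;
    let detsum = detleft.abs() + detright.abs();
    let errbound = ccwerrbound_a * detsum;

    // det is exactly 0 here, but |det| < errbound, so stage A cannot
    // certify the sign even though the exact answer is "collinear".
    assert_eq!(det, 0.0);
    assert!(det.abs() < errbound);
}
```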
This is correct and included in this PR.
This is not the case. The sum of two expansions of lengths a and b has maximum length a+b, and the product at most 2ab, as long as the components are all of the same floating-point type, i.e. if the input is f32 and calculations are done in f32, then all the same expansion lengths apply. If the input is f32 and computation is done in f64, on the other hand, expansion lengths could theoretically be reduced in the later stages (might require compress, though? not sure), but I don't think this is worthwhile to pursue because it only concerns stack space and zeroelim already eliminates unneeded zero components at runtime. I agree, the best return type is the type used for the calculation, for the reasons given.
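The length bounds follow from the error-free transformations underlying expansion arithmetic. A minimal sketch of Knuth's TwoSum (the base case: two length-1 expansions sum to a length-2 expansion, and inductively lengths a and b sum to at most a+b):

```rust
// Error-free transformation: a + b == s + e exactly (Knuth's TwoSum).
// s is the rounded sum, e the rounding error it discarded.
fn two_sum(a: f64, b: f64) -> (f64, f64) {
    let s = a + b;
    let b_virtual = s - a;
    let a_virtual = s - b_virtual;
    (s, (b - b_virtual) + (a - a_virtual))
}

fn main() {
    let (s, e) = two_sum(1.0, 1e-20);
    assert_eq!(s, 1.0);   // fl(1.0 + 1e-20) rounds to 1.0 ...
    assert_eq!(e, 1e-20); // ... and e recovers the lost part exactly.
    // Two length-1 expansions produced a length-2 expansion; summing
    // length-a and length-b expansions yields at most a+b components,
    // independent of whether the type is f32, f64 or f128.
}
```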
```rust
53 => 94_906_264,
64 => 4_294_967_294,
113 => 101_904_826_760_412_360,
// -(6 - 22) = 16, fall-back to the generally applicable bound by Shewchuk
```
Why is this branch commented out?
Apologies for the long silence; I will update the PR soon with clearer code and better reviewable references. This is not a commented-out branch but a comment explaining the default branch. There is, however, a sign issue in this block that makes all the bounds more pessimistic (no robustness impact, probably no practical performance impact, but it should be fixed), so thanks for highlighting it.
This PR proposes making the implementation of the floating-point predicates generic with respect to the floating-point type.
Motivation
Interest in an f128 implementation was shown in #29 (comment). The current f64 implementation cannot solve predicates robustly for f128 input; with this PR, f128 is supported when enabled (since the stable toolchain complains about it and I don't know how to make the stable toolchain simply ignore it in Rust, it is currently commented out). While f64 can be used to evaluate predicates robustly for f32 input, there are platforms on which f32 ops are cheaper than f64, so having computation in f32 as an option could also be an improvement. It would work for f16 too, but that may be a dubious choice for geometry due to the low exponent range, so no test cases were included for that type. It will also work for any other float type that satisfies the requirements and implements the newly added trait in this PR, which takes a single line with the given macro.
Changes
Adds an Ieee754Float trait with an implementation macro to generate the necessary constants for any float type that has ::EPSILON (for error bound computations), ::MANTISSA_DIGITS (for splitter computations), and the necessary arithmetic ops. Adds *Generic versions of Coord/Coord32 and _generic versions of orient2d, orient3d, incircle and insphere that require the new trait. Changes all other methods (which I consider implementation details that should generally not be called directly, imho) to generic code that requires the new trait. Adds orient2d tests for f32 and f128 (the f128 tests are commented out to avoid compiler errors with the stable toolchain).
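To make the trait/macro idea concrete, here is a hedged sketch of the kind of trait the PR describes. The trait and constant names (Ieee754Float, EPSILON, MANTISSA_DIGITS) come from the description above; the macro body, the `splitter` method, and the macro name are illustrative assumptions, not the PR's exact code:

```rust
// Sketch: derive Shewchuk-style constants from the type's own constants.
trait Ieee754Float: Copy {
    const EPSILON: Self;        // distance from 1.0 to the next float up
    const MANTISSA_DIGITS: u32; // p, including the implicit leading bit
    fn splitter() -> Self;      // 2^ceil(p/2) + 1, used by Dekker's split
}

macro_rules! impl_ieee754_float {
    ($t:ty) => {
        impl Ieee754Float for $t {
            const EPSILON: Self = <$t>::EPSILON;
            const MANTISSA_DIGITS: u32 = <$t>::MANTISSA_DIGITS;
            fn splitter() -> Self {
                // 2^ceil(p/2) + 1 is exactly representable in the type itself.
                let half_p = (Self::MANTISSA_DIGITS + 1) / 2;
                (2.0 as $t).powi(half_p as i32) + 1.0
            }
        }
    };
}

impl_ieee754_float!(f32);
impl_ieee754_float!(f64);

fn main() {
    // f64: p = 53, splitter = 2^27 + 1
    assert_eq!(f64::splitter(), 134_217_729.0);
    // f32: p = 24, splitter = 2^12 + 1
    assert_eq!(f32::splitter(), 4097.0f32);
}
```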
Design Choices
The function signatures of orient2d, orient3d, incircle and insphere, which should be the main API, remain unchanged, so the default is converting to f64 and computing in that type. The actual computation is delegated to the _generic versions of those functions, which can also be called with other types that implement the new trait. This choice is proposed because f64 is a good default: it will work for f64, f32, f16, u8, ..., i8, up to i32, is good (though not optimal) for those integer types, and mitigates the potential range limitations of f32 and especially f16.
For public functions like *adapt or *exact, this PR changes the arguments from Into to requiring the new trait. I can change that if it is undesirable.
Background
The original C implementation is generic by allowing `REAL` to be defined as `float` or `double`. The current Rust implementation is for `f64` only. The underlying theory of the algorithms used requires at least 6 mantissa digits and round-to-nearest with tie-break to even, the default rounding mode specified in IEEE 754. This is satisfied by f16, f32, f64, f128 and 80-bit extended precision on x86 (there is no f80 in Rust, though). It implicitly requires that the values are such that no overflows or denormalizations occur, which requires, for a d-th degree predicate, that the absolute values of the coordinates are zero or roughly within the d-th root of the minimum and maximum positive values of the floating-point type. d is 2, 4, 3, and 5 for orient2d, incircle, orient3d and insphere respectively, so this allows values up to ~10^19 for f32 for orient2d, up to ~10^7 for insphere, etc., which is reasonable for f32 and upwards.
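The quoted magnitude limits can be reproduced with a one-line estimate (a rough sketch of the rule stated above, using only the type's MAX constant and ignoring the lower, denormalization-side bound):

```rust
fn main() {
    // For a degree-d predicate, coordinates should stay roughly within the
    // d-th root of the type's maximum finite value to avoid overflow.
    let safe_max = |max: f64, d: f64| max.powf(1.0 / d);

    // f32, orient2d (d = 2): about 1.8e19, i.e. "~10^19" as stated above.
    assert!(safe_max(f32::MAX as f64, 2.0) > 1.0e19);

    // f32, insphere (d = 5): a few times 10^7, i.e. "up to ~10^7".
    let s = safe_max(f32::MAX as f64, 5.0);
    assert!(s > 1.0e7 && s < 1.0e8);
}
```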