[imageio] Fix interpreting TIFF images as LDR/HDR #18541
base: master
Conversation
LDR/HDR is not determined by the sample format (floating point or integer), but by the bit depth. Actually, we use this flag to decide whether image data should be treated as linear or nonlinear in the absence of color space metadata.
-  // flag the image buffer properly depending on sample format
-  if(t.sampleformat == SAMPLEFORMAT_IEEEFP)
+  // Flag the image properly depending on bit depth
+  if(t.bpp > 8)
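In other words, the loader now keys the HDR/LDR decision on bits per sample rather than on SAMPLEFORMAT_IEEEFP. A minimal self-contained sketch of that decision follows; only `t.bpp` and the `> 8` test come from the diff, while the struct, flag names, and surrounding code are made up for illustration and are not verbatim darktable code:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the loader's image flags and state. */
enum { IMAGE_FLAG_LDR = 1 << 0, IMAGE_FLAG_HDR = 1 << 1 };

typedef struct
{
  uint16_t bpp;    /* bits per sample, as read from the TIFF tags */
  uint32_t flags;  /* flags attached to the decoded image */
} tiff_state_t;

static void flag_ldr_hdr(tiff_state_t *t)
{
  /* After the change: decide on bit depth alone, not on whether
     the samples are IEEE float. */
  if(t->bpp > 8)
    t->flags |= IMAGE_FLAG_HDR;  /* no profile -> assume linear */
  else
    t->flags |= IMAGE_FLAG_LDR;  /* no profile -> assume nonlinear */
}

int main(void)
{
  tiff_state_t t16 = { .bpp = 16 };
  tiff_state_t t8  = { .bpp = 8 };
  flag_ldr_hdr(&t16);
  flag_ldr_hdr(&t8);
  printf("16 bpp -> %s, 8 bpp -> %s\n",
         (t16.flags & IMAGE_FLAG_HDR) ? "HDR" : "LDR",
         (t8.flags & IMAGE_FLAG_HDR) ? "HDR" : "LDR");
  return 0;
}
```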
I think this is an arbitrary value; I mean there is no spec for what is HDR or not (at least I did not find one). I would have been tempted to use > 16, as 16 bits is probably still a small value for HDR, but again I'm not sure, just a feeling. What do you think?
Agreed, there is no difference in how arbitrary the >8 or the floating-point choice is. In fact, it is more likely that people save 16-bit Adobe RGB for "high fidelity" LDR archiving... IMHO floating point is more likely to be linear (and unclipped)... This is the assumption we have for PFM and float EXRs, so at least it is consistent...
LDR/HDR flag is not determined by the sample format (floating point or integer), but by the bit depth.
This is not entirely true. It is determined by the sample format, and/or the bit depth if not float, but most importantly by the transfer curve in combination with high bit depth or float.
In the same way, our assumption that all 16-bit PNGs are HDR is also not entirely correct...
So, while it is true that we use <=8bpp as the criterion in most loaders to be sure an image is LDR, I still think there are exceptions like TIFF and PNG, since those formats have been around for decades before the HLG and PQ transfer curves showed up, whereas AVIF/HEIC/JXL came after them (so >8bpp there is more likely to mean HDR, but still not exclusively...)
Long term, this HDR flag probably needs to move out of the specific image loaders, with the input ICC/CICP profile plus bit depth analyzed for all supported formats in one place, at a higher level of abstraction...
Or get rid of this flag as dt doesn't really have a dedicated HDR workflow anyway...
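To illustrate the centralized-analysis idea, here is a minimal sketch of what such a check might look like; the types, names, and the precedence of the tests are all assumptions made for illustration, not existing darktable code:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical inputs a centralized check could look at. */
typedef enum { TRC_UNKNOWN, TRC_LINEAR, TRC_SRGB_LIKE, TRC_PQ, TRC_HLG } transfer_curve_t;

typedef struct
{
  int bits_per_sample;   /* from the container (TIFF/PNG/AVIF/...) */
  bool is_float;         /* IEEE float samples (PFM, float TIFF, EXR) */
  transfer_curve_t trc;  /* from an embedded ICC profile or CICP tags, if any */
} pixel_info_t;

/* Decide once, for all formats, whether to treat the data as HDR/linear. */
static bool treat_as_hdr(const pixel_info_t *p)
{
  /* Explicit HDR transfer curves win regardless of bit depth. */
  if(p->trc == TRC_PQ || p->trc == TRC_HLG) return true;
  /* Float samples are assumed linear and possibly unclipped. */
  if(p->is_float) return true;
  /* A nonlinear profile keeps high-bit-depth integer data LDR,
     e.g. a 16-bit Adobe RGB archive. */
  if(p->trc == TRC_SRGB_LIKE) return false;
  /* No usable metadata: fall back to the bit-depth heuristic. */
  return p->bits_per_sample > 8;
}

int main(void)
{
  const pixel_info_t adobe16 = { 16, false, TRC_SRGB_LIKE };
  const pixel_info_t tiff16  = { 16, false, TRC_UNKNOWN };
  printf("16-bit Adobe RGB: %s, 16-bit untagged TIFF: %s\n",
         treat_as_hdr(&adobe16) ? "HDR" : "LDR",
         treat_as_hdr(&tiff16) ? "HDR" : "LDR");
  return 0;
}
```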