Description
Currently, the image input of NamiColor is an RGB image from a camera or scanner. The input color space transform converts this RGB image into a standard RGB color space, or more specifically, the Rec.2020 color space, and then channel alignment and other transforms are performed on this image.
But here's the question: why Rec.2020? Why not ACES AP0, DCI-P3, or any other RGB space? Or, one step further, why a standard RGB space at all?
Since NamiColor takes its initial idea from the Cineon digital intermediate system, I believe we should take a look at the principle of the Cineon system.
The scanning result of a Cineon scanner is what is called "printing density", or PD for short. This represents the target density when we print the negative onto a standard intermediate stock (Kodak 5244, for example) using a contact printer. To accomplish this goal, Kodak designed the spectral sensitivity of the Cineon scanner to approximate the printing density response.
Of course, when using NamiColor, we are not targeting a full simulation of a contact print workflow. But we can still benefit from the idea of scanning density, rather than a simple "RGB dye shot" that is calibrated for a standard human observer looking at a back-illuminated camera negative.
So, a better input transform for a negative scan would be, from my point of view, an approximation of a standard Status M densitometer, or a SMPTE RP 180 densitometer.
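For reference, the density such a densitometer reports in each channel is (roughly speaking) the negative base-10 logarithm of the film's spectral transmittance integrated against that channel's standard responsivity:

$$
D_X = -\log_{10}\frac{\int S_X(\lambda)\,T(\lambda)\,\mathrm{d}\lambda}{\int S_X(\lambda)\,\mathrm{d}\lambda},
\qquad X \in \{R, G, B\}
$$

where $T(\lambda)$ is the spectral transmittance of the negative and $S_X(\lambda)$ is the Status M (or RP 180) responsivity for channel $X$. The calibration described below tries to make the camera-plus-backlight system land on these values as closely as a 3x3 matrix allows.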
To accomplish this, we could do a calibration (if we are scanning with a camera setup) by measuring the spectral power distribution of the backlight and the spectral sensitivities of the digital camera. From these we can calculate a transform matrix that converts the native camera RGB image into a Status M / RP 180 density image, using a least-squares optimization.
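Here is a minimal sketch of that fit, assuming all spectral data has already been resampled onto a common wavelength grid and that we have a training set of film patch transmittances. Every name in it (`fit_density_matrix`, `backlight_spd`, and so on) is a hypothetical placeholder, not existing NamiColor code; this variant fits in linear transmittance space and takes the log afterwards:

```python
import numpy as np

def fit_density_matrix(backlight_spd, camera_sens, status_m_resp,
                       patch_transmittances):
    """Fit a 3x3 matrix M so that M @ camera_rgb approximates the linear
    Status M channel responses over a set of training patches.

    backlight_spd:        (W,)   measured SPD of the scanning backlight
    camera_sens:          (3, W) camera R/G/B spectral sensitivities
    status_m_resp:        (3, W) Status M (or RP 180) responsivities
    patch_transmittances: (N, W) spectral transmittances of N film patches
    """
    # Camera response to each patch under the measured backlight: (3, N)
    effective_sens = camera_sens * backlight_spd
    C = effective_sens @ patch_transmittances.T

    # Target linear Status M responses for the same patches: (3, N)
    D = status_m_resp @ patch_transmittances.T

    # Least-squares solve for M in M @ C ~= D (transposed for lstsq).
    M_T, *_ = np.linalg.lstsq(C.T, D.T, rcond=None)
    return M_T.T  # (3, 3)

# Status M density is then the negative log of the matrixed linear values:
#   density = -np.log10(np.clip(M @ camera_rgb, 1e-6, None))
```

In practice we would probably want to weight the fit toward densities typical of camera negatives, but the basic shape of the calibration is just this.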
And for off-the-shelf scanners, we could instead predict the scanner's spectral sensitivities from an IT8 target strip, and then run the same calibration process; a rough sketch of that prediction follows.
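Recovering a sensitivity curve from a few dozen patches is an underdetermined inverse problem (far fewer patches than wavelength samples), so some regularization is needed. The sketch below uses a simple smoothness prior and entirely hypothetical names, assuming the spectral transmittances of the IT8 patches are known:

```python
import numpy as np

def estimate_scanner_sensitivity(it8_spectra, scanner_rgb, smoothness=1e-3):
    """Estimate per-channel scanner spectral sensitivities from an IT8 strip.

    it8_spectra: (N, W) known spectral transmittances of the IT8 patches
    scanner_rgb: (N, 3) linear raw scanner responses to those patches
    """
    n_wl = it8_spectra.shape[1]

    # Second-difference operator: penalizes jagged sensitivity curves.
    D2 = (-2 * np.eye(n_wl) + np.eye(n_wl, k=1) + np.eye(n_wl, k=-1))[1:-1]

    # Tikhonov-regularized least squares, stacked into one system:
    #   minimize ||it8_spectra @ s - rgb||^2 + smoothness * ||D2 @ s||^2
    A = np.vstack([it8_spectra, np.sqrt(smoothness) * D2])
    sens = np.zeros((n_wl, 3))
    for ch in range(3):
        b = np.concatenate([scanner_rgb[:, ch], np.zeros(D2.shape[0])])
        s, *_ = np.linalg.lstsq(A, b, rcond=None)
        # Crude non-negativity projection; a proper NNLS solve would be better.
        sens[:, ch] = np.clip(s, 0.0, None)
    return sens  # (W, 3), one estimated sensitivity curve per channel
```

Once we have these curves, they plug into the same matrix fit as the camera case, with the scanner's own illuminant folded in.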
I believe this can produce a much more pleasing result than the current input color space transform method. And the implementation of the DCTL itself requires little change; we are just adding a few more 3x3 matrices.
Maybe we could try this out on a sample setup and see what results we get.