At the moment we don't automatically test rendered PDFs at all. We have a dummy backend that outputs, in plain text, debug information about what the PDF backend is being asked to render, with relevant details. That's what we diff as part of our end-to-end regression tests. Usually the debug information is far more informative than visually comparing PDFs because it gives a strong clue about what exactly changed. Sometimes, however, a visual check is useful.

As noted, we're using Busted for unit tests of smaller chunks of the Lua API. Honestly we need more of these and fewer full regression tests, as unit tests are more informative and easier to maintain. Quite a few things we have as regression tests could probably be reduced to unit tests.

We have three holes in the testing I can think of:
Hypothetically, are there more kinds of tests we can contribute?
Existing Tests
The existing files under tests/ take a "golden file" approach, which works great. I see there is an "update-actuals" make recipe as well, so diffs can be checked in if they're valid. At least, that's what it looks like from glancing at the automake script 🤔
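For concreteness, the golden-file loop is roughly this shape; a minimal sketch in plain Lua, assuming an `.expected` file checked into the repo and an `.actual` file produced by the build (the real harness drives this through make, and these paths are made up):

```lua
-- Read a whole file; fails loudly if the path is missing.
local function read_all(path)
  local f = assert(io.open(path, "r"))
  local content = f:read("*a")
  f:close()
  return content
end

-- Compare freshly generated output against the checked-in golden file.
local expected = read_all("tests/feature.expected")
local actual = read_all("tests/feature.actual")
assert(expected == actual, "output drifted from the golden file")
```

An "update-actuals"-style recipe would then just copy the actual output over the expectation once a human has vetted the diff.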
There are also Lua tests, which can presumably verify that you've constructed an expected object structure. Unit test stuff. Essential.
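Something like the following Busted spec, say; the module and the parsed table shape here are hypothetical, not SILE's actual API:

```lua
-- Hypothetical module under test; not a real SILE module.
local measurement = require("hypothetical.measurement")

describe("measurement parsing", function ()
  it("parses a point value into the expected structure", function ()
    local m = measurement.parse("12pt")
    -- assert.are.same performs a deep comparison of table structure.
    assert.are.same({ amount = 12, unit = "pt" }, m)
  end)
end)
```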
Possible Additional Tests
What else is possible? Given disk space and compute, could you use something like ImageMagick to convert rendered PDFs into images and then do diffs on actual image "golden files"?
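As a rough sketch of what that could look like, shelling out from Lua to poppler's pdftoppm and ImageMagick's compare; file names are illustrative, and the fuzz threshold would need tuning to absorb cross-platform antialiasing noise:

```lua
-- Run a shell command and fail if it exits non-zero
-- (Lua 5.2+ semantics for os.execute).
local function run(cmd)
  assert(os.execute(cmd), "command failed: " .. cmd)
end

-- Rasterize both PDFs at the same resolution so pixels line up.
-- pdftoppm names single-page output like "actual-1.png".
run("pdftoppm -png -r 150 tests/feature.pdf actual")
run("pdftoppm -png -r 150 tests/feature.expected.pdf expected")

-- `compare -metric AE` counts differing pixels and exits non-zero when
-- the images differ, so a visual regression fails the assertion; -fuzz
-- tolerates small per-pixel differences from antialiasing.
run("compare -metric AE -fuzz 1% actual-1.png expected-1.png diff.png")
```

The diff.png written by compare highlights where the pages diverge, which would help triage a failure.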
Are there other kinds of structural tests that can bridge the gap (or test some middle ground) between pure Lua and rendered PDF?