addpkg(main/farbfeld): Conversion tools for farbfeld image format #24752


Open: wants to merge 3 commits into master

Conversation

SamB (Contributor) commented May 17, 2025

The farbfeld format is designed to be trivial to parse, so that custom tools can easily read it without needing a special library.

Unlike netpbm's PNM family, which spans seven formats (magic numbers P1 through P7) with tricky-to-parse textual headers, farbfeld is a single format with a binary header, and always stores a 16-bit-per-channel RGBA (non-premultiplied) image, in no specified color space, with 32-bit unsigned width and height.

So it's trivial to parse, and there's no need to worry about what readers will do with color management information, because there is no such information.
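To illustrate just how little code a reader needs, here is a small POSIX-shell sketch that parses the 16-byte farbfeld header with nothing but od(1) and head(1). The `ff_info` function name and the generated sample file are made up for the demo; this is not part of the farbfeld tools themselves.

```sh
# Parse a farbfeld header: 8-byte magic "farbfeld", then 32-bit
# big-endian unsigned width and height.
ff_info() {
    magic=$(head -c 8 "$1")
    [ "$magic" = "farbfeld" ] || { echo "not a farbfeld file" >&2; return 1; }
    # Bytes 8-15: width then height, each a 32-bit big-endian unsigned int.
    set -- $(od -An -tu1 -j 8 -N 8 "$1")
    echo "$(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))x$(( ($5 << 24) + ($6 << 16) + ($7 << 8) + $8 ))"
}

# Demo: build a minimal 2x3 header in a temp file and parse it back.
sample=$(mktemp)
printf 'farbfeld\000\000\000\002\000\000\000\003' > "$sample"
ff_info "$sample"   # prints 2x3
rm -f "$sample"
```

Pixel data (16-bit big-endian RGBA quads) follows immediately after the header, so a full decoder is only a read loop on top of this.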

@SamB SamB requested review from Grimler91 and TomJo2000 as code owners May 17, 2025 21:56
TERMUX_PKG_SRCURL=https://dl.suckless.org/farbfeld/farbfeld-${TERMUX_PKG_VERSION}.tar.gz
TERMUX_PKG_SHA256=c7df5921edd121ca5d5b1cf6fb01e430aff9b31242262e4f690d3af72ccbe72a

TERMUX_PKG_DEPENDS=libjpeg-turbo,libpng
Contributor

While what you typed seems to work here for pulling in dependencies, it is more conventional in termux-packages to format TERMUX_PKG_DEPENDS like this:

TERMUX_PKG_DEPENDS="libjpeg-turbo, libpng"

termux_step_configure() {
# Need to pass these on command line to override settings in config.mk,
# but they aren't available before configure
TERMUX_PKG_EXTRA_MAKE_ARGS="PREFIX=${TERMUX_PREFIX} CC=${CC} LD=${LD}"
robertkirkman (Contributor)

fatal error: 'jpeglib.h' file not found
fatal error: 'png.h' file not found

Hi, I will try to help.

These errors happened because this termux_step_configure() isn't yet ready for cross-compilation. It worked on your device when you tested because, from a certain perspective, Termux on-device compilation is Android non-cross-compilation, so certain cross-compilation-related errors cannot be reproduced there.

Here is a cross-compilation-compatible termux_step_configure() for this software; you could use it to resolve these errors (and it would still work when building on-device as well):

termux_step_configure() {
	# Replace config.mk
	cat <<-EOF > config.mk
	VERSION    = $TERMUX_PKG_VERSION
	PREFIX     = $PREFIX
	MANPREFIX  = $PREFIX/share/man
	CC         = $CC
	CPPFLAGS   = $CPPFLAGS -D_DEFAULT_SOURCE
	CFLAGS     = $CFLAGS -std=c99 -pedantic -Wall -Wextra -Os
	LDFLAGS    = $LDFLAGS -s
	PNG-LDLIBS = -lpng
	JPG-LDLIBS = -ljpeg
	EOF
}
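One subtlety in the snippet above: the dash in `cat <<-EOF` makes the shell strip leading tab characters (not spaces) from the heredoc body, which is why the indented lines come out flush-left in the generated config.mk. A minimal generic-shell demo of that behavior (the VERSION value here is made up):

```sh
# The `-` in `<<-EOF` strips leading tabs from each body line, so an
# indented heredoc still writes a flush-left file. The body lines
# below begin with a literal tab character.
VERSION=2.1.0
cat <<-EOF
	VERSION = $VERSION
	PREFIX  = /usr/local
EOF
```

If the body were indented with spaces instead of tabs, the indentation would be preserved in the output, which would break a Makefile include like config.mk.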

SamB (Contributor, Author)

Ah, yes, naturally the Android NDK tools don't have Termux directories in their default search paths. That was silly of me.

And I guess we don't really have to worry much about this getting stale: farbfeld is so simple there may never be a reason for another upstream release; it's far more likely to need updating because termux-packages build-system changes cause an FTBFS, because of an ABI break (intended or otherwise) in libpng or libjpeg, or because something funny about the binaries comes to light with some future version of Bionic.

(Though I suppose it's always possible that they will add more utilities to the package, or rewrite the specification in pure ASCII for better font compatibility.)

Contributor

Yes, when the next release comes this will probably start failing; when that happens, hopefully the comment is clear enough to tell future maintainers that they need to look at the newer version's config.mk and rebase the modified config.mk on it.

SamB added 2 commits May 18, 2025 12:52
This only worked on-device; how silly of me.

Solution suggested by @robertkirkman.
Since we're generating our own config.mk, this is easy to fix.
SamB (Contributor, Author) commented May 18, 2025

Okay, so, I was watching the x86_64 build do the "Build Packages" step and it took at least five and a half minutes to pull the ghcr.io/termux/package-builder:latest image, compared to 20 seconds for the rest of the build. I wonder if there's a way to avoid doing this step from scratch for each package architecture?

robertkirkman (Contributor)

It's like that because of an organizational decision to use strategy: matrix: target_arch: to run four independent workflow jobs, one for each target architecture.

strategy:
  matrix:
    target_arch: [aarch64, arm, i686, x86_64]

That is what allows a formally separate GitHub Actions log to be generated for each architecture, which can be visited by clicking the link associated with it in the box:

[image: screenshot of the per-architecture build-status box]

This helps give an at-a-glance summary of per-architecture build status for other PRs: larger, more heavily patched packages may sometimes have errors that need to be troubleshot when building for certain architectures.

There is, in fact, a different way to build, which could use a single Docker pull and a single Docker container to build the package for all four architectures. It's this command:

scripts/run-docker.sh ./build-package.sh -I -f -a all farbfeld

However, I don't know of any way to do that while keeping the build logs and the success/failure overview separate per architecture in the GitHub Actions GUI. I think switching to that approach would combine the output into a single log for all four architectures, one after another, which could hurt the readability and clarity of the logs in the GitHub Actions web UI.

SamB (Contributor, Author) commented May 19, 2025

Yeah, I was thinking more that there ought to be a way to tell Actions that you want to use that Docker image so it would already be cached before the action starts; unfortunately, the documentation doesn't describe such a thing, though there are indications that using a docker action might speed things up.

Oh well.

twaik (Member) commented May 19, 2025

> Yeah, I was thinking more that there ought to be a way to tell Actions that you want to use that Docker image

It uses the same Ubuntu host + docker image under the hood, but in this case we cannot remove packages on the host, so it is not that applicable to our case.

robertkirkman (Contributor) commented May 19, 2025

@SamB

> there are indications that using a docker action might speed things up.

Unfortunately, while that might be true, the reason we don't use runs: or container: directives in the Build workflow jobs is partly that we have a conditional setup: almost all packages are built inside docker containers (to save time), but packages in big-pkgs.list are built outside of docker containers (to save space). This is detected automatically while the workflow runs by this condition block, which decides whether each PR should be built inside or outside of the docker container.

docker='true'
if [[ "${#packages[@]}" -gt 0 ]]; then
  for pkg in "${packages[@]}"; do
    if grep -qFx "$pkg" ./scripts/big-pkgs.list; then
      docker='false'
      break
    fi
  done
fi
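The grep flags do the matching work in that condition: -F treats the pattern as a fixed string rather than a regex, -x requires the whole line to match, and -q suppresses output so only the exit status matters. A quick standalone sketch (the package names in the list are made up, not the real contents of big-pkgs.list):

```sh
# -F: fixed string (no regex), -x: whole-line match, -q: quiet,
# exit status only -- so "rust" won't match "rust-analyzer", etc.
list=$(mktemp)
printf '%s\n' rust electron > "$list"   # hypothetical big-pkgs.list
for pkg in rust farbfeld; do
    if grep -qFx "$pkg" "$list"; then
        echo "$pkg: built outside docker"
    else
        echo "$pkg: built inside docker"
    fi
done
rm -f "$list"
```

Run as written, this reports rust as built outside docker and farbfeld inside, matching how the condition block flips the docker variable.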

It might sound counterintuitive, but when we measured everything while developing this,

  • the docker='true' codepath marked above really does run faster than the docker='false' codepath (because our docker container comes with our setup-ubuntu.sh preinstalled into it), so it is used whenever possible
  • the docker='false' codepath really does save more space in the GitHub Actions runner (because it completely uninstalls Docker and some other GitHub Actions preinstalled packages that are not used during this codepath, in our free-space.sh), but it takes longer because it runs setup-ubuntu.sh plus free-space.sh every time.

If you can think of a way to optimize this more (maybe by splitting some of these conditional checks in the large Build job script out into smaller, subdivided jobs that could take advantage of the runs: or container: directives of GitHub Actions to try to "cache" the docker image?) then it would be very interesting!
