Bazzite with some bluesabre-flavored tweaks.
The name is a work-in-progress.
Based on ublue-os/image-template.
This image is built atop the Bazzite Nvidia image (ghcr.io/ublue-os/bazzite-nvidia:stable). It uses the proprietary Nvidia drivers to support the Nvidia GTX 1080 graphics card, among many others.
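In the Containerfile, that relationship boils down to the base-image line (a sketch; the template may pin a different tag):

```dockerfile
# All customizations layer on top of the Bazzite Nvidia base
FROM ghcr.io/ublue-os/bazzite-nvidia:stable
```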
| Package | File | Credits |
|---|---|---|
| 1Password (`1password`) with Vivaldi support | `install-1password.sh` | benhoman/ublue |
| Inter Font Family (`rsms-inter-fonts`) | `install-fonts.sh` | N/A |
| Optipng (`optipng`) | `install-utilities.sh` | N/A |
| Microsoft TrueType Core Fonts (`msttcore-fonts`) | `install-fonts.sh` | kohega/bazzite-khg |
| Ubuntu Font Family (`ubuntu-family-fonts`) | `install-fonts.sh` | rpassmore/my-ublue-os |
| Virtual Machine Manager (`virt-manager`) | `install-virt-manager.sh` | butterflysky/butterfly-ublue |
| Vivaldi Browser (`vivaldi-stable`) | `install-vivaldi.sh` | N/A |
| Visual Studio Code (`code`) | `install-vscode.sh` | ublue-os/bluefin |
| Yakuake (`yakuake`) | `install-utilities.sh` | N/A |
From your bootc system, run the following command:
```bash
sudo bootc switch ghcr.io/bluesabre/bazzite-blue
```

This should queue your image for the next reboot, which you can do immediately after the command finishes.
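Before rebooting, you can confirm that the switch is staged with `bootc status`, which lists the booted and queued deployments:

```bash
# The staged deployment should show the new image reference
sudo bootc status
```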
During installation, the requisite 1Password groups may not be created. In this case, create them yourself:

```bash
GID_ONEPASSWORD="1790"
GID_ONEPASSWORDCLI="1791"

groupadd -g ${GID_ONEPASSWORD} onepassword
groupadd -g ${GID_ONEPASSWORDCLI} onepassword-cli
```

The Containerfile defines the operations used to customize the selected image. This file is the entrypoint for your image build, and works exactly like a regular podman Containerfile. For reference, please see the Podman documentation.
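As a minimal sketch of how this repository's pieces fit together (the mount options here are illustrative; check the actual Containerfile for the exact flags):

```dockerfile
FROM ghcr.io/ublue-os/bazzite-nvidia:stable

# Bind-mount the build scripts into the build container, run them,
# then commit the result as a bootable container layer
RUN --mount=type=bind,source=build_files,target=/ctx \
    /ctx/build.sh && \
    ostree container commit
```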
The build.sh file is called from your Containerfile. It is the best place to install new packages or make any other customization to your system. There are customization examples contained within it for your perusal.
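As a minimal sketch, a package install in build.sh might look like this (assuming `dnf5`, the package manager on current Bazzite images; the package itself is just an example):

```bash
#!/usr/bin/env bash

set -ouex pipefail

# Install an extra package from the Fedora repositories (example package)
dnf5 install -y tmux
```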
The build.yml GitHub Actions workflow creates your custom OCI image and publishes it to the GitHub Container Registry (GHCR). By default, the image name will match the GitHub repository name. There are several environment variables at the start of the workflow that you may want to change.
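Those variables sit in an `env` block near the top of the workflow, roughly like the following (the names shown are illustrative; check build.yml for the exact set):

```yaml
env:
  IMAGE_NAME: bazzite-blue          # defaults to the repository name
  DEFAULT_TAG: latest               # tag applied to pushed images
  IMAGE_REGISTRY: ghcr.io/bluesabre # where the image is published
```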
This template provides an out-of-the-box workflow for creating disk images (ISO, QCOW2, raw) from your custom OCI image, which can be used to install directly onto your machines.
This template provides a way to upload the disk images generated by the workflow to an S3 bucket. The disk images are also available as an artifact from the job, if you wish to use an alternate provider. To upload to S3, we use rclone, which supports many S3 providers.
The build-disk.yml GitHub Actions workflow creates a disk image from your OCI image using bootc-image-builder. To use this workflow, complete the following steps:
- Modify `disk_config/iso.toml` to point to your custom container image before generating an ISO image (see the sketch after this list).
- If you changed your image name from the default in `build.yml`, edit the `IMAGE_REGISTRY`, `IMAGE_NAME`, and `DEFAULT_TAG` environment variables in `build-disk.yml` with the correct values. If you did not make changes, skip this step.
- Finally, if you want to upload your disk images to S3, add your S3 configuration to the repository's Actions secrets. These can be found in your repository settings, under `Secrets and Variables` -> `Actions`. You will need to add the following secrets:
  - `S3_PROVIDER` - Must match one of the values from the supported list.
  - `S3_BUCKET_NAME` - Your unique bucket name.
  - `S3_ACCESS_KEY_ID` - It is recommended that you make a separate key just for this workflow.
  - `S3_SECRET_ACCESS_KEY` - See above.
  - `S3_REGION` - The region your bucket lives in. If you do not know, set this value to `auto`.
  - `S3_ENDPOINT` - This value will be specific to the bucket as well.
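A minimal sketch of the relevant `disk_config/iso.toml` change (the kickstart approach shown is one common bootc-image-builder pattern; the template's actual file may structure this differently):

```toml
[customizations.installer.kickstart]
contents = """
%post
# Point the installed system at your custom image
bootc switch --mutate-in-place --transport registry ghcr.io/bluesabre/bazzite-blue:latest
%end
"""
```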
Once the workflow completes, you'll find the disk images either in your S3 bucket or under Artifacts in the workflow run summary.
This template comes with the necessary tooling to index your image on artifacthub.io. Use the artifacthub-repo.yml file at the root to verify yourself as the publisher. This is important for a few reasons:
- The value of Artifact Hub is that it's one place for people to index their custom images, and since we depend on each other to learn, it helps grow the community.
- You get to see your pet project listed with the other cool projects in Cloud Native.
- Since the site puts your README front and center, it's a good way to learn how to write a good README, pick up some marketing, find your audience, etc.
The Justfile contains various commands and configurations for building and managing container images and virtual machine images using Podman and other utilities.
To use it, you must have `just` installed, either from your package manager or manually. It is available by default on all Universal Blue images.
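If you're new to `just`, you can enumerate the available recipes before running anything (a standard `just` feature, not specific to this template):

```bash
just --list
```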
The Justfile exposes a few default variables:

- `image_name`: The name of the image (default: `"image-template"`).
- `default_tag`: The default tag for the image (default: `"latest"`).
- `bib_image`: The Bootc Image Builder (BIB) image (default: `"quay.io/centos-bootc/bootc-image-builder:latest"`).
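In `just` syntax, these defaults are declared at the top of the Justfile roughly like so (a sketch; the actual file may read them from the environment):

```just
image_name := "image-template"
default_tag := "latest"
bib_image := "quay.io/centos-bootc/bootc-image-builder:latest"
```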
Builds a container image using Podman:
```bash
just build $target_image $tag
```

Arguments:

- `$target_image`: The tag you want to apply to the image (default: `$image_name`).
- `$tag`: The tag for the image (default: `$default_tag`).
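For example, to build this image locally under its own name (an illustrative invocation):

```bash
just build bazzite-blue latest
```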
The commands below all build QCOW2 images. To produce or use a different type of image, substitute that type for qcow2 in the command. The available types are qcow2, iso, and raw.
Builds a QCOW2 virtual machine image:

```bash
just build-qcow2 $target_image $tag
```

Rebuilds a QCOW2 virtual machine image:

```bash
just rebuild-vm $target_image $tag
```

Runs a virtual machine from a QCOW2 image:

```bash
just run-vm-qcow2 $target_image $tag
```

Runs a virtual machine using systemd-vmspawn:

```bash
just spawn-vm rebuild="0" type="qcow2" ram="6G"
```

The remaining recipes are maintenance helpers (recipe names as in the upstream template):

- `just check` - Checks the syntax of all `.just` files and the `Justfile`.
- `just fix` - Fixes the syntax of all `.just` files and the `Justfile`.
- `just clean` - Cleans the repository by removing build artifacts.
- `just lint` - Runs shellcheck on all Bash scripts.
- `just format` - Runs shfmt on all Bash scripts.
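Putting it together, a typical local iteration loop might look like this (the image name is illustrative):

```bash
# Build the container image, produce a QCOW2 disk from it, then boot it
just build bazzite-blue latest
just build-qcow2 bazzite-blue latest
just run-vm-qcow2 bazzite-blue latest
```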
For additional driver support, ublue maintains a set of scripts and container images available at ublue-akmod. These images include the necessary scripts to install multiple kernel drivers within the container (Nvidia, OpenRazer, Framework...). The documentation provides guidance on how to properly integrate these drivers into your container image.
These are images derived from this template (or similar enough to this template). Reference them when building your image!