chore: Create release artifacts #1562
Conversation
@benscobie, I (finally) had some time to take a look at this. It's a big change, but it looks solid!
Of course! The packaging is all located within these scripts to make it easier to test locally; you just need to pass a Git SHA as an arg (it can be anything, really). What I was doing locally was:
You can also execute the release workflow to generate the artifacts; just target this branch (though it is in my fork, so it likely won't be listed...). Just enable Dry run and uncheck Build and push Docker images. They'll be available alongside the action, e.g. https://github.com/benscobie/Maintainerr/actions/runs/14433809972
Very nice, thank you for the clarification!
I've tried to compile and use Maintainerr outside of Docker based on this PR. It works perfectly with the base configuration, but I ran into several issues when setting a basePath:
I've made some quick and dirty fixes for the first part (the build) in my local repo to test, and I successfully managed to produce a static build. I'll try to make a clean commit/PR in the coming weeks to fix the first point if you want, but I don't currently have time to look into the second error. Just wanted to report this 😄
We are eagerly waiting... 😄
I must've broken this at some point along the way, as I'm sure I tested this 🤔 But yes, I believe we just want to copy the same commands out of the Dockerfile (with the
We've had a few people with this issue on the Docker install (do you see it too, if you have Docker installed?), so I think it's unrelated. We needed some more info from them to help track it down, but never heard anything back.
Sure, sounds good! This PR is slow-moving anyway; jorenn is quite busy with real-life stuff at the moment.
Quick follow-up on my report from yesterday: I've opened a PR to address all of the issues above, including fixes for the build. Once this PR is merged, I'll let @benscobie handle merging upstream into this branch. From there, I'll move on to the original issue I was aiming to solve: adding proper basePath support.
Thank you for fixing those issues ❤️ I've just merged the latest changes in. I haven't tested the workflow again yet, but the merge conflicts were minimal, so everything should still be okay. (EDIT: That was not the case, looking now... FIXED)
I’ve been thinking about this PR, and I believe it would be best to split it into two separate ones:
This split would make review and merge management much easier, as this PR is getting quite large. It would also help isolate any side effects introduced by the config/code changes, since the distribution method remains unchanged, making troubleshooting simpler if issues arise. Additionally, most merge conflicts during new releases tend to come from core code changes, so merging those sooner would reduce friction down the line. The second PR could then be more polished and focused on delivering a robust alternative install experience when everyone has time to dedicate to the project, with room to explore nice extras like systemd/Windows Service integration, as you mentioned.
This PR adds artifact creation to the release workflow, providing downloadable archives alongside the GitHub release which can be extracted, configured and run without having to build the project yourself. This also provides an avenue for non-Docker installation methods such as community-scripts/ProxmoxVE#97. I've included an install script, basic run scripts and config files, which serve as a base for other implementations that want to use the release archives.
A basic install guide would look like:

1. Run `install.ps1` / `install.sh`.
2. Configure `./server/.env` and `./ui/.env`.
3. Run `start_ui.ps1` / `start_ui.sh` and `start_server.ps1` / `start_server.sh` to start the UI and server.

An upgrade would currently involve extracting the new release and copying your previous `server/.env` and `ui/.env` files across instead of updating the existing ones.

Workflow
I have added a new step to the release workflow that builds the project in the same way as we do in Docker. This will output an archive for each platform (currently Linux amd64, Linux arm64 and Windows x64), which will then be attached to the release created in GitHub by semantic-release.
I have updated how semantic-release is called as well. First we do a dry run to get the version number, which also validates that the actual release will go okay (most of the time). The Docker and release-archive steps then fire off in parallel, and once those are complete, semantic-release is called again to perform the GitHub release.
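The dry-run-then-release flow could be sketched roughly like this in shell; the log-line text and the `sed` parsing below are assumptions for illustration, not the workflow's actual code:

```shell
# Hypothetical sketch: pull the next version number out of a
# semantic-release dry-run log line. The exact log text is an assumption.
dry_run_log='[semantic-release] The next release version is 2.3.1'
next_version=$(printf '%s\n' "$dry_run_log" \
  | sed -n 's/.*next release version is \([0-9][0-9.]*\).*/\1/p')
echo "next_version=$next_version"
```

In the real workflow the input would come from something like `npx semantic-release --dry-run`, with the captured version handed to the parallel Docker and archive jobs.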
Finally, I managed to reduce the Docker image size from 650MB to around 350MB by removing node dev dependencies after the build. This saving also applies to the new release artifacts.
Configuration changes
In order to make configuration easier for users, I have updated the UI and server to make use of `.env` files. There are a few different variations in use here:

- A distribution/Docker env file (`.env.distribution` / `.env.docker`), in which the `%GIT_SHA%` token is replaced at build time. The `.env` files below will override configuration defined in here.
- Environment variables that were previously defined in the `Dockerfile`. I have removed them from there because of this issue with Portainer. There is no change for Docker users here: they can continue to specify configuration via environment variables, which have a higher precedence than `.env` files.

As part of the build process, `.env.distribution` / `.env.docker` is copied to `.env`, and the `.env` file is copied to `.env.production`.

The final output is:
- `/ui/.env` - this was `.env.distribution` or `.env.docker`
- `/ui/.env.production` - this was `.env`
- `/server/.env` - this was `.env.distribution` or `.env.docker`
- `/server/.env.production` - this was `.env`
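The precedence rule (real environment variables beat `.env` entries) can be illustrated with a small bash sketch; the file path and variable names below are invented for the demo and are not Maintainerr's actual configuration keys:

```shell
# Hypothetical demo of the precedence rule described above: a value already
# present in the environment wins over a value read from a .env file.
# File path and variable names here are made up for illustration.
cat > /tmp/maintainerr-demo.env <<'EOF'
API_PORT=3001
BASE_PATH=/maintainerr
EOF

export API_PORT=8080   # simulates `docker run -e API_PORT=8080`

# Adopt a .env value only when the variable is not already set.
while IFS='=' read -r key value; do
  if [ -z "${!key:-}" ]; then
    export "$key=$value"
  fi
done < /tmp/maintainerr-demo.env

echo "API_PORT=$API_PORT"     # environment value wins: 8080
echo "BASE_PATH=$BASE_PATH"   # filled from the .env file: /maintainerr
```

Dotenv-style loaders generally leave already-set variables untouched, which matches the behaviour described here for Docker users.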
Code changes
Testing
Future
I would like to improve the install, upgrade and run/stop processes in a future release. A few ideas I've had:

I looked at `pm2`, but I couldn't get it to work. We might be able to create a custom server that inits both the UI & server.
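As a sketch of the "custom server that inits both" idea, a small bash wrapper could supervise the two start scripts together; here the `sleep` commands stand in for the real `start_server.sh` / `start_ui.sh`, and the whole script is illustrative only:

```shell
# Hypothetical wrapper: run the server and UI side by side and stop both
# as soon as either one exits. The sleep commands stand in for the real
# start_server.sh / start_ui.sh scripts.
sleep 30 & server_pid=$!
( sleep 0.2; exit 1 ) & ui_pid=$!   # simulate the UI process dying early

wait -n || true                     # returns when the first child exits
kill "$server_pid" "$ui_pid" 2>/dev/null || true
wait "$server_pid" "$ui_pid" 2>/dev/null || true   # reap remaining children
echo "both processes stopped"
```

A real version would also want signal forwarding (so Ctrl+C or a service stop reaches both children), which is exactly the gap tools like `pm2` are meant to fill.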