Cron(d) vs Supercronic #177

Closed
modem7 opened this issue Nov 9, 2022 · 15 comments · Fixed by #186

@modem7
Member

modem7 commented Nov 9, 2022

Continuing conversation from #173.

Is Alpine cron really the best tool to use here, I wonder?

I just found supercronic, which seems like a potentially better tool than cron itself and more suited to a container environment.

It also supports more traditional cron notations such as @hourly, giving our users finer control over their cron timings.

Also, given how the logging works with supercronic, it may also resolve #174 as a side effect, without additional changes.
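
For illustration, a supercronic crontab could look something like this (the commands here are placeholders, not our actual defaults):

# predefined schedules are accepted
@hourly /usr/bin/borgmatic --stats -v 0

# standard five-field entries still work too
30 2 * * * /usr/bin/borgmatic prune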

@modem7
Member Author

modem7 commented Nov 9, 2022

The only downside I can see with supercronic is the multiarch aspect.

Installing it on a single arch is simple enough, but gets a bit more complicated with multi.

If we do go down this road, we may need to reuse someone else's work (e.g. https://github.com/hectorm/docker-supercronic) to copy across the relevant binary (https://github.com/hectorm/docker-supercronic/blob/master/Dockerfile.m4#L103), or do some script work during the build to determine the arch and grab the right binary. That's quite common in something like s6 builds, so we can probably find an existing example (e.g. https://gitlab.largenut.com/-/snippets/1) rather than reinvent the wheel.
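
A rough sketch of the arch-detection approach using BuildKit's TARGETARCH (the asset names follow supercronic's release naming, but the version and checksums would need pinning and verifying):

FROM alpine:3.16 AS supercronic
ARG TARGETARCH
ARG SUPERCRONIC_VERSION=v0.2.1
# Map Docker's TARGETARCH onto the supercronic release asset names and
# download the matching prebuilt binary (checksum verification omitted here).
RUN apk add --no-cache curl \
 && case "${TARGETARCH}" in \
      amd64) SC_ARCH="amd64" ;; \
      arm64) SC_ARCH="arm64" ;; \
      arm)   SC_ARCH="arm" ;; \
      *) echo "unsupported arch: ${TARGETARCH}" && exit 1 ;; \
    esac \
 && curl -fsSL -o /usr/local/bin/supercronic \
      "https://github.com/aptible/supercronic/releases/download/${SUPERCRONIC_VERSION}/supercronic-linux-${SC_ARCH}" \
 && chmod +x /usr/local/bin/supercronic

# then, in the final stage:
# COPY --from=supercronic /usr/local/bin/supercronic /usr/local/bin/supercronic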

@psi-4ward

Does not work for me ... :(

# Dockerfile
FROM python:3.10.6-alpine3.16 AS build

RUN apk add --no-cache go curl
RUN mkdir /supercronic
WORKDIR /supercronic
RUN curl -sSL https://github.com/aptible/supercronic/archive/refs/tags/v0.2.1.tar.gz | tar xz --strip 1
RUN go mod vendor && go install


FROM python:3.10.6-alpine3.16
COPY --from=build /root/go/bin/supercronic /usr/bin/supercronic
COPY *.sh /
RUN apk add --no-cache bash

CMD ["/usr/bin/supercronic", "-debug", "-passthrough-logs", "-split-logs", "/etc/borgmatic.d/crontab"]

# crontab (/etc/borgmatic.d/crontab)
*/2 * * * * /sleeper.sh

# sleeper.sh
#!/bin/bash

graceful_shutdown() {
  # Log the signal we received, linger briefly, then exit cleanly
  echo Sleeper received $1 ... sleeping another 1s
  sleep 1
  echo Sleeper will exit now
  exit 0
}

# Install the handler for every signal we care about
for s in SIGHUP SIGINT SIGTERM ; do
  trap "graceful_shutdown $s" $s
done

while true ; do
    echo Sleeping
    sleep 10
done

time="2022-11-12T09:57:33Z" level=info msg="read crontab: /etc/borgmatic.d/crontab"
time="2022-11-12T09:57:33Z" level=debug msg="try parse (5 fields): '*/2 * * * *'"
time="2022-11-12T09:57:33Z" level=debug msg="job will run next at 2022-11-12 09:58:00 +0000 UTC" job.command=/sleeper.sh job.position=0 job.schedule="*/2 * * * *"
time="2022-11-12T09:58:00Z" level=info msg=starting iteration=0 job.command=/sleeper.sh job.position=0 job.schedule="*/2 * * * *"
Sleeping
Sleeping

Now invoke a docker stop:

time="2022-11-12T09:58:30Z" level=info msg="received terminated, shutting down"
time="2022-11-12T09:58:30Z" level=info msg="waiting for jobs to finish"
Sleeping

$ echo $?
137

So it looks like sleeper.sh does not receive the signal (exit status 137 = 128 + 9, i.e. the container was eventually SIGKILLed after the stop timeout). Expected result:

Sleeper received SIGTERM ... sleeping another 1s
Sleeper will exit now

@modem7
Member Author

modem7 commented Nov 12, 2022

Does not work for me ... :(

[... Dockerfile, crontab, sleeper.sh and logs as in the previous comment ...]

So it looks like sleeper.sh does not receive the signal.

I suspect that tini/dumb-init (or similar) is still required in this case. It might also be worthwhile to check with docker top what's actually going on inside the container with your particular dockerfile + scripts.

docker top containername x -o pid,command --forest

@modem7
Member Author

modem7 commented Nov 13, 2022

I suspect that tini/dumb-init (or similar) is still required in this case. It might also be worthwhile to check with docker top what's actually going on inside the container with your particular dockerfile + scripts.

docker top containername x -o pid,command --forest

I did a quick test with tini, dumb-init and supervisord.

Only supervisord seemed to forward the signal properly.

Neither Cron nor Supercronic forwarded the signal to the process it calls.
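
For reference, a minimal supervisord sketch of that kind of setup could look like this (program name, paths and timeouts are illustrative):

[supervisord]
nodaemon=true

[program:supercronic]
command=/usr/local/bin/supercronic /etc/borgmatic.d/crontab
; on shutdown, supervisord sends stopsignal to the program and waits up to stopwaitsecs
stopsignal=TERM
stopwaitsecs=300
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0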

@psi-4ward

So next we need to test whether borgmatic signals its hook scripts... 😉

@modem7
Copy link
Member Author

modem7 commented Nov 13, 2022

So next we need to test whether borgmatic signals its hook scripts... 😉

We may need to work with the supercronic devs on it if the hook scripts don't receive a signal (I still think the benefits of supercronic are worthwhile; we'll figure out how to do the multiarch builds for it later).

@toastie89 and/or @witten may be the better people to ask regarding the signals however, as I don't know the specifics.

@witten
Collaborator

witten commented Nov 13, 2022

I believe (although I haven't confirmed) that borgmatic passes the following signals through to any hook processes: SIGHUP, SIGTERM, SIGUSR1, and SIGUSR2.

@psi-4ward

I believe (although I haven't confirmed) that borgmatic passes the following signals through to any hook processes: SIGHUP, SIGTERM, SIGUSR1, and SIGUSR2.

Then just forget about all the init stuff and use my proposal: signal borgmatic first and then crond 😄

Right now I can't think of a problem this approach wouldn't solve.
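
Roughly, that entrypoint idea could look like this (a simplified sketch; the pkill match and the sleep are illustrative, not the exact proposal):

#!/bin/sh
# Run crond in the background and forward shutdown signals ourselves:
# first to any running borgmatic (which signals its hooks), then to crond.
crond -f &
CROND_PID=$!

shutdown() {
  pkill -TERM -f borgmatic 2>/dev/null   # let a running backup wind down first
  sleep 5
  kill -TERM "$CROND_PID" 2>/dev/null
}

trap shutdown TERM INT
wait "$CROND_PID"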

@grantbevis
Collaborator

Thinking outside the box on this, @witten: instead of us using cron within the container, would it be possible to add some sort of

on:
  schedule: 30  0  *  *  *

config to Borgmatic, so we could run borgmatic --scheduler or something as PID 1? That would solve all of these problems.

@modem7
Member Author

modem7 commented Nov 21, 2022

Thinking outside the box on this, @witten: instead of us using cron within the container, would it be possible to add some sort of

on:
  schedule: 30  0  *  *  *

config to Borgmatic, so we could run borgmatic --scheduler or something as PID 1? That would solve all of these problems.

IMO we'd probably still need an entrypoint script due to our variables etc.; however, agreed, it would make things a lot simpler, as cron isn't great.

@modem7
Member Author

modem7 commented Nov 21, 2022

But we could have the ENTRYPOINT as the script, then the CMD as borgmatic so it gets PID1.

@psi-4ward

But we could have the ENTRYPOINT as the script, then the CMD as borgmatic so it gets PID1.

That's easy. Do your stuff in entry.sh and then exec borgmatic --scheduler.
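
Something like this (a minimal sketch; borgmatic --scheduler is still hypothetical at this point):

#!/bin/sh
# entry.sh - do the image setup here (env vars, generated config, etc.),
# then exec so borgmatic replaces the shell, becomes PID 1 and receives
# docker stop's SIGTERM directly.
set -e

# ... setup work ...

exec borgmatic --scheduler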

@witten
Collaborator

witten commented Nov 21, 2022

Thinking outside the box on this, @witten: instead of us using cron within the container, would it be possible to add some sort of

on:
  schedule: 30  0  *  *  *

config to Borgmatic, so we could run borgmatic --scheduler or something as PID 1? That would solve all of these problems.

In theory, that would be possible. But I was also hoping not to reinvent cron inside borgmatic. 😃

EDIT: Relevant borgmatic ticket: https://projects.torsion.org/borgmatic-collective/borgmatic/issues/616

@grantbevis
Collaborator

@modem7 - We should be able to build supercronic per arch by adding another build stage, something like this:

FROM golang:alpine AS gobuilder
ARG SUPERCRONIC_VERSION="v0.2.1"
RUN apk add --no-cache git
RUN go install github.com/aptible/supercronic@"${SUPERCRONIC_VERSION}"

# Then, in the borgmatic build stage:
COPY --from=gobuilder /go/bin/supercronic /usr/local/bin/

I just tested this locally (had to remove --link on the COPY, as Docker for Mac didn't like it) and it builds supercronic that way. This would give us multi-arch capabilities. I also like that supercronic has a -test flag we could run on startup, so if someone has a dodgy crontab we could exit 1.
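
e.g. something along these lines in the entrypoint (a sketch; the crontab path is the one used earlier in this thread):

# Validate the crontab up front and bail out if it doesn't parse
supercronic -test /etc/borgmatic.d/crontab || exit 1

exec supercronic -passthrough-logs /etc/borgmatic.d/crontab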

@grantbevis
Collaborator

grantbevis commented Dec 2, 2022

PR: #186

@grantbevis grantbevis added this to the supercronic milestone Dec 14, 2022
@grantbevis grantbevis linked a pull request Dec 14, 2022 that will close this issue