-
Hello, I looked into the problems of AIO with podman a bit. I've got the impression that adding support might be within reach. However, I'm new to the AIO mastercontainer and its PHP code. AFAIK, the mastercontainer controls starting the other containers through the Docker unix socket. I would like to hook in here and test my idea. Are there any docs about mastercontainer development? Where in the PHP code are new containers started? Kind regards, aanno
Replies: 13 comments 31 replies
-
Hi,
Yes, this is how AIO works.
The code is here: https://github.com/nextcloud/all-in-one/tree/main/php#php-docker-controller. There you can also find some info on how to run this from source.
However, one of the main problems with podman currently is that the included watchtower container, which updates the mastercontainer, is not compatible with podman. So this is one of the things that would need to be fixed; see containrrr/watchtower#1060. And even after fixing this, I am personally against adding support for podman since, as stated in the docs, it would add another platform that we would need to test against. So you still have the option to either use docker rootless, use the manual-install https://github.com/nextcloud/all-in-one/tree/main/manual-install, or follow #3487, which is not supported by us though.
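For orientation, the PHP Docker controller drives containers through the plain Docker Engine HTTP API over the mounted socket. A minimal sketch of that kind of call, as an illustration only (the socket path is an assumption, and these are generic Engine API endpoints, not AIO's exact requests):

```shell
# Illustration: the kind of Docker Engine API calls the controller issues
# over the socket it mounts at /var/run/docker.sock. Podman's socket
# answers the same Docker-compatible endpoints.
SOCK="${DOCKER_SOCKET:-/var/run/docker.sock}"

if [ -S "$SOCK" ]; then
    # Liveness check, then list running containers.
    curl --silent --unix-socket "$SOCK" http://localhost/_ping; echo
    curl --silent --unix-socket "$SOCK" http://localhost/containers/json
else
    echo "no engine socket at $SOCK"
fi
```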
-
Dear @szaimen, thank you for the prompt answer. I completely understand that there are no plans to get podman officially supported. However, I really think that AIO is very close to unofficially running on podman right now. Podman has evolved substantially over the years, and it supports the Docker Engine API out of the box, even in rootless mode. The following lines start the socket:

```shell
# start socket in rootless mode
systemctl --user restart podman.socket
# check if socket is up
curl -H "Content-Type: application/json" \
  --unix-socket /run/user/$UID/podman/podman.sock \
  http://localhost/_ping
export DOCKER_SOCKET=/run/user/$UID/podman/podman.sock
export DOCKER_HOST=unix://$DOCKER_SOCKET
echo "export DOCKER_SOCKET=$DOCKER_SOCKET"
echo "export DOCKER_HOST=$DOCKER_HOST"
```

The socket is not at `/var/run/docker.sock` but at `/run/user/$UID/podman/podman.sock`. However, 'self reference' is still a bit difficult.
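Building on the exports above, a sketch of pointing the mastercontainer at the rootless podman socket. This is only an outline under assumptions: the `-v` and `-e` options mirror the usual AIO startup pattern, and setting `WATCHTOWER_DOCKER_SOCKET_PATH` to the rootless socket is my guess at what a non-default socket location needs.

```shell
# Sketch (assumptions: rootless podman, socket already started as above).
SOCK="/run/user/$(id -u)/podman/podman.sock"

if command -v podman >/dev/null 2>&1; then
    podman run -d --init --name nextcloud-aio-mastercontainer \
      -p 8080:8080 \
      -e WATCHTOWER_DOCKER_SOCKET_PATH="$SOCK" \
      -v nextcloud_aio_mastercontainer:/mnt/docker-aio-config:z \
      -v "$SOCK":/var/run/docker.sock:ro \
      docker.io/nextcloud/all-in-one:latest
else
    echo "podman not installed; command shown for reference only"
fi
```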
-
@aanno I'd like to help too, and I have some minor AIO development experience. I recently migrated my server to a newer OS and also moved all my containers from docker to podman user-space. So far, with the user's docker.sock mapped, basic functionality seems to just work at a high level. I haven't yet reached the point of testing backups, updates, etc.
@szaimen I understand your hesitation about supporting one more platform, but podman has been making steady progress, and I think it's getting to the point where it can completely replace docker, which IMO is a good thing given its architecture. How about we take a stab at identifying and fixing the issues, as long as the fixes don't break anything with regular docker and don't require dramatically different code? It could then remain in an "it works but isn't officially supported" state until someone steps in to maintain it officially. Since I've migrated my personal setup to podman, I'm highly motivated to fix the basic functionality, for myself at least :-)
-
It looks like containrrr/watchtower#1060 does have a proposed fix, containrrr/watchtower#2072, but looking at the release cadence of watchtower, I'm not hopeful :-(
-
Hello, thank you very much for all your comments. However, I'm now able to run all-in-one on podman without any changes to the code. For all the gory details, see #5090 (comment). Kind regards,
-
Dear @apparle, I'm not aware of any watchtower problem with podman, and I have already updated twice without any problems.
-
Hm, see my old comment for why I think that containrrr/watchtower#1060 is no longer relevant. I just copied the remark into the 1060 ticket as well.
$ podman info
host:
arch: arm64
buildahVersion: 1.39.0
cgroupControllers:
- cpu
- memory
- pids
cgroupManager: systemd
cgroupVersion: v2
conmon:
package: conmon-2.1.12-3.fc41.aarch64
path: /usr/bin/conmon
version: 'conmon version 2.1.12, commit: '
cpuUtilization:
idlePercent: 98
systemPercent: 0.85
userPercent: 1.14
cpus: 10
databaseBackend: sqlite
distribution:
distribution: fedora
variant: coreos
version: "41"
eventLogger: journald
freeLocks: 2020
hostname: netzgeneration
idMappings:
gidmap:
- container_id: 0
host_id: 1003
size: 1
- container_id: 1
host_id: 720896
size: 65536
uidmap:
- container_id: 0
host_id: 1003
size: 1
- container_id: 1
host_id: 720896
size: 65536
kernel: 6.13.5-200.fc41.aarch64
linkmode: dynamic
logDriver: journald
memFree: 8054398976
memTotal: 16715886592
networkBackend: netavark
networkBackendInfo:
backend: netavark
dns:
package: aardvark-dns-1.14.0-1.fc41.aarch64
path: /usr/libexec/podman/aardvark-dns
version: aardvark-dns 1.14.0
package: netavark-1.14.0-1.fc41.aarch64
path: /usr/libexec/podman/netavark
version: netavark 1.14.0
ociRuntime:
name: crun
package: crun-1.20-2.fc41.aarch64
path: /usr/bin/crun
version: |-
crun version 1.20
commit: 9c9a76ac11994701dd666c4f0b869ceffb599a66
rundir: /run/user/1003/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
os: linux
pasta:
executable: /usr/bin/pasta
package: passt-0^20250217.ga1e48a0-2.fc41.aarch64
version: ""
remoteSocket:
exists: true
path: /run/user/1003/podman/podman.sock
rootlessNetworkCmd: pasta
security:
apparmorEnabled: false
capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: true
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: true
serviceIsRemote: false
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.3.1-1.fc41.aarch64
version: |-
slirp4netns version 1.3.1
commit: e5e368c4f5db6ae75c2fce786e31eef9da6bf236
libslirp: 4.8.0
SLIRP_CONFIG_VERSION_MAX: 5
libseccomp: 2.5.5
swapFree: 4294701056
swapTotal: 4294963200
uptime: 89h 51m 18.00s (Approximately 3.71 days)
variant: v8
plugins:
authorization: null
log:
- k8s-file
- none
- passthrough
- journald
network:
- bridge
- macvlan
- ipvlan
volume:
- local
registries:
search:
- registry.fedoraproject.org
- registry.access.redhat.com
- docker.io
store:
configFile: /var/home/nc/.config/containers/storage.conf
containerStore:
number: 13
paused: 0
running: 13
stopped: 0
graphDriverName: overlay
graphOptions: {}
graphRoot: /var/home/nc/.local/share/containers/storage
graphRootAllocated: 810201686016
graphRootUsed: 120170962944
graphStatus:
Backing Filesystem: xfs
Native Overlay Diff: "true"
Supports d_type: "true"
Supports shifting: "false"
Supports volatile: "true"
Using metacopy: "false"
imageCopyTmpDir: /var/tmp
imageStore:
number: 23
runRoot: /run/user/1003/containers
transientStore: false
volumePath: /var/home/nc/.local/share/containers/storage/volumes
version:
APIVersion: 5.4.0
BuildOrigin: Fedora Project
Built: 1739232000
BuiltTime: Tue Feb 11 00:00:00 2025
GitCommit: ""
GoVersion: go1.23.5
Os: linux
OsArch: linux/arm64
Version: 5.4.0
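If you only need the socket location rather than the full dump above, `podman info` accepts a Go-template `--format`; the key path below matches the `remoteSocket` section of the YAML output (verify against your podman version):

```shell
# Print just the Docker-compatible API socket path, guarded so the
# snippet degrades gracefully where podman is not installed.
out="$(command -v podman >/dev/null 2>&1 \
    && podman info --format '{{.Host.RemoteSocket.Path}}' \
    || echo 'podman not installed')"
echo "$out"
```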
-
$ podman inspect nextcloud-aio-mastercontainer
[
{
"Id": "0caa3f29d089c9d01fd011acb5823b4061dd6c716a093c8ae583e26bdfceb4fa",
"Created": "2025-04-06T12:45:21.298199003Z",
"Path": "/run/podman-init",
"Args": [
"--",
"/start.sh"
],
"State": {
"OciVersion": "1.2.0",
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 10782,
"ConmonPid": 10780,
"ExitCode": 0,
"Error": "container 0caa3f29d089c9d01fd011acb5823b4061dd6c716a093c8ae583e26bdfceb4fa: container is running",
"StartedAt": "2025-04-06T12:45:21.73392596Z",
"FinishedAt": "0001-01-01T00:00:00Z",
"Health": {
"Status": "healthy",
"FailingStreak": 0,
"Log": [
{
"Start": "2025-04-10T08:56:20.623645064Z",
"End": "2025-04-10T08:56:20.753995244Z",
"ExitCode": 0,
"Output": "Connection to 127.0.0.1 80 port [tcp/http] succeeded!\nConnection to 127.0.0.1 8000 port [tcp/*] succeeded!\nConnection to 127.0.0.1 8080 port [tcp/http-alt] succeeded!\nConnection to 127.0.0.1 8443 port [tcp/*] succeeded!\nConnection to 127.0.0.1 9000 port [tcp/*] succeeded!\nConnection to 127.0.0.1 9876 port [tcp/*] succeeded!\n"
},
{
"Start": "2025-04-10T08:56:51.611764453Z",
"End": "2025-04-10T08:56:51.778339808Z",
"ExitCode": 0,
"Output": "Connection to 127.0.0.1 80 port [tcp/http] succeeded!\nConnection to 127.0.0.1 8000 port [tcp/*] succeeded!\nConnection to 127.0.0.1 8080 port [tcp/http-alt] succeeded!\nConnection to 127.0.0.1 8443 port [tcp/*] succeeded!\nConnection to 127.0.0.1 9000 port [tcp/*] succeeded!\nConnection to 127.0.0.1 9876 port [tcp/*] succeeded!\n"
},
{
"Start": "2025-04-10T08:57:22.613406838Z",
"End": "2025-04-10T08:57:22.891650364Z",
"ExitCode": 0,
"Output": "Connection to 127.0.0.1 80 port [tcp/http] succeeded!\nConnection to 127.0.0.1 8000 port [tcp/*] succeeded!\nConnection to 127.0.0.1 8080 port [tcp/http-alt] succeeded!\nConnection to 127.0.0.1 8443 port [tcp/*] succeeded!\nConnection to 127.0.0.1 9000 port [tcp/*] succeeded!\nConnection to 127.0.0.1 9876 port [tcp/*] succeeded!\n"
},
{
"Start": "2025-04-10T08:57:53.642606774Z",
"End": "2025-04-10T08:57:53.940052831Z",
"ExitCode": 0,
"Output": "Connection to 127.0.0.1 80 port [tcp/http] succeeded!\nConnection to 127.0.0.1 8000 port [tcp/*] succeeded!\nConnection to 127.0.0.1 8080 port [tcp/http-alt] succeeded!\nConnection to 127.0.0.1 8443 port [tcp/*] succeeded!\nConnection to 127.0.0.1 9000 port [tcp/*] succeeded!\nConnection to 127.0.0.1 9876 port [tcp/*] succeeded!\n"
},
{
"Start": "2025-04-10T08:58:24.893311809Z",
"End": "2025-04-10T08:58:24.973561659Z",
"ExitCode": 0,
"Output": "Connection to 127.0.0.1 80 port [tcp/http] succeeded!\nConnection to 127.0.0.1 8000 port [tcp/*] succeeded!\nConnection to 127.0.0.1 8080 port [tcp/http-alt] succeeded!\nConnection to 127.0.0.1 8443 port [tcp/*] succeeded!\nConnection to 127.0.0.1 9000 port [tcp/*] succeeded!\nConnection to 127.0.0.1 9876 port [tcp/*] succeeded!\n"
}
]
},
"CgroupPath": "/user.slice/user-1003.slice/[email protected]/user.slice/user-libpod_pod_1a29f3a0b922bd18316b02526f761ab63c3f5de78a0739a2a5d6d96edebae7f5.slice/libpod-0caa3f29d089c9d01fd011acb5823b4061dd6c716a093c8ae583e26bdfceb4fa.scope",
"CheckpointedAt": "0001-01-01T00:00:00Z",
"RestoredAt": "0001-01-01T00:00:00Z"
},
"Image": "2d79a59a95d8f20efbe39921e57e3f7e9b70dd1d47c597afa4d4f8b2b89b6f0c",
"ImageDigest": "sha256:53d4a4ec39cfb602a54037dfe99ee7777b4357c9c6f41b23d351245a0ee4ef43",
"ImageName": "docker.io/nextcloud/all-in-one:latest",
"Rootfs": "",
"Pod": "1a29f3a0b922bd18316b02526f761ab63c3f5de78a0739a2a5d6d96edebae7f5",
"ResolvConfPath": "/run/user/1003/containers/overlay-containers/0caa3f29d089c9d01fd011acb5823b4061dd6c716a093c8ae583e26bdfceb4fa/userdata/resolv.conf",
"HostnamePath": "/run/user/1003/containers/overlay-containers/0caa3f29d089c9d01fd011acb5823b4061dd6c716a093c8ae583e26bdfceb4fa/userdata/hostname",
"HostsPath": "/run/user/1003/containers/overlay-containers/0caa3f29d089c9d01fd011acb5823b4061dd6c716a093c8ae583e26bdfceb4fa/userdata/hosts",
"StaticDir": "/var/home/nc/.local/share/containers/storage/overlay-containers/0caa3f29d089c9d01fd011acb5823b4061dd6c716a093c8ae583e26bdfceb4fa/userdata",
"OCIConfigPath": "/var/home/nc/.local/share/containers/storage/overlay-containers/0caa3f29d089c9d01fd011acb5823b4061dd6c716a093c8ae583e26bdfceb4fa/userdata/config.json",
"OCIRuntime": "crun",
"ConmonPidFile": "/run/user/1003/containers/overlay-containers/0caa3f29d089c9d01fd011acb5823b4061dd6c716a093c8ae583e26bdfceb4fa/userdata/conmon.pid",
"PidFile": "/run/user/1003/containers/overlay-containers/0caa3f29d089c9d01fd011acb5823b4061dd6c716a093c8ae583e26bdfceb4fa/userdata/pidfile",
"Name": "nextcloud-aio-mastercontainer",
"RestartCount": 0,
"Driver": "overlay",
"MountLabel": "system_u:object_r:container_file_t:s0:c1022,c1023",
"ProcessLabel": "",
"AppArmorProfile": "",
"EffectiveCaps": [
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_NET_BIND_SERVICE",
"CAP_SETFCAP",
"CAP_SETGID",
"CAP_SETPCAP",
"CAP_SETUID",
"CAP_SYS_CHROOT"
],
"BoundingCaps": [
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_NET_BIND_SERVICE",
"CAP_SETFCAP",
"CAP_SETGID",
"CAP_SETPCAP",
"CAP_SETUID",
"CAP_SYS_CHROOT"
],
"ExecIDs": [],
"GraphDriver": {
"Name": "overlay",
"Data": {
"LowerDir": "/var/home/nc/.local/share/containers/storage/overlay/2de17071e4568edc6823f2fcbe62839f224cf5fe179dc2a0f305cfd964b5b009/diff:/var/home/nc/.local/share/containers/storage/overlay/7239317a8a4bf930ee38d13f62664f2c3e509530b4d7cec91392f503df24d750/diff:/var/home/nc/.local/share/containers/storage/overlay/1984dd6a790dcd08d01c3c8419b905d656eae991e7fede5d15c340f864afb5a3/diff:/var/home/nc/.local/share/containers/storage/overlay/fb7991f6c3ee94c23e71c121fa1c0e00db53b219aee232c1cea74794018748a1/diff:/var/home/nc/.local/share/containers/storage/overlay/7767fcc453f3053f23c3706475c2d793cb91482a27d11968a9efe6c013205b11/diff:/var/home/nc/.local/share/containers/storage/overlay/fa27682e53ac65fe4fae8718842cd75aff664031d55fd9936faaee8bc299451d/diff:/var/home/nc/.local/share/containers/storage/overlay/b88d199cb66abb2318a5f0c4d270ae0a231946dd64ba52ca68091a24f591a815/diff:/var/home/nc/.local/share/containers/storage/overlay/c23ea3a3c433b7faa061885e5f921e6b624131a3e024de9f1809da695318432c/diff:/var/home/nc/.local/share/containers/storage/overlay/4b8c25d9bbe472854db35ac961f1dd91df05749e1de2431e795b17e4d283f06e/diff:/var/home/nc/.local/share/containers/storage/overlay/01359c4a1fdd85d8db0a26f4f5a740d80103a2577292a3592e4ccce24e90219e/diff:/var/home/nc/.local/share/containers/storage/overlay/ceb9d510d976d692f6a1a88d037172a87b9ca9f67cec46dbbcae1eb344a14d1f/diff:/var/home/nc/.local/share/containers/storage/overlay/cc542502cbf6643b7c8ce848d577bedfebb954b481604fa1d73bc638a8f81086/diff:/var/home/nc/.local/share/containers/storage/overlay/cfe5d154dd02251444e96d21506835cf753ce44a017bf3603d68dcd9b791884a/diff:/var/home/nc/.local/share/containers/storage/overlay/a3b01dfa6b70fa01a660ecbc08142db9c6f3511527939f60257291f90dbb133e/diff:/var/home/nc/.local/share/containers/storage/overlay/1a53ddab60b9d5afa4c54ae234763487796ede91e0dccdba4d6b0694ee8ae3a7/diff:/var/home/nc/.local/share/containers/storage/overlay/52e54c7a567e3bbca5dc57f65f66e422af7f7debd3dd57b4edc0783f31e291c1/diff:/var/home/nc/.local
/share/containers/storage/overlay/166a7067eaf0b6263e3d64f43dcedf71980cbf142169ecadf904511080100e2d/diff:/var/home/nc/.local/share/containers/storage/overlay/cb972388ca4250db0a469da893a979d6879db571d07e29f2ade451303a24b68e/diff:/var/home/nc/.local/share/containers/storage/overlay/a16e98724c05975ee8c40d8fe389c3481373d34ab20a1cf52ea2accc43f71f4c/diff",
"MergedDir": "/var/home/nc/.local/share/containers/storage/overlay/43df2590ffddd90d83252e47d58cb8cddd38eceda23c4d0934d883a3360fb384/merged",
"UpperDir": "/var/home/nc/.local/share/containers/storage/overlay/43df2590ffddd90d83252e47d58cb8cddd38eceda23c4d0934d883a3360fb384/diff",
"WorkDir": "/var/home/nc/.local/share/containers/storage/overlay/43df2590ffddd90d83252e47d58cb8cddd38eceda23c4d0934d883a3360fb384/work"
}
},
"Mounts": [
{
"Type": "volume",
"Name": "nextcloud_aio_mastercontainer",
"Source": "/var/home/nc/.local/share/containers/storage/volumes/nextcloud_aio_mastercontainer/_data",
"Destination": "/mnt/docker-aio-config",
"Driver": "local",
"Mode": "z",
"Options": [
"nosuid",
"nodev",
"rbind"
],
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/var/home/nc/scm/all-in-one/aio-on-fcos/backup",
"Destination": "/mnt/backup",
"Driver": "",
"Mode": "",
"Options": [
"rbind"
],
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/run/user/1003/podman/podman.sock",
"Destination": "/var/run/docker.sock",
"Driver": "",
"Mode": "",
"Options": [
"nosuid",
"nodev",
"rbind"
],
"RW": false,
"Propagation": "rprivate"
}
],
"Dependencies": [],
"NetworkSettings": {
"EndpointID": "",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "",
"Bridge": "",
"SandboxID": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"80/tcp": null,
"8080/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8080"
}
],
"8443/tcp": null,
"9000/tcp": null
},
"SandboxKey": "/run/user/1003/netns/netns-cca41067-029f-4965-7801-ae1db0351ad9",
"Networks": {
"nextcloud-aio": {
"EndpointID": "",
"Gateway": "10.89.57.1",
"IPAddress": "10.89.57.2",
"IPPrefixLen": 24,
"IPv6Gateway": "fd49:dc34:d0fe:ef6b:cafe::1",
"GlobalIPv6Address": "fd49:dc34:d0fe:ef6b:cafe::2",
"GlobalIPv6PrefixLen": 80,
"MacAddress": "aa:90:38:21:49:19",
"NetworkID": "f58b65ccd74094778af2e0f1314b6c698f11a259a59729b2593d33dc3acfe03d",
"DriverOpts": null,
"IPAMConfig": null,
"Links": null,
"Aliases": [
"0caa3f29d089"
]
},
"podman": {
"EndpointID": "",
"Gateway": "10.88.0.1",
"IPAddress": "10.88.0.4",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "56:63:29:c1:2e:a6",
"NetworkID": "2f259bab93aaaaa2542ba43ef33eb990d0999ee1b9924b557b7be53c0b7a1bb9",
"DriverOpts": null,
"IPAMConfig": null,
"Links": null,
"Aliases": [
"nextcloud-aio-mastercontainer",
"0caa3f29d089"
]
}
}
},
"Namespace": "",
"IsInfra": false,
"IsService": false,
"KubeExitCodePropagation": "invalid",
"lockNumber": 7,
"Config": {
"Hostname": "0caa3f29d089",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PHPIZE_DEPS=autoconf \t\tdpkg-dev dpkg \t\tfile \t\tg++ \t\tgcc \t\tlibc-dev \t\tmake \t\tpkgconf \t\tre2c",
"PHP_VERSION=8.3.19",
"GPG_KEYS=1198C0117593497A5EC5C199286AF1F9897469DC C28D937575603EB4ABB725861C0779DC5C0A9DE4 AFD8691FDAEDF03BDF6E460563F15A9B715376CA",
"NEXTCLOUD_MEMORY_LIMIT=1024M",
"PHP_URL=https://www.php.net/distributions/php-8.3.19.tar.xz",
"APACHE_IP_BINDING=0.0.0.0",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"container=podman",
"PHP_ASC_URL=https://www.php.net/distributions/php-8.3.19.tar.xz.asc",
"PHP_SHA256=976e4077dd25bec96b5dfe8938052d243bbd838f95368a204896eff12756545f",
"PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64",
"PHP_INI_DIR=/usr/local/etc/php",
"PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64",
"SKIP_DOMAIN_VALIDATION=false",
"APACHE_ADDITIONAL_NETWORK=nextcloud_frontend",
"NEXTCLOUD_MOUNT=/var/home/nc/scm/all-in-one/aio-on-fcos/backup",
"PHP_LDFLAGS=-Wl,-O1 -pie",
"WATCHTOWER_DOCKER_SOCKET_PATH=/run/user/1003/podman/podman.sock",
"APACHE_PORT=11000",
"HOME=/root",
"HOSTNAME=0caa3f29d089"
],
"Cmd": null,
"Image": "docker.io/nextcloud/all-in-one:latest",
"Volumes": null,
"WorkingDir": "/var/www/docker-aio",
"Entrypoint": [
"/start.sh"
],
"OnBuild": null,
"Labels": {
"PODMAN_SYSTEMD_UNIT": "[email protected]",
"com.docker.compose.container-number": "1",
"com.docker.compose.project": "aio",
"com.docker.compose.project.config_files": "compose.yaml",
"com.docker.compose.project.working_dir": "/var/home/nc/scm/all-in-one/aio-on-fcos",
"com.docker.compose.service": "nextcloud-aio-mastercontainer",
"io.podman.compose.config-hash": "3a910975aa3e57c8cf063ac7a74b9e6a1dc3426ee6dc247c4c77b0d2bbcf422a",
"io.podman.compose.project": "aio",
"io.podman.compose.version": "1.3.0"
},
"Annotations": {
"io.container.manager": "libpod",
"io.kubernetes.cri-o.SandboxID": "1a29f3a0b922bd18316b02526f761ab63c3f5de78a0739a2a5d6d96edebae7f5",
"io.podman.annotations.init": "TRUE",
"io.podman.annotations.label": "disable",
"org.opencontainers.image.stopSignal": "3",
"org.systemd.property.KillSignal": "3",
"org.systemd.property.TimeoutStopUSec": "uint64 10000000"
},
"StopSignal": "SIGQUIT",
"Healthcheck": {
"Test": [
"CMD-SHELL",
"/healthcheck.sh"
],
"Interval": 30000000000,
"Timeout": 30000000000
},
"HealthcheckOnFailureAction": "none",
"HealthLogDestination": "local",
"HealthcheckMaxLogCount": 5,
"HealthcheckMaxLogSize": 500,
"CreateCommand": [
"podman",
"run",
"--name=nextcloud-aio-mastercontainer",
"-d",
"--pod=pod_aio",
"--security-opt",
"label:disable",
"--label",
"io.podman.compose.config-hash=3a910975aa3e57c8cf063ac7a74b9e6a1dc3426ee6dc247c4c77b0d2bbcf422a",
"--label",
"io.podman.compose.project=aio",
"--label",
"io.podman.compose.version=1.3.0",
"--label",
"[email protected]",
"--label",
"com.docker.compose.project=aio",
"--label",
"com.docker.compose.project.working_dir=/var/home/nc/scm/all-in-one/aio-on-fcos",
"--label",
"com.docker.compose.project.config_files=compose.yaml",
"--label",
"com.docker.compose.container-number=1",
"--label",
"com.docker.compose.service=nextcloud-aio-mastercontainer",
"-e",
"APACHE_PORT=11000",
"-e",
"APACHE_IP_BINDING=0.0.0.0",
"-e",
"APACHE_ADDITIONAL_NETWORK=nextcloud_frontend",
"-e",
"NEXTCLOUD_MOUNT=/var/home/nc/scm/all-in-one/aio-on-fcos/backup",
"-e",
"NEXTCLOUD_MEMORY_LIMIT=1024M",
"-e",
"SKIP_DOMAIN_VALIDATION=false",
"-e",
"WATCHTOWER_DOCKER_SOCKET_PATH=/run/user/1003/podman/podman.sock",
"-v",
"/var/home/nc/scm/all-in-one/aio-on-fcos/backup:/mnt/backup:z",
"-v",
"nextcloud_aio_mastercontainer:/mnt/docker-aio-config:z",
"-v",
"/run/user/1003/podman/podman.sock:/var/run/docker.sock:z,ro",
"--network=bridge:alias=nextcloud-aio-mastercontainer",
"--add-host",
"nextcloud.breitbandig.de:host-gateway",
"-p",
"8080:8080",
"--restart",
"never",
"--init",
"docker.io/nextcloud/all-in-one:latest"
],
"Umask": "0022",
"Timeout": 0,
"StopTimeout": 10,
"Passwd": true,
"sdNotifyMode": "container",
"ExposedPorts": {
"80/tcp": {},
"8080/tcp": {},
"8443/tcp": {},
"9000/tcp": {}
}
},
"HostConfig": {
"Binds": [
"nextcloud_aio_mastercontainer:/mnt/docker-aio-config:z,rw,rprivate,nosuid,nodev,rbind",
"/var/home/nc/scm/all-in-one/aio-on-fcos/backup:/mnt/backup:rw,rprivate,rbind",
"/run/user/1003/podman/podman.sock:/var/run/docker.sock:ro,rprivate,nosuid,nodev,rbind"
],
"CgroupManager": "systemd",
"CgroupMode": "private",
"ContainerIDFile": "",
"LogConfig": {
"Type": "journald",
"Config": null,
"Path": "",
"Tag": "",
"Size": "0B"
},
"NetworkMode": "bridge",
"PortBindings": {
"8080/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8080"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"AutoRemoveImage": false,
"Annotations": {
"io.container.manager": "libpod",
"io.kubernetes.cri-o.SandboxID": "1a29f3a0b922bd18316b02526f761ab63c3f5de78a0739a2a5d6d96edebae7f5",
"io.podman.annotations.init": "TRUE",
"io.podman.annotations.label": "disable",
"org.opencontainers.image.stopSignal": "3",
"org.systemd.property.KillSignal": "3",
"org.systemd.property.TimeoutStopUSec": "uint64 10000000"
},
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": [],
"CapDrop": [],
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": [
"nextcloud.breitbandig.de:host-gateway"
],
"HostsFile": "",
"GroupAdd": [],
"IpcMode": "shareable",
"Cgroup": "",
"Cgroups": "default",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "private",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"label=disable"
],
"Tmpfs": {},
"UTSMode": "private",
"UsernsMode": "",
"ShmSize": 65536000,
"Runtime": "oci",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "user.slice/user-1003.slice/[email protected]/user.slice/user-libpod_pod_1a29f3a0b922bd18316b02526f761ab63c3f5de78a0739a2a5d6d96edebae7f5.slice",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": 0,
"OomKillDisable": false,
"Init": true,
"PidsLimit": 2048,
"Ulimits": [
{
"Name": "RLIMIT_NOFILE",
"Soft": 65000,
"Hard": 65000
},
{
"Name": "RLIMIT_NPROC",
"Soft": 63175,
"Hard": 65000
}
],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"CgroupConf": null
},
"UseImageHosts": false,
"UseImageHostname": false
}
]
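The same inspect data can be reduced with `--format` to confirm which host socket the mastercontainer sees as `/var/run/docker.sock` (the template fields correspond to the `Mounts` entries in the JSON above):

```shell
# List source -> destination for each mount of the mastercontainer,
# guarded so the snippet degrades gracefully where podman is missing.
if command -v podman >/dev/null 2>&1; then
    podman inspect nextcloud-aio-mastercontainer \
        --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}'
else
    echo "podman not installed; command shown for reference only"
fi
```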
-
Dear @apparle, let's drill down a bit:
-
At present, the AIO web interface results in a 502 (Bad Gateway).
-
Does #6533 solve this issue?
-
Here's my latest setup after all the fixes (v11.3.0 or above), for reference; I use it with podman v4.9.3 userspace containers along with traefik. This becomes:

```ini
# ~/.config/containers/systemd/nextcloud.container
[Container]
ContainerName=nextcloud-aio-mastercontainer
Image=ghcr.io/nextcloud-releases/all-in-one:latest
Pull=missing
RunInit=true
Environment=APACHE_PORT=11000
Environment=APACHE_IP_BINDING=127.0.0.1
Environment=APACHE_ADDITIONAL_NETWORK=front_net
Environment=NEXTCLOUD_DATADIR=/media/data
Environment=NEXTCLOUD_ENABLE_DRI_DEVICE=true
Environment=WATCHTOWER_DOCKER_SOCKET_PATH=%t/podman/podman.sock
Volume=nextcloud_aio_mastercontainer.volume:/mnt/docker-aio-config
Volume=%t/podman/podman.sock:/var/run/docker.sock:ro
Label=traefik.enable=true
Label=traefik.http.routers.aio-nextcloud-router.entrypoints=websecure
Label=traefik.http.routers.aio-nextcloud-router.rule=Host(`aio-nextcloud.${MYDOMAIN}`)
Label=traefik.http.services.aio-nextcloud.loadbalancer.server.port=8080
Label=traefik.http.services.aio-nextcloud.loadbalancer.server.scheme=https
Label=traefik.http.services.aio-nextcloud.loadbalancer.serversTransport="insecure-transport@file"
Network=front_net.network
Notify=healthy
# Note: the above "Notify=healthy" should just work, but there's a bug in quadlet/podman v4.9.3 which is fixed in podman v5.0.0. Below is a workaround that adds --sdnotify again at the end for v4.9.3.
PodmanArgs=--sdnotify=healthy

[Unit]
Description=Nextcloud AIO
After=traefik.service authentik.service
Wants=traefik.service authentik.service
Requires=podman.socket

[Service]
Restart=on-failure
# The below command should only run after the mastercontainer is healthy, which is ensured by Notify=healthy above.
ExecStartPost=/usr/bin/podman exec --env START_CONTAINERS=1 nextcloud-aio-mastercontainer /daily-backup.sh
# The below can be split into multiple lines with backslashes after quadlet/podman v5.0.0.
ExecStop=/usr/bin/bash -c 'if podman container exists nextcloud-aio-mastercontainer ; then podman exec --env STOP_CONTAINERS=1 nextcloud-aio-mastercontainer /daily-backup.sh; fi'
ExecStop=/usr/bin/bash -c 'while podman ps --format "{{.Names}}" | grep -v nextcloud-aio-mastercontainer | grep nextcloud-aio ; do echo "nextcloud-aio-* containers still running. Attempting to stop them manually."; podman ps --format "{{.Names}}" | grep -v nextcloud-aio-mastercontainer | grep nextcloud-aio | xargs -r -L1 podman stop; sleep 10s; done'
TimeoutStartSec=600
TimeoutStopSec=600

[Install]
WantedBy=default.target
```

and

```ini
# ~/.config/containers/systemd/nextcloud_aio_mastercontainer.volume
[Unit]
Description=Nextcloud AIO Master Container Volume

[Volume]
VolumeName=nextcloud_aio_mastercontainer
```

Note: traefik's container is set up separately, along with its configuration YAML describing the mapping for …
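For completeness, quadlet picks up the files above on a user-daemon reload; the generated unit name (`nextcloud.service`, derived from `nextcloud.container`) follows quadlet's naming convention. A sketch, assuming a rootless systemd user session:

```shell
# Regenerate and start the quadlet-managed unit (user session), guarded
# so the snippet degrades gracefully where no user systemd is running.
if systemctl --user is-system-running >/dev/null 2>&1; then
    systemctl --user daemon-reload
    systemctl --user start nextcloud.service
    systemctl --user --no-pager status nextcloud.service
else
    echo "no user systemd session; commands shown for reference only"
fi
```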
-
@aanno now that #5568 is merged, you can follow the latest instructions at https://github.com/nextcloud/all-in-one/blob/main/develop.md#how-to-locally-build-and-test-changes-to-mastercontainer to test the code change to add an Extra Host, as I described in #5090 (reply in thread):
I don't have podman v5.3.0 on my server, nor do I have Nextcloud Talk set up, so I can't test the fix myself. But if the above fix does work, I can create a PR to incorporate this feature in AIO as well.
Beta Was this translation helpful? Give feedback.