Hi,

I ran into a problem, and the (poor) documentation of zfs zone/unzone doesn't help.
For several years I have been running an Ubuntu server with some ZFS disks, and LXD guest containers to run some processes on the ZFS filesystems. This worked well for years; the ZFS filesystems were mounted into the LXD guest the usual way.
However, since around the upgrade to Ubuntu 24.04 / LXD 6, I had the problem that one process, which needs to read the ZFS guid as a unique ID of the filesystem, could not read it anymore: the files themselves were still available inside the container through a cross mount, but the ZFS metadata were gone; zfs list did not show anything at all.
I recently learned the reason for that: I found a website explaining to set
PID=$( lxc info GUEST | awk '/PID:/ {print $NF}' )
zfs set zoned=on FILESYSTEM
zfs zone /proc/${PID}/ns/user FILESYSTEM
and, on that day, it worked: the guest could see the ZFS filesystems.
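For readability, the steps above can be sketched as a small helper (the function names are made up for illustration, not part of any tool):

```shell
# Sketch of the zoning steps, wrapped in functions.
# guest_init_pid and zone_into_guest are illustrative names only.

guest_init_pid() {
  # extract the container's init PID from `lxc info` output on stdin
  awk '/PID:/ {print $NF}'
}

zone_into_guest() {
  # usage: zone_into_guest GUEST FILESYSTEM
  local pid
  pid=$(lxc info "$1" | guest_init_pid)
  zfs set zoned=on "$2"
  zfs zone "/proc/${pid}/ns/user" "$2"
}
```

Note that this attaches the filesystem to the container's current user namespace, so the association only holds while that process (and its namespace) exists.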
The idea of putting all processes that work on the ZFS data into an LXD guest is that the ZFS is encrypted, so the LXD guest is only started after the password has been entered.
But then, after a reboot, nothing worked anymore:
- the host could not mount the ZFS filesystems anymore, since they were zoned;
- the guest could not be started anymore, since the filesystems to be bind-mounted did not exist on the host (because ZFS was not mounted);
- zfs unzone could not be performed, since it requires the namespace file to unzone from, but /proc/${PID}/ns/user did not exist, since the LXD container could not be started for the reasons above.
A deadlock, a chicken-and-egg problem: the LXD container cannot be started without the ZFS filesystems mounted on the host, and the ZFS filesystems can only be mounted inside the guest.
Trying to move it into the host's own namespace with
zfs zone /proc/$$/ns/user FILESYSTEM
did not work either.
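Another (untested) idea in the same spirit would be to create a throwaway user namespace just to have a namespace file to hand to zfs unzone; I have no idea whether unzoning against an unrelated namespace is even meaningful, so this is purely a sketch:

```shell
# Untested sketch: keep a process alive in a fresh user namespace for a
# moment, so that /proc/<pid>/ns/user exists, and point zfs unzone at it.
# Whether zfs unzone accepts a namespace the filesystem was never zoned
# to is exactly the kind of thing the documentation doesn't say.
attempt_unzone() {
  local fs="$1"
  unshare --user --fork sleep 30 &
  local nspid=$!
  zfs unzone "/proc/${nspid}/ns/user" "${fs}"
  kill "${nspid}" 2>/dev/null
}
```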
For the moment I could escape the problem by just setting zfs set zoned=off, but what happens if I ever set it to zoned=on again?
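For reference, the escape amounted to this on the host (the zfs mount step is my assumption of the natural follow-up, since with zoned=off the host manages the filesystem again):

```shell
# Sketch of the escape that worked: clear the zoned flag so the host may
# manage and mount the filesystem again.
unzone_escape() {
  local fs="$1"
  zfs set zoned=off "${fs}"
  zfs mount "${fs}"
}
```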
What if I had erased the LXD guest and thus had no trace of its namespaces?
And no, reading man 7 namespaces, man setns, and man nsenter did not really help.
So could you please be a bit more talkative about what zfs zone/unzone really do, what's going on, and how to cleanly unzone without a process, i.e. how to escape from chicken-and-egg problems like that?

regards