Add support for IPv6 Virtual DNS (#462) #592
Conversation
network/qubes-setup-dnat-to-ns
Outdated
```python
if dest is None or (vm_nameserver == dest and len(qubesdb_dns) == 0):
    rules += [
        f"ip{ip46} daddr {vm_nameserver} tcp dport 53 reject with icmp{ip46} type host-unreachable",
        f"ip{ip46} daddr {vm_nameserver} udp dport 53 reject with icmp{ip46} type host-unreachable",
    ]
```
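For illustration, with `ip46` empty (the IPv4 case) and `vm_nameserver` set to the usual 10.139.1.1 virtual DNS address, the f-strings above render to:

```
ip daddr 10.139.1.1 tcp dport 53 reject with icmp type host-unreachable
ip daddr 10.139.1.1 udp dport 53 reject with icmp type host-unreachable
```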
This logic is new for IPv4; it is carried over from #462. (https://github.com/QubesOS/qubes-core-agent-linux/pull/462/files#r1487256297)
The previous version checked explicitly whether `/qubes-ip` / `/qubes-netvm-primary-dns6` were defined, rather than checking whether any DNS servers are defined for the address family, as is done here.
It looks like `len(qubesdb_dns) == 0` is always false by this point (due to being inside the outer `else`), so I'm not confident the condition is correct here.
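For reference, one way to check which of these QubesDB entries a given qube actually has (a hedged sketch; the key names are taken from this thread and the commit messages below, and the output depends on the qube's configuration):

```sh
# Inspect the QubesDB keys discussed above; each prints the value
# if the key exists, or fails if it doesn't.
qubesdb-read /qubes-ip                  # the qube's IPv4 address
qubesdb-read /qubes-primary-dns         # IPv4 DNS, if defined
qubesdb-read /qubes-netvm-primary-dns6  # IPv6 DNS, if defined
```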
It's not clear to me under exactly what conditions `host-unreachable` should be the response. Reverted this whole part until there is clarity.
See the discussion on the original PR: #462
The issue is if you have only IPv4 or only IPv6 DNS - then you wouldn't have an address of the other kind. And the reject rule is there to avoid a long timeout when trying the non-existent DNS server, so clients immediately fall back to the other one.
If you have a way to test it, try with:
- only IPv4 DNS present
- only IPv6 DNS present
- both present
In all the cases, name resolution should keep working instantly. The tests I added in the core-admin PR try to exercise those cases, but I'm not 100% sure they will fail on slow fallback...
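One hypothetical way to check the fast-fallback behavior from a downstream qube (10.139.1.1 is the usual IPv4 virtual DNS address; the IPv6 address below is purely illustrative, since the actual one depends on this PR's addressing):

```sh
# With the reject rules in place, a query to the missing family's
# virtual DNS should fail instantly instead of waiting out the timeout.
time dig +tries=1 +timeout=5 example.com @10.139.1.1
time dig +tries=1 +timeout=5 example.com @fd09:24ef:4179::1  # illustrative

# The normal resolution path should stay fast in all three setups.
time getent hosts example.com
```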
Ah, got it, thanks for the pointer.
Makes me think maybe this part could also be broken out separately... Presumably it wouldn't make much difference by itself for users with IPv4-only networking, and having these as separate commits on main should help with any debugging.
I probably won't be in a good place to test properly myself until after a couple of weeks. For completeness, I guess "neither present" should also be explicitly tested (against leaks).
Force-pushed from 5c98be5 to 1efe22a.
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@           Coverage Diff           @@
##             main     #592   +/-   ##
=======================================
  Coverage   71.10%   71.10%
=======================================
  Files           3        3
  Lines         481      481
=======================================
  Hits          342      342
  Misses        139      139
```
Now that the smaller PRs are in (the last one is still going through CI), this will need a rebase (dropping those already merged commits) to resolve conflicts.
Force-pushed from e145853 to 982add7.
Rebased. Ended up squashing all the existing commits on this branch. (Let me know if you prefer to retain history during review in situations like this.)
Force-pushed from a89ef8f to 77e4542.
openQA run is still in progress, but I already see some failures:
This job doesn't have IPv6 enabled.
OpenQA test summary

Complete test suite and dependencies: https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.3&build=2025090403-4.3&flavor=pull-requests

New failures, excluding unstable (compared to https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.3&build=2025081011-4.3&flavor=update): 36 failed tests
Fixed failures (compared to https://openqa.qubes-os.org/tests/149225#dependencies): 73 fixed
Performance tests: 15 performance degradations, 57 remaining performance tests
Not all of the failures are caused by this PR (or the core-admin one), but it seems most of them are.
Force-pushed from 77e4542 to 7658b78.
Fixed in 7658b78. I guess I would expect this one to have been caught earlier, though. Also added a little bit of typings for this here.
This one is still the case.
Force-pushed from 7658b78 to 33f80a9.
- Merge '1cho1ce/add-ipv6-dnat-to-ns' into master - QubesOS#462
- fix: properly assign primary/secondary DNS
- fix: check ipv4 dns presence by qdb /qubes-primary-dns instead of /qubes-ip
- qubes-setup-dnat-to-ns: unify ipv4/ipv6 firewall rule generation

Part of QubesOS/qubes-issues#10038
Force-pushed from 33f80a9 to c0c7d42.
@3nprob Are you still interested in getting this through, or would you mind if I pick up where you left off? Getting IPv6 DNS working is desirable for Whonix (which I help develop), so if you're busy elsewhere, I'd like to continue work on these PRs. I'm happy to just provide some extra testing if you want to keep working on this though.
It would be super sweet if you'd like to collaborate on or even pick this up! I'll see if I can give you push rights to this branch so we can continue on the same PR. Unfortunately, I don't have a great setup to test this properly at the moment. Noting from above that it doesn't seem to be in a properly working state currently.
Thanks! I think I can test this since my network supports both IPv4 and IPv6 and I can disable IPv6 for testing. I don't need push rights, I'll just open new PRs, but thank you :) And thanks for everything you and 1cho1ce did to get things this far!
Unfortunately, as promising as this situation looked, it looks like it fundamentally won't work. Some routers (including mine) advertise a link-local IPv6 address as the DNS server to devices on the LAN, so DNAT alone can't reliably forward a qube's queries to the router. We may have to resort to something along the lines of socat to get this to work right.
Thanks for digging! So not that easy, eh..
So the request comes from (or via) a link-local address...
Just a thought: if going all the way to socat, it makes me think we might as well use more purpose-made software... In the distant past, there was dnsmasq running in the netvm, but this is no longer considered an option (at least not with ports exposed) due to its attack surface. I wonder if it could be an option to run systemd-resolved for this.
@3nprob From some discussion in the Qubes OS Matrix room, currently systemd-resolved is intentionally being bypassed as much as possible, because upstream doesn't recommend exposing it to the LAN, and so it's unclear whether it can safely handle potentially malicious clients. A different DNS resolver that is designed to work in this way might be an option, but then you have to run a DNS resolver in sys-net, which is painful because sys-net is already severely resource-constrained by design. IIUC, socat would allow us to simply DNAT the virtual IPv6 DNS address to sys-net's internal network adapter's IP address, then we can forward it to the real DNS server thereafter. It's crude, but as long as all socat does is pass bytes from point A to point B and then back again, it should work and not introduce any further attack surface. (Then again, I don't know what socat's attack surface looks like; I'd assume it's pretty close to zero, but maybe not?) @marmarek Since it doesn't look like DNAT alone is going to work, what are your thoughts on adding socat to the solution?
Alright, this is slightly horrible, but after some fighting I have things kinda working... In order to use socat to forward the DNS queries, we have to have an IP address for socat to listen on. The loopback address would be near-ideal if it weren't for the fact that the IPv6 protocol itself intentionally prevents DNAT to loopback. We can't use a link-local address for the same reason we can't trivially forward packets to the router in all situations, so we have to use a "real" IP.

We can either use an IP on one of sys-net's external listening interfaces (which sounds like a distinctly bad idea if you ask me, since then anyone on the LAN can use the Qubes machine as a DNS proxy), or we can use an internal virtual interface (which sounds much better). The problem with using an internal virtual interface is that those interfaces don't even exist unless sys-firewall (or something else that uses sys-net as its NetVM) is powered on, and if there are multiple VMs using sys-net as their NetVM at the same time, there are going to be multiple internal interfaces, only one of which we can choose as the DNAT destination.

For the initial proof of concept, I didn't bother trying to write code to handle this and instead just manually punched in the nftables rules to DNAT anything addressed to one of the virtual IPv6 DNS addresses to the IP on the internal-facing interface. The internal-facing interface is part of interface group 2, so the default rules for that group needed adjusting to let the redirected queries through. At this point I was able to fire up socat listening on that internal IP and forwarding to the real DNS server. Putting all the above together:
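Since the exact commands weren't captured above, here is a hedged reconstruction of that proof of concept. The table and chain names, addresses, and interface are all illustrative, not necessarily what Qubes or the original experiment used, and the sketch assumes a nat-hooked `prerouting` chain and a filter-hooked `input` chain already exist in the table:

```sh
# 1. DNAT queries addressed to the virtual IPv6 DNS address (illustrative)
#    to an IP that sys-net itself holds on its internal, qube-facing
#    interface, so a local listener can receive them.
nft add rule ip6 qubes-dns prerouting \
    ip6 daddr fd09:24ef:4179::1 udp dport 53 \
    dnat to fd09:24ef:4179::a89:1

# 2. The qube-facing interfaces are in interface group 2; the default
#    input policy for that group has to admit the redirected queries.
nft add rule ip6 qubes-dns input iifgroup 2 \
    ip6 daddr fd09:24ef:4179::a89:1 udp dport 53 accept

# 3. Relay from that internal IP to the real upstream resolver
#    (possibly a link-local router address, hence the %eth0 scope).
socat 'UDP6-RECVFROM:53,bind=[fd09:24ef:4179::a89:1],fork' \
    'UDP6-SENDTO:[fe80::1%eth0]:53'
```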
As ugly as this solution seems to be, it does work, so it looks like this will be possible!
Using socat enables one more trick - it doesn't care about the protocol on the listening and sending side, so you can translate IPv4 to IPv6 and back. So, technically we could have DNS1 and DNS2 addresses in both v4 and v6 flavors, and they would translate to upstream DNS1 and DNS2 regardless of which protocol those use. On one hand, it's tempting, since it would avoid every qube needing to try IPv6 first just to learn it isn't available at the time (when you're connected to an IPv4-only network). But on the other hand, it feels like a lot could go wrong... I used something similar before, to be able to use an IPv6-only network with qubes - simply socat listening on 10.139.1.1 and forwarding to whatever IPv6 DNS was there. But I don't have that script anymore, so I can't check what I did about the listening address... I guess I went with a dummy interface for it.
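A minimal sketch of that cross-protocol relay, assuming the listening address exists locally and an illustrative IPv6-only upstream:

```sh
# Relay DNS from the IPv4 virtual DNS address to an IPv6-only upstream.
# UDP: answer each datagram individually; TCP: proxy whole connections.
socat UDP4-RECVFROM:53,bind=10.139.1.1,fork 'UDP6-SENDTO:[2001:db8::53]:53'
socat TCP4-LISTEN:53,bind=10.139.1.1,fork,reuseaddr 'TCP6:[2001:db8::53]:53'
```

Binding to 10.139.1.1 only works if that address is actually assigned locally, which is where the dummy interface guessed above would come in.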
If we want to do that, maybe it would be easier to forget all of the IPv6 DNAT stuff, make internal DNS always go over IPv4, and then use socat to forward those DNS requests to an IPv6 server if that is the only server available? That could come with substantially simpler code, and we wouldn't need a dummy interface, since we can just do what we're already doing with IPv4 DNAT. Granted, applications that refuse to even try using IPv4 DNS for whatever reason wouldn't work with that, but I don't know of any such applications, and in a pinch one could use a second socat instance to expose an IPv6-facing listener as well.
While adding this feature, it would be nice to already consider the case where there is only IPv6 traffic, just because it's rather awkward to need IPv4 solely for (internal) DNS when everything else is on IPv6 already. On the other hand, there's still a very long way to an IPv6-only internet, so maybe it doesn't really matter yet...
Fixes QubesOS/qubes-issues#10038