
Kaukosohva using wireguard over LAN gives bad performance. Caused by MTU? #5

Open
7oxicshadow opened this issue Jan 22, 2021 · 6 comments

Comments

@7oxicshadow
Contributor

During my testing over LAN, if I specify my LAN adapter directly the streaming seems to work fine, but if I use WireGuard over my LAN I appear to get many dropped frames on the client, causing the screen to be grey a lot of the time.

After reading on the internet about WireGuard and dropped packets, I came across a post about WireGuard and MTU. Someone with an unstable WireGuard connection discovered that manually setting the MTU on the WireGuard interface dramatically increased stability.

I can confirm (for me at least) that setting MTU = 1500 on the WireGuard interface on both the HOST and the CLIENT fixed the issue for me, and WireGuard then behaves just the same as if I were using the real network interface over LAN.
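For reference, with wg-quick the interface MTU can be pinned via the `MTU` key in the `[Interface]` section of `wg0.conf`; a minimal sketch (keys and addresses here are placeholders, not from this setup):

```ini
# wg0.conf — set on both host and client
[Interface]
PrivateKey = <private-key>
Address = 10.0.0.1/24
MTU = 1500
```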

I have not been able to test over an internet connection, so I cannot confirm whether this is even an issue there, as I believe WAN traffic uses a lower MTU than LAN anyway, but it's worth posting here in case others are having similar issues.

@raspi
Owner

raspi commented Jan 22, 2021

This might break WireGuard over the internet: WireGuard uses UDP, you need 48 bytes for WireGuard overhead, and the internet uses an MTU of 1500. So setting the WireGuard MTU to 1500 actually means 1548 bytes on the wire. What you might be seeing is that your network gear supports jumbo frames (MTU 9000) and thus everything works fine locally, but if you switch to sending packets over the internet, stuff starts to break again.
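The arithmetic behind that, as a sketch using the 48-byte overhead figure from the comment above (the exact overhead depends on the outer IP version):

```python
# Overhead figure cited in the comment above; the exact value depends on
# the outer IP version and is an assumption here.
WG_OVERHEAD = 48
PATH_MTU = 1500  # typical internet path MTU


def wire_size(tunnel_mtu: int) -> int:
    """Size of the encapsulated packet the outer network must carry."""
    return tunnel_mtu + WG_OVERHEAD


print(wire_size(1500))             # 1548
print(wire_size(1500) > PATH_MTU)  # True: too big for a standard 1500-byte
                                   # path, but fine on a jumbo-frame LAN
```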

My view currently is that GStreamer should be tuned to send correct sized packets to eliminate packet fragmentation (queue, udpsink, rtph264pay, ..).

@7oxicshadow
Contributor Author

A quick update. Managed to test this with a friend over the internet today.

We left the wg0.conf files as default (other than keys), so we were using the WireGuard adapters as you originally intended. When we pinged each other on the WireGuard interface we had a ping of 19 ms. We both have 19 Mbps upload and 68 Mbps download.

During initial testing the service was unusable. The video would be fine for the first second after starting the receiver, but after that it would only update periodically, with a grey screen a lot of the time, making it unreliable.

We kept experimenting and, as a last resort, decided to change the MTU in sender.sh to 1320 instead of 1400. I was expecting this to make things worse, but it actually fixed everything. The stream after changing that setting was flawless, and the latency was so low that it was barely noticeable.

We went further and tried dropping the MTU to 1200 and it still worked perfectly.

I do not understand why it works. If I understand correctly, lowering the MTU should make things worse, because you run the risk of having to send multiple packets instead of a single one to achieve the same thing. Again, this could be something unique to our setup, but all that matters is that it works (very well), and I hope to give this a proper test in the not-so-distant future.
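One possible explanation, as a sketch under assumed defaults (none of these figures are confirmed for this setup): wg-quick's default interface MTU is 1420, and each RTP datagram picks up roughly 28 bytes of UDP+IPv4 header before it reaches the tunnel interface. A 1400-byte payload would then exceed the tunnel MTU and be fragmented, so losing either fragment discards the whole datagram; a 1320-byte payload fits in one tunnel packet.

```python
# Assumed figures: 1420-byte tunnel MTU (wg-quick default), 28 bytes of
# UDP + IPv4 header added to each RTP payload before it hits the tunnel.
TUNNEL_MTU = 1420
UDP_IP_HEADER = 28


def fragments(payload: int) -> int:
    """IP fragments needed to push one datagram through the tunnel."""
    packet = payload + UDP_IP_HEADER
    return -(-packet // TUNNEL_MTU)  # ceiling division


print(fragments(1400))  # 2 -> every datagram fragmented; one lost
                        #      fragment kills the whole datagram
print(fragments(1320))  # 1 -> fits in a single tunnel packet
```

If that guess is right, lowering the sender MTU below the tunnel MTU avoids per-packet fragmentation rather than causing it, which would match what you observed.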

(Note to anyone who might read this: we have not tested the usbip package, as we have written our own custom userspace driver to send and receive controller inputs over UDP, so our latency results may differ from your experience.)

@raspi
Owner

raspi commented Jan 24, 2021

Really interesting result. I too would expect things to break if the MTU is too small, as it should increase packet fragmentation and possibly WireGuard packet resends. You could additionally try higher resolutions with a lot of motion so that packets are always "full". I've been using the Quake 1 fork quakespasm with the original Q1 intro demos as a video test source, as it has a lot of moving content in 1080p. If possible, this could also be tested from different ISPs.

@raspi
Owner

raspi commented Jan 24, 2021

You can also run tracepath between your IP addresses (the actual endpoint addresses, not the internal VPN IPs) to see PMTU values and possible issues.

@raspi
Owner

raspi commented Jan 24, 2021

Also, are you running WireGuard over IPv4 or IPv6? That also affects the packet size and overhead.
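For reference, the commonly cited per-packet encapsulation figures (32 bytes of WireGuard data-message framing plus UDP, plus the outer IP header) give different headroom for the two IP versions; this sketch assumes those standard figures:

```python
# Commonly cited WireGuard encapsulation overhead: 32 bytes of WireGuard
# data-message framing (type + receiver index + counter + auth tag),
# 8 bytes of UDP, plus the outer IP header (20 for IPv4, 40 for IPv6).
WG_FRAMING, UDP = 32, 8
IP_HEADER = {"IPv4": 20, "IPv6": 40}


def max_tunnel_mtu(outer_mtu: int, ip_version: str) -> int:
    """Largest tunnel MTU that avoids fragmenting the outer packet."""
    return outer_mtu - WG_FRAMING - UDP - IP_HEADER[ip_version]


print(max_tunnel_mtu(1500, "IPv4"))  # 1440
print(max_tunnel_mtu(1500, "IPv6"))  # 1420 (wg-quick's conservative default)
```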

@7oxicshadow
Contributor Author

Thanks, will try what you suggested. I seem to remember my friend (client side) reporting that the timestamps kept stopping at his end, with a message in between. I never thought of taking a log so you could see it. Thankfully it's easy to reproduce, so I will get one next time.

Also, I am 99% sure that we are on separate ISPs.

Your idea of keeping the packets full might actually be on to something. I think fast-moving scenes fared better than scenes with, say, a black background and only the character moving.

I am going to assume IPv4, as I would not know how to enable IPv6 on WireGuard.
