linux
mduser, 2016-05-10 18:28:03

OpenVPN low speed, MTU, buffers?

Hello! Please help me figure this out.
OpenVPN: CentOS 7 server, Windows 7 client
TUN, UDP
Speed inside the tunnel is low: 100 Mbps without the tunnel, 30-40 Mbps through it.
TUN vs TAP makes no difference; I tried both. The hardware is decent, so I don't think it's the cost of encryption/lzo and the like.
I started digging, and people write that it might be an MTU issue. I don't set any MTU options in my configs, so I ran the MTU test built into OpenVPN by adding mtu-test to the server config.
Test output:
May 10 16:31:00 server openvpn[8485]: client/11.22.33.44:7777 NOTE: Beginning empirical MTU test -- results should be available in 3 to 4 minutes.
May 10 16:34:01 server openvpn[8485]: client/11.22.33.44:7777 NOTE: Empirical MTU test completed [Tried,Actual] local->remote=[1569,1457] remote->local=[1569,1457]
May 10 16:34:01 server openvpn[8485]: client/11.22.33.44:7777 NOTE: This connection is unable to accomodate a UDP packet size of 1569. Consider using --fragment or --mssfix options as a workaround.
I don't understand what I should do. What MTU should I set, and what values should I give fragment/mssfix? I've read the manual back and forth and still can't figure out what needs tweaking...
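For what it's worth, this is roughly what the directives suggested by the log look like in an OpenVPN config. The value 1400 here is only an illustrative guess sitting safely below the measured 1457, not something established in this thread, and fragment generally has to be set identically on both ends:

# server.conf and client.ovpn (sketch, illustrative value)
fragment 1400    # never send UDP datagrams larger than 1400 bytes; fragment internally instead
mssfix 1400      # clamp the TCP MSS of flows inside the tunnel to fit the same limit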
I checked the MTU on all interfaces on both the server and the client: it is 1500 everywhere. I also checked ping from the client to the server: a payload of up to 1472 bytes passes unfragmented, anything larger gets fragmented, which, as far as I understand, just confirms that my (client) MTU is 1500.
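For reference, this is the standard way to run that check from a Windows client (the address is the placeholder from the logs above); the arithmetic is 1472 bytes of ICMP payload + 8 bytes of ICMP header + 20 bytes of IP header = 1500:

:: Windows cmd: -f sets the Don't Fragment bit, -l the ICMP payload size
:: 1472 + 8 + 20 = 1500 bytes on the wire, so this one passes
ping -f -l 1472 11.22.33.44
:: 1501 bytes fails with "Packet needs to be fragmented but DF set."
ping -f -l 1473 11.22.33.44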
So do I actually have an MTU problem or not? Please help, I can't figure it out.
Also, while searching for information about low OpenVPN tunnel speed, I found mentions of socket buffers, which are said to be very small by default.
Looking at the Windows 7 client log:

Tue May 10 17:28:10 2016 OpenVPN 2.3.10 x86_64-w64-mingw32 [SSL (OpenSSL)] [LZO] [PKCS11] [IPv6] built on Mar 10 2016
Tue May 10 17:28:10 2016 Windows version 6.1 (Windows 7)
Tue May 10 17:28:10 2016 library versions: OpenSSL 1.0.1s 1 Mar 2016, LZO 2.09
Tue May 10 17:28:11 2016 Socket Buffers: R=[8192->8192] S=[8192->8192]

And the Linux server log:
May 10 16:30:43 server systemd: Starting OpenVPN Robust And Highly Flexible Tunneling Application On server/tun...
May 10 16:30:43 server openvpn[8484]: OpenVPN 2.3.10 x86_64-redhat-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [PKCS11] [MH] [IPv6] built on Jan 4 2016
May 10 16:30:43 server openvpn[8484]: library versions: OpenSSL 1.0.1e-fips 11 Feb 2013, LZO 2.06
May 10 16:30:43 server openvpn[8485]: Socket Buffers: R=[212992->212992] S=[212992->212992]

This raises questions:
1) The buffers are different; is that normal?
2) Are they too small? The 8192 on Windows worries me.
I read about the buffers here.
In the server config I wrote:
sndbuf 524288
rcvbuf 524288
push "sndbuf 524288"
push "rcvbuf 524288"
On the server it became: Socket Buffers: R=[212992->425984] S=[212992->425984]
On the client it became: Socket Buffers: R=[8192->524288] S=[8192->524288]
The speed increased significantly, up to the values I expected, ~80 Mbps.
What I don't get: the client sets the requested 524288, but the server ends up with 425984. Is that normal? Should they be the same, or can they stay as they are?
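For reference, the server-side 425984 follows from how Linux applies SO_SNDBUF/SO_RCVBUF: the kernel caps the requested size at wmem_max/rmem_max and then doubles the result to account for its own bookkeeping overhead, while Windows applies the requested value as-is, hence the asymmetry:

cat /proc/sys/net/core/rmem_max
212992
echo $((212992 * 2))
425984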
So in the end it's unclear whether my low-speed problem was in the buffers or in the MTU.
Do I still need to change the MTU if the speed became good once the buffers were increased?
Thanks for the replies.


4 answer(s)
ValdikSS, 2016-05-16
@mduser

In /etc/sysctl.d/network.conf add:

net.core.rmem_max = 6291456
net.core.wmem_max = 4194304
net.core.wmem_default = 212992
net.core.rmem_default = 212992

And run sudo sysctl -p /etc/sysctl.d/network.conf
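A quick way to confirm the new limits took effect (the expected outputs follow from the values above):

sysctl net.core.rmem_max
net.core.rmem_max = 6291456
sysctl net.core.wmem_max
net.core.wmem_max = 4194304

Then restart OpenVPN: with rmem_max raised above the requested 524288, the sndbuf/rcvbuf from the config will no longer be clamped.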

Alexander Karabanov, 2016-05-11
@karabanov

The speed increased significantly, up to the values I expected, ~80 Mbps

Right, so that's success. The rest is the cost of encryption and other overhead...

res2001, 2016-05-11
@res2001

Evidently MTU had nothing to do with it. MTU problems manifest themselves differently, typically as connections that hang on large packets rather than as uniformly low speed.

mduser, 2016-05-12
@mduser

It is also strange that with sndbuf 524288 and rcvbuf 524288 set in the Linux config, it sets
Socket Buffers: R=[212992->425984] S=[212992->425984]
even though
cat /proc/sys/net/core/rmem_max
212992
cat /proc/sys/net/core/wmem_max
212992
So shouldn't 212992 be the ceiling, with nothing above it applied at all? Or am I looking in the wrong place?

