Why MTU exactly 1500?
Hello everyone, I have a question: why is the MTU for Ethernet exactly 1500? Did it just happen historically, or is it the optimal value? Either way, how was it determined?
Because the IEEE 802.3 standard says so. In fact, it is simple: this maximum datagram size is small enough that its delivery time is not critical (especially when a frame is lost and has to be resent), yet large enough to keep the link-level overhead low.
Suppose we want to send a 1500-byte IP datagram. At layer 3 of the OSI model that is the whole packet; going down to layer 2, we pack the datagram into an Ethernet frame, which adds another 26 bytes (preamble, SFD, MAC addresses, Length/Type and FCS), for a total of 1526 bytes. There is also a mandatory 12-byte inter-frame gap so the receiver can tell where one frame ends and the next begins. So for every 1500-byte datagram we actually occupy 1538 bytes on the wire.
Let's not get distracted by Fast Ethernet and the like for now, and take the speed of our Ethernet as 10 Mbps. Now let's calculate how many frames can be transmitted per second:
10,000,000 bps / (1538 bytes × 8 bits) = 812.74 frames/second.
Having the number of frames per second, we can estimate how much useful payload the channel actually carries:
812.74 × 1500 bytes × 8 bits ≈ 9,752,926 bps, or ~9.75 Mbps, which is ~97.5% channel utilization. Logically, if you increase the MTU the payload share grows further, but then you also have to keep packet loss (and the cost of retransmitting a large frame) in mind. In that respect small packets are better, so in the end we arrive at a compromise.
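To make the arithmetic easy to replay, here is a minimal Python sketch of the same calculation, assuming the 26 bytes of framing overhead and the 12-byte inter-frame gap quoted above (the constant names are mine, purely for illustration):

```python
# Sketch of the throughput arithmetic above (constants assumed from the answer, not read from any standard).
LINE_RATE_BPS = 10_000_000   # classic 10 Mbit/s Ethernet
MTU = 1500                   # IP datagram carried as frame payload
FRAME_OVERHEAD = 26          # preamble 7 + SFD 1 + two MACs 12 + Length/Type 2 + FCS 4
INTERFRAME_GAP = 12          # mandatory gap between frames, in byte times

bytes_on_wire = MTU + FRAME_OVERHEAD + INTERFRAME_GAP      # 1538
frames_per_second = LINE_RATE_BPS / (bytes_on_wire * 8)    # ≈ 812.74
payload_bps = frames_per_second * MTU * 8                  # ≈ 9,752,926
efficiency = payload_bps / LINE_RATE_BPS                   # ≈ 0.975

print(f"{frames_per_second:.2f} frames/s, {payload_bps:,.0f} bps payload, {efficiency:.1%} efficiency")
```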
Of course, a lot has changed since then, but the essence is the same: we choose the packet size as a trade-off between overhead and the probability of loss.
I recommend reading: en.wikipedia.org/wiki/Jumbo_frame
IMHO, it was determined by trial and error. Google "why mtu 1500" for more details, e.g.:
But where those notorious 1500 bytes came from is a trickier question. I found the following explanation: there were several reasons for introducing an upper limit on frame size:
Transmission delay - the larger the frame, the longer the transmission takes. For early networks, where the collision domain was not limited to a port and all stations had to wait for the transmission to complete, this was a serious problem.
The larger the frame, the more likely it is that the frame will be corrupted in transit, necessitating a retransmission, and all devices in the collision domain will have to wait again.
Limitations imposed by the memory used for interface buffers: at that time (1979), larger buffers significantly increased the cost of an interface.
The limitation introduced by the Length/Type field: the standard fixes that values of 1536 (0x0600) and above indicate an EtherType, values up to 0x05DC (1500) are a payload length, and the range 0x05DD-0x05FF is left undefined. (I do suspect this is more a consequence than a premise, but it appears to be information from the developers of the 802.3 standard.)
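As a small illustration of that last point, here is a hypothetical Python sketch of how a receiver could interpret the Length/Type field under the rule described above (the function name is made up for the example):

```python
# Illustrative sketch (not a real parser): how the 802.3 Length/Type field is interpreted.
def classify_length_type(value: int) -> str:
    if value <= 0x05DC:        # 0..1500: the field is a length, hence MTU <= 1500
        return f"length field: {value} payload bytes"
    elif value >= 0x0600:      # 1536 and above: the field is an EtherType
        return f"EtherType: 0x{value:04X}"
    else:                      # 1501..1535 (0x05DD..0x05FF): left undefined by the standard
        return "undefined range"

print(classify_length_type(0x05DC))  # length field: 1500 payload bytes
print(classify_length_type(0x0800))  # EtherType: 0x0800 (IPv4)
```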
Consider 10-megabit Ethernet. The actual size of a frame with headers and payload is 1526 bytes. Adding the 12 bytes of inter-frame gap, we get 1538 bytes.
10,000,000 bits / (1538 bytes × 8) ≈ 812.74 frames per second
812.74 frames per second × 1500 bytes of payload × 8 bits ≈ 9,752,925 bps
9,752,925 bps is ~97.5% bandwidth efficiency on 10M Ethernet.
By reducing the MTU, we increase the number of frames per second and thereby reduce bandwidth efficiency, because a larger share of the channel goes to per-frame headers.
By increasing the MTU, we reduce the number of frames per second and improve bandwidth efficiency.
Why stop at 97.5% efficiency? Because that is the figure usually quoted in English-language sources and described as "very high effectiveness". Perhaps 97.5% is even enshrined in some US classification standard.
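For the curious, here is a small Python sketch, under the same assumed 26 + 12 bytes of per-frame overhead, showing how payload efficiency changes with the MTU (a small 576-byte datagram, the standard 1500 bytes, and a 9000-byte jumbo frame):

```python
# Sketch of how payload efficiency changes with MTU (overhead and gap assumed as in the calculation above).
FRAME_OVERHEAD = 26   # preamble + SFD + MAC addresses + Length/Type + FCS
INTERFRAME_GAP = 12   # inter-frame gap in byte times

def efficiency(mtu: int) -> float:
    # Share of the bytes on the wire that is actual payload.
    return mtu / (mtu + FRAME_OVERHEAD + INTERFRAME_GAP)

for mtu in (576, 1500, 9000):
    print(f"MTU {mtu:5d}: {efficiency(mtu):.1%}")
# MTU   576: 93.8%
# MTU  1500: 97.5%
# MTU  9000: 99.6%
```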