
How 1500 bytes became the MTU of the internet


10BASE ethernet card (CC BY-SA 4.0, Dmitry Nosachev)

Ethernet is everywhere; tens of thousands of hardware vendors speak and implement it. However, almost every ethernet link has one number in common, the MTU:

  $ ip l
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
      link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff

The MTU (Maximum Transmission Unit) states how big a single packet can be. Generally speaking, when you are talking to devices on your own LAN the MTU will be around 1500 bytes, and the internet runs almost universally on 1500 as well. However, this does not mean that these link layer technologies can’t transmit bigger packets.

For example, 802.11 (better known as WiFi) has a MTU of 2304 bytes, or if your network is using FDDI then you have a MTU around 4352 bytes. Ethernet itself has the concept of “jumbo frames”, where the MTU can be set up to 9000 bytes (on supporting NICs, switches and routers).
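On hardware that supports it, enabling jumbo frames on Linux is just a matter of raising the interface MTU. A minimal sketch (the interface name enp5s0 is taken from the listing above; every NIC, switch, and router on the path must be configured to match, or the larger frames will simply be dropped):

  # requires a NIC, driver, and switch path that all support jumbo frames
  $ sudo ip link set dev enp5s0 mtu 9000
  $ ip link show dev enp5s0 | grep -o 'mtu [0-9]*'
  mtu 9000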

However, almost none of this matters on the internet. Since the backbone of the internet is now mostly made up of ethernet links, the de facto maximum size of a packet is now unofficially set to 1500 bytes, to avoid packets being fragmented along the path.

On the face of it, 1500 is a weird number; we would normally expect a lot of constants in computing to be based around mathematical constants, like powers of 2. 1500, however, fits none of those.

So where did 1500 come from, and why are we still using it?

The magic number

Ethernet’s first major break into the world came in the form of 10BASE-2 (cheapernet) and 10BASE-5 (thicknet), the numbers indicating roughly how many hundred meters a single network segment could span.

Since there were many competing protocols at the time, and hardware limits existed, the original creator notes in an email that packet buffer memory requirements had some play in the magic 1500 number. (thanks to @yeled for finding this)

“In retrospect, a longer maximum might have been better, but if it increased the cost of NICs during the early days, it may have prevented the widespread acceptance of Ethernet, so I’m not really concerned.”

However, that is not the whole story. The “Ethernet: Distributed Packet Switching for Local Computer Networks” paper from 1976 is an early note of the efficiency cost analysis of larger packets on a network. This was especially important to ethernet at the time, since ethernet networks would either be sharing the same coax cable between all systems, or there would be ethernet hubs that would only allow one packet at a time to be transmitted to all members of the ethernet segment.

A number had to be picked such that transmission latency on these shared (sometimes busy) segments would not be too high, but also that packet header overhead would not be too great (see some of the tables in the paper linked above).

It would seem that the engineers at the time picked 1500 bytes, or around 12000 bits, as the best “safe” value.
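We can sketch that tradeoff with some quick arithmetic. Assuming classic 10 Mbit/s ethernet framing (14-byte header, 4-byte FCS, 8-byte preamble, and a 12-byte inter-frame gap, so 38 bytes of wire overhead per frame):

  $ awk 'BEGIN { printf "wire efficiency: %.1f%%\n", 1500/(1500+38)*100 }'
  wire efficiency: 97.5%
  $ awk 'BEGIN { printf "time on the wire at 10 Mbit/s: %.2f ms\n", (1500+38)*8/10000000*1000 }'
  time on the wire at 10 Mbit/s: 1.23 ms

At 1500 bytes, less than 3% of the wire is spent on framing, while any host waiting for the shared segment waits at most about a millisecond per frame; doubling the maximum frame size would halve the overhead but also double that worst-case wait.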

Since then various other transmission systems have come and gone, but the lowest MTU value of them all has still been ethernet, at 1500 bytes. Going bigger than the lowest MTU on a network path will either result in IP fragmentation, or the need to do path MTU detection. Both of these have their own sets of problems. And at times, large OS vendors have even dropped the default MTU lower still.
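You can probe the path MTU yourself on Linux with iputils ping (example.com is just a placeholder host here): -M do sets the Don’t Fragment bit, and -s 1472 makes the packet exactly 1500 bytes once the 8-byte ICMP and 20-byte IPv4 headers are added:

  $ ping -c 1 -M do -s 1472 example.com   # 1472 + 8 + 20 = 1500 bytes: fits
  $ ping -c 1 -M do -s 1473 example.com   # 1501 bytes: fails with
                                          # "Message too long" on a 1500 MTU link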

The efficiency factor

So now we know that the internet’s MTU is capped at 1500 mostly due to legacy latency numbers and hardware limits. How bad is this for the efficiency of the internet?

If we look at data from a major internet traffic exchange point (AMS-IX), we see that a large share of the packets transiting the exchange are the maximum size. We can also see the total traffic of the exchange:

(graph: AMS-IX frame size distribution and total traffic)

If you combine these two graphs, you get something that roughly looks like this, an estimation of how much traffic falls into each packet size bucket:

(graph: estimated AMS-IX traffic by packet size)

Or if we look at just the traffic that all of those ethernet preambles and headers cause, we get the same graph but with different scales:

(graph: AMS-IX traffic by packet size, header overhead only)

This shows a great deal of bandwidth being spent on headers for the largest packet class. Since the peak traffic shows the biggest packet bucket reading at around 246 GBit/s of overhead, we can assume that if we had all adopted jumbo frames while we had the chance to, this overhead would only be around 41 GBit/s.
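That estimate follows directly from the frame counts: a 9000-byte jumbo frame carries six times the payload of a 1500-byte one, so moving the same traffic would need roughly a sixth of the frames, and thus a sixth of the per-frame header overhead:

  $ awk 'BEGIN { print 246 * 1500/9000 }'
  41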

But I think at this point, the ship has sailed to do this on the wider internet. While some internet transport carriers do operate on larger MTUs, the vast majority don’t, and changing the internet’s mind collectively has been shown time and time again to be prohibitively difficult.

If you have more context on the history of 1500 bytes, please email it in to ethernet@benjojo.co.uk. Sadly the manuals, mailing list posts, and other context to this are disappearing fast without a trace.

If you liked this kind of stuff, you may like the rest of the blog, even if it is generally more geared towards the modern day abuses of standards :). If you want to stay up to date with what I do next, you can use my blog’s RSS Feed or you can follow me on twitter.

Until next time!
