Now I'm wondering if you could embed an out-of-band protocol in all your TCP traffic on the internet at large by misusing those mostly ignored, but accessible, bits. At worst, just to eke out ~16 more bits per packet.
I would be very surprised if they made it to the other side of the world unmangled when the URG flag is unset. I would be even more surprised if some Cisco hardware wasn't already reusing those bits for its own purposes.
I find the writing here confusing. This is what I understand; please correct me if I'm wrong. The URG field is currently used to smuggle urgent data in its value, whereas the spec says it should be used to mark the end (and beginning) of an "urgent region" of data in the stream?
Let me try a more concise explanation. There are two separate fields in a TCP packet header: the 1-bit URG flag, and the 16-bit Urgent Pointer.
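To make the two fields concrete, here's a sketch that builds a 20-byte TCP header by hand with Python's `struct` module and pulls those two fields back out. The port/sequence values are made up for illustration; the offsets (flags at byte 13, Urgent Pointer at bytes 18–19) match the standard header layout.

```python
import struct

URG = 0x20  # the URG flag is bit 5 of the flags byte

# A hand-built 20-byte TCP header (hypothetical values), network byte order:
header = struct.pack(
    "!HHIIBBHHH",
    1234,     # source port (made up)
    80,       # destination port (made up)
    0,        # sequence number
    0,        # acknowledgment number
    5 << 4,   # data offset (5 x 32-bit words) in the high nibble
    URG,      # flags byte with only URG set
    65535,    # window size
    0,        # checksum (left zero in this sketch)
    7,        # the 16-bit Urgent Pointer
)

flags = header[13]
urgent_pointer = struct.unpack("!H", header[18:20])[0]
print(bool(flags & URG), urgent_pointer)  # True 7
```

So the flag costs one bit, and the pointer is a full 16-bit field that routers and middleboxes are, in principle, supposed to ignore when URG is unset.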
The spec says that when URG is set, the Urgent Pointer has meaning: it "puts the user into urgent mode", and the pointer indexes a position somewhere later in the stream. When the user's reads reach that index, the implementation signals the user to return to normal mode.
Because the wording in the original spec was confusing, everyone interpreted this as "there's a special byte at that index which is considered 'urgent data'". Popular implementations removed it from the stream entirely and placed it in a special out-of-band bucket.
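You can see that "special out-of-band bucket" behavior directly with the BSD sockets API: the urgent byte is read with the `MSG_OOB` flag and never shows up in the normal stream. A minimal sketch over loopback (using Python's `socket` module; ports and payloads are made up):

```python
import select
import socket

# Set up a real TCP connection over loopback.
lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
lsock.bind(("127.0.0.1", 0))
lsock.listen(1)
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(lsock.getsockname())
server, _ = lsock.accept()

client.sendall(b"hello")          # normal in-band data
client.send(b"!", socket.MSG_OOB)  # one byte sent as "urgent" data

# Pending urgent data shows up as an "exceptional condition" on the socket.
select.select([], [], [server], 5)

oob = server.recv(1, socket.MSG_OOB)  # the urgent byte, out of band
data = b""
while len(data) < 5:                   # the in-band stream, urgent byte removed
    data += server.recv(100)
print(oob, data)  # b'!' b'hello'
```

Note the asymmetry with the spec's intent: the sender marked a point in the stream, but the receiver gets exactly one byte, yanked out of band, which is why only a single urgent byte is usable in practice.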
Maybe I’m misinterpreting that part of the original article, then. (I read it as implying that everything is hated, but UTF-8 and TCP are the least-hated. On that reading, someone quoting UTF-8 from that list of good things, and implying it would only be put there by a non-Python programmer, would be calling attention to some unpleasant part of Python’s UTF-8 handling. Phew! That reading does agree with the rest of the article with regard to TCP, and with the fact that UTF-8 is a wonderful encoding.)