SCTP: Reinventing Internet Communication
Traditionally, if you want to write an application that works over the Internet, you have two options for transport-layer protocols. One is the Transmission Control Protocol (TCP), which provides similar semantics to a UNIX pipe: You write a stream of bytes into one end and read them out of the other end. The other option is the User Datagram Protocol (UDP), which provides similar semantics to a low-latency version of a flock of carrier pigeons. You attach a message to each one: Most will probably arrive at the destination, but you have no control over the order.
For a lot of applications, TCP is useful. It's the protocol that you use for things like HTTP, SMTP, IMAP, XMPP, and so on, and it works well, although it's starting to look a bit dated. A lot of applications require reliable delivery but don't have strong constraints on ordering. For example, a web server needs to send an HTML document and some images, but it doesn't matter how the packets are interleaved. With TCP, you need to open multiple connections to get this behavior.
UDP isn't really useful for very much on its own. It's convenient in that it's message-oriented, rather than stream-oriented, but the lack of reliable delivery means that you generally end up implementing half of TCP on top of it whenever you use it. Almost every well-behaved application that uses UDP implements some form of congestion control, much like TCP's rate-limiting algorithm, and some also implement some form of retransmission or ordering.
In the last few years, however, we've started seeing a new layer 4 protocol widely deployed in shipping operating systems. The Stream Control Transmission Protocol (SCTP) provides a large number of features that make it more attractive than TCP or UDP for a lot of applications.
Packets and Streams
At the IP layer, everything that flows across the Internet is split up into packets. In theory, a packet may be up to 64KB; in practice, the only size that every IPv4 host is required to accept is 576 bytes, and each link along a path imposes its own maximum (its MTU). Some paths allow bigger packets; on Ethernet, for example, it's increasingly common to see jumbo frames of around 9KB allowed on a link. Bigger packets are more efficient because each packet needs a fixed-size header, so the bigger the packet, the lower the proportion of your bandwidth that is wasted on packet headers. If you send a packet that is too big for a link along the path, then it will be fragmented (or just dropped, if you're unlucky).
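The header-overhead argument is easy to see with a little arithmetic. This sketch assumes a 20-byte IPv4 header plus a 20-byte TCP header (no options); real packets may carry larger headers, but the trend is the same.

```python
# Fraction of each packet consumed by fixed-size headers: assuming a
# 20-byte IPv4 header and a 20-byte TCP header with no options.
HEADER = 20 + 20

def overhead_fraction(packet_size):
    """Return the fraction of a packet of this total size used by headers."""
    return HEADER / packet_size

# Smaller packets waste a larger share of bandwidth on headers.
for size in (576, 1500, 9000):
    print(f"{size:5} bytes: {overhead_fraction(size):.1%} overhead")
```

For a 576-byte packet, roughly 7 percent of the bandwidth goes to headers; a 9KB jumbo frame cuts that to under half a percent.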
With TCP, it's the job of the protocol stack to split the data up into packets. If you provide a chunk of data longer than a single packet, then it may keep some of it around until your next write() call, or it may send a smaller packet. You have no control over this. On the receiving end, you can read data in chunks of any size; the networking stack will provide as much or as little as it has available, or block until the amount you requested is available, with no reference to the sizes of the writes at the sending end.
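The loss of message boundaries is easy to demonstrate over loopback. In this sketch (addresses and message contents are illustrative), the sender makes two separate writes, but the receiver sees only an undifferentiated byte stream and must loop until it has read the number of bytes it expects.

```python
import socket
import threading

def tcp_stream_demo():
    """Two send calls on one side; the receiver just sees a byte stream."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))  # port 0: let the kernel pick a free port
    server.listen(1)
    addr = server.getsockname()

    def sender():
        c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        c.connect(addr)
        c.sendall(b"hello")  # two separate writes...
        c.sendall(b"world")
        c.close()

    t = threading.Thread(target=sender)
    t.start()
    conn, _ = server.accept()
    data = b""
    while len(data) < 10:        # ...but no boundaries survive: recv() may
        data += conn.recv(1024)  # return the bytes in any-sized chunks
    t.join()
    conn.close()
    server.close()
    return data
```

Whether recv() returns the ten bytes in one chunk or several depends on timing and the stack's buffering, which is exactly the point: the application cannot rely on it.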
UDP, on the other hand, expects you to send messages in packet-sized quantities. These can then be read at the other end. The order in which they arrive (or even whether they will arrive at all) is not guaranteed, but you will be able to read entire packets, rather than fragments, at the receiving end.
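The contrast with TCP shows up directly in the socket API. In this loopback sketch, each sendto() becomes one datagram and each recvfrom() returns exactly one whole datagram, never a fragment of one; note that over a real network, unlike loopback, either datagram could be dropped or reordered.

```python
import socket

def udp_message_demo():
    """Each sendto() is one datagram; each recvfrom() returns one whole datagram."""
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 0))  # port 0: let the kernel pick a free port
    addr = receiver.getsockname()

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"first", addr)   # two separate datagrams...
    sender.sendto(b"second", addr)

    receiver.settimeout(2)  # loopback delivery is effectively immediate
    # ...read back as two whole messages, boundaries intact.
    msgs = [receiver.recvfrom(1024)[0] for _ in range(2)]
    sender.close()
    receiver.close()
    return msgs
```

A recvfrom() with a buffer smaller than the datagram would truncate it rather than leave the rest for the next read, which is another way message boundaries differ from a stream.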
With SCTP, you send messages and can control the delivery semantics. Messages can be delivered either in or out of order, depending on the options provided by the sender. You can also control whether individual messages will be retransmitted if they are dropped. This is a lot more flexible than UDP, which won't retransmit anything, or TCP, which will retransmit everything. In a multiplayer game, for example, the server may send both constant state updates, which are obsolete a fraction of a second later and so aren't worth retransmitting, and new resources, which need reliable delivery.
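A minimal sketch of SCTP's message-oriented delivery, using only the standard socket API over loopback. The message contents are illustrative, and per-message options such as unordered or unreliable delivery require ancillary data or setsockopt() calls not shown here; this only demonstrates that boundaries are preserved, as with UDP, while delivery is reliable and (by default) ordered. SCTP support is an optional kernel feature on many systems, so the function returns None where the socket can't be created.

```python
import socket

def sctp_demo(messages=(b"state-update", b"new-resource")):
    """Send distinct messages over a local one-to-many SCTP socket.

    Returns the list of messages received, or None if the kernel has
    no SCTP support (it is an optional module on many systems).
    """
    try:
        server = socket.socket(socket.AF_INET, socket.SOCK_SEQPACKET,
                               socket.IPPROTO_SCTP)
    except OSError:
        return None  # no SCTP support in this kernel
    try:
        server.bind(("127.0.0.1", 0))
        server.listen(5)
        addr = server.getsockname()
        client = socket.socket(socket.AF_INET, socket.SOCK_SEQPACKET,
                               socket.IPPROTO_SCTP)
        try:
            for m in messages:
                client.sendto(m, addr)  # each send is one whole message
            server.settimeout(2)
            # Boundaries are preserved; delivery is reliable and ordered.
            return [server.recvfrom(65536)[0] for _ in messages]
        finally:
            client.close()
    except OSError:
        return None  # SCTP present but unusable in this environment
    finally:
        server.close()
```

The SOCK_SEQPACKET socket here is SCTP's one-to-many style: a single socket can carry associations with many peers, with each recvfrom() returning one complete message from one of them.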