
Datagrams and Multicasting (Part 2)

This conclusion of a two-part series on datagrams describes how to take the strengths of TCP and integrate them into UDP to create a new, reliable datagram protocol.

The Internet offers the programmer several foundational message types to fit various programming requirements. The highest level, streaming (Transmission Control Protocol, or TCP), guarantees ordered, reliable delivery of data from the source to the destination program. The User Datagram Protocol (UDP), or simply datagrams, offers fast messaging. The problem with each protocol is the flip side of its strength: TCP is slower than UDP, and UDP is unreliable.

NOTE

Internet documents called Request For Comments (RFCs) define the Reliable Datagram Protocol (RDP). This article does not attempt to comply with that definition.

Beefing Up the Protocol

In its current state, UDP may not meet the demands of many applications being created for peer-to-peer (P2P) collaboration. But TCP can be a little heavy-handed for the same applications. Datagrams have some limitations, and before using datagrams to send and receive messages, these limitations need to be examined and prioritized:

  • Reliability. The data may not even arrive at the destination. This is more of a network limitation, however; the network may drop packets because of connection or router problems.

  • Integrity. The data that your program gets may be corrupted. The network transmission can garble the message.

  • Order. The sender may transmit the messages in the right order, but the receiver may get them in a different sequence. Because the message is discrete and independent, the network may use different paths between the source and destination, and the operating system makes no attempts to reorder the messages it gets.

  • Duplication. The network can create and deliver two or more copies of the same message.
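Integrity and duplication are the limitations most directly addressed by adding a small header in front of each UDP payload. As a minimal sketch (the header layout and names here are illustrative assumptions, not part of any standard), a sequence number lets the receiver spot duplicates and reordering, and a classic Internet-style 16-bit ones'-complement checksum lets it detect a garbled payload:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical header a reliable-datagram layer might prepend to each
 * UDP payload. The sequence number addresses ordering and duplication;
 * the checksum addresses integrity. */
struct rdp_header {
    uint32_t seq;      /* sender-assigned sequence number */
    uint16_t checksum; /* ones'-complement checksum of the payload */
};

/* Internet-style 16-bit ones'-complement checksum over len bytes. */
uint16_t rdp_checksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;
    while (len > 1) {                  /* add the data as 16-bit words */
        sum += ((uint32_t)data[0] << 8) | data[1];
        data += 2;
        len -= 2;
    }
    if (len == 1)                      /* pad an odd trailing byte */
        sum += (uint32_t)data[0] << 8;
    while (sum >> 16)                  /* fold carries back into 16 bits */
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)~sum;
}
```

The receiver recomputes the checksum over the arriving payload and discards the message on a mismatch; the sequence number is examined separately, as discussed below.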

Most of these limitations are not a problem if the program doesn't depend on them. For example, in a program that sends non-critical status information to a peer host, some lost status messages won't cause anything to fail. If you want more out of the protocol than the generic "I'm alive" watchdog, however, you may find these limitations a real problem.

If you fixed all these problems, you basically would end up with TCP, so why reinvent the wheel—especially if the wheel is good enough? The idea is to fix those limitations that cause the most pain. The others you can leave as "salient features" of the new protocol. For example, discrete messaging appears to be closer to a feature than a flaw. Getting messages in a different order seems natural to those who send mail through the post office. But the other limitations are more like defects—who wants to send a message, never expecting it to be received?
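A duplicate filter is one place where this pick-your-battles approach shows up concretely: the receiver can drop repeated messages while still tolerating out-of-order arrival, leaving reordering as a "salient feature." The sketch below (a hypothetical design, not from any RFC) keeps a 32-message sliding-window bitmap keyed to the highest sequence number seen:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical duplicate filter: bit i of window is set when message
 * (max_seq - i) has already been delivered. Reordered messages inside
 * the window are still accepted; duplicates are dropped. */
struct dup_filter {
    uint32_t max_seq;  /* highest sequence number accepted so far */
    uint32_t window;   /* bitmap of recently seen sequence numbers */
};

/* Returns true if seq is new (deliver it), false if it is a
 * duplicate or too old to track. */
bool dup_filter_accept(struct dup_filter *f, uint32_t seq)
{
    if (seq > f->max_seq) {                 /* newest message so far */
        uint32_t shift = seq - f->max_seq;
        f->window = (shift >= 32) ? 1 : ((f->window << shift) | 1);
        f->max_seq = seq;
        return true;
    }
    uint32_t age = f->max_seq - seq;
    if (age >= 32)                          /* older than the window */
        return false;
    if (f->window & (1u << age))            /* already seen: duplicate */
        return false;
    f->window |= 1u << age;                 /* mark as seen, deliver */
    return true;
}
```

With this scheme a message that arrives late but only once is still delivered, while the second copy of any message inside the window is silently dropped.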
