
System Integration

The objective in system integration should be to achieve 100% coverage of functionality and performance characterization of the complete solution. (We'll discuss the role of beta testing a little later.) In simple terms, if the switch is advertised to support all modes of operation of the H.323 protocol and to simultaneously interwork with other protocols, such as MGCP, SIP, and SS7, system integration is the time to throw everything at the box, from minimum to maximum physical configurations, and see how it behaves. One approach is to use "trees" to visually specify the functional branches that can be taken for a particular protocol under various conditions, and then map those branches to entry points for the protocols that interwork for call setup and processing. The need for appropriate test equipment is great in this phase, and the right test equipment might not always be available. A good load generator, for example, is necessary to make sure that buffering and queuing schemes are properly exercised.
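The tree-to-test-matrix idea can be sketched in a few lines: enumerate the functional branches of each protocol and cross them against interworking peers and physical configurations, so that every combination becomes an explicit test case. All protocol branch names and configuration labels below are invented for illustration; a real test plan would use the switch's actual feature list.

```python
from itertools import product

# Hypothetical functional branches per signaling protocol (invented names).
PROTOCOL_BRANCHES = {
    "H.323": ["fast-start", "slow-start", "H.245 tunneling"],
    "SIP": ["basic INVITE", "re-INVITE", "REFER transfer"],
    "MGCP": ["gateway-originated", "gateway-terminated"],
}
INTERWORKING_PEERS = ["SIP", "MGCP", "SS7"]
CONFIGS = ["minimum", "maximum"]  # physical configurations under test


def generate_test_matrix():
    """Yield (protocol, branch, peer, config) tuples covering every
    functional branch against every interworking peer and configuration."""
    for proto, branches in PROTOCOL_BRANCHES.items():
        for branch, peer, config in product(branches, INTERWORKING_PEERS, CONFIGS):
            if peer == proto:
                continue  # interworking implies a different peer protocol
            yield (proto, branch, peer, config)


matrix = list(generate_test_matrix())
```

Even this toy matrix makes the combinatorial pressure visible: three small protocols already produce dozens of cases, which is why tooling, not manual checklists, has to track coverage.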

System integration ideally should be performed with a single switch and sophisticated test equipment of various makes and flavors before connecting more than one switch together. Unfortunately, such is not the case in real life, and multiple-switch testing inevitably becomes necessary. One problem that might arise if you rush into connecting multiple boxes together is that subtle deviations from strict protocol compliance might not be caught in the testing phase. The reason is simple: A misinterpretation of a portion of a specification could result in invalid packets being sent, and the receiving end will not complain because both sides of the wire have implemented the same misinterpretation. How can you avoid this situation (which is not as rare as you might think)? Compliance and interoperability testing is the answer. These can be achieved with the help of testing equipment and benchmarked products known to be compliant with the protocol specifications. This area is still somewhat murky because the main protagonist protocols in packet telephony have not yet fully settled.
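The "shared misinterpretation" trap is exactly why compliance checks should validate messages against an independent encoding of the specification, not against what the peer accepts. Below is a minimal sketch of that idea; the message type, field names, and range rules are invented for illustration, and a real checker would encode the actual protocol grammar from the relevant RFC or ITU-T recommendation.

```python
# Hypothetical spec rules: required fields and value ranges per message type.
SPEC_RULES = {
    "Setup": {
        "required": {"call_reference", "bearer_capability"},
        "range": {"call_reference": (0, 0x7FFF)},
    },
}


def check_compliance(msg_type, fields):
    """Return a list of spec violations; an empty list means the message passes."""
    rules = SPEC_RULES.get(msg_type)
    if rules is None:
        return [f"unknown message type: {msg_type}"]
    violations = []
    for name in sorted(rules["required"] - fields.keys()):
        violations.append(f"missing required field: {name}")
    for name, (lo, hi) in rules.get("range", {}).items():
        if name in fields and not (lo <= fields[name] <= hi):
            violations.append(f"{name} out of range: {fields[name]}")
    return violations


# A message two like-minded endpoints would happily exchange can still fail:
bad = check_compliance("Setup", {"call_reference": 0x9000})
```

The point of the design is that the checker sits outside both endpoints: even if sender and receiver share the same misreading of the spec, the independent rule table still flags the invalid message.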

This is also the time to exercise failover scenarios and rainy-day recovery procedures. Proper failover testing requires all the equipment that will participate in the test to be present, connected, and fully functional. To some degree, virtually all protocols specify actions (such as message responses or recommended procedures) when problems happen, but this goes deeper than simple protocol testing. Entire platforms need to be examined for continued operation when a building block of the switch stops functioning as designed. The hard part is to emulate "missing arrows" and "bad arrows" in call flows. A "missing arrow" is a message that was not sent or received. A "bad arrow" is a message that contains bad parameters. Although the latter is an unlikely situation in a system that has been thoroughly tested and was previously functioning, the former is quite real and can happen for all sorts of reasons.
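One practical way to emulate missing and bad arrows is a thin fault-injection layer wrapped around the message channel, so the switch under test is never modified. The channel interface and message names below are invented for this sketch; the idea is simply that a configured message type can be silently dropped (missing arrow) or have its parameters overwritten (bad arrow) on the way out.

```python
class FaultyChannel:
    """Wrap a send function and inject call-flow faults on selected messages."""

    def __init__(self, send_fn, drop=(), corrupt=None):
        self.send_fn = send_fn
        self.drop = set(drop)            # message types to swallow silently
        self.corrupt = dict(corrupt or {})  # message type -> field overrides

    def send(self, msg_type, fields):
        if msg_type in self.drop:
            return  # "missing arrow": the peer must recover via its own timers
        if msg_type in self.corrupt:
            # "bad arrow": deliver the message with deliberately bad parameters
            fields = {**fields, **self.corrupt[msg_type]}
        self.send_fn(msg_type, fields)


sent = []
chan = FaultyChannel(
    lambda t, f: sent.append((t, f)),
    drop={"Release"},
    corrupt={"Setup": {"bearer_capability": None}},
)
chan.send("Setup", {"bearer_capability": "speech", "call_reference": 1})
chan.send("Release", {"cause": 16})
```

After these two calls, only the corrupted Setup reaches the wire and the Release has vanished, which is exactly the stimulus needed to verify that the far end's timers and recovery procedures fire as designed.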

A well-designed script can insert failures in the protocol and take the state machines through their paces over all the code. After all, we all know that the majority of the code in communications switching is in management and error recovery. Therefore, a simple one-pass test to verify rudimentary functionality doesn't cut it as a confidence builder.
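The value of scripted failure insertion shows up in transition coverage: a single sunny-day pass leaves the error-recovery paths, where most of the code lives, completely untouched. The toy call state machine below is invented for illustration; it just makes the coverage gap measurable.

```python
# Toy call state machine: (state, event) -> next state (invented for illustration).
TRANSITIONS = {
    ("idle", "setup"): "proceeding",
    ("proceeding", "alerting"): "ringing",
    ("ringing", "connect"): "active",
    ("active", "release"): "idle",
    # Error-recovery transitions: the bulk of real switching code lives here.
    ("proceeding", "timeout"): "idle",
    ("ringing", "timeout"): "idle",
    ("active", "link_failure"): "recovering",
    ("recovering", "restart"): "idle",
}


def run_script(events, start="idle"):
    """Apply an event script and return the set of transitions exercised."""
    state, covered = start, set()
    for ev in events:
        key = (state, ev)
        if key in TRANSITIONS:
            covered.add(key)
            state = TRANSITIONS[key]
    return covered


sunny_day = run_script(["setup", "alerting", "connect", "release"])
with_failures = (
    sunny_day
    | run_script(["setup", "timeout"])
    | run_script(["setup", "alerting", "timeout"])
    | run_script(["setup", "alerting", "connect", "link_failure", "restart"])
)
uncovered = set(TRANSITIONS) - with_failures
```

Here the sunny-day pass covers only half the transitions; only the scripts that deliberately insert timeouts and link failures drive coverage to 100%, which is the confidence the one-pass test cannot provide.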
