
Coding Guidelines: Fact and Fiction

Derek M. Jones looks at low-level coding errors and the use of coding guidelines as a cost-effective means of avoiding some of the more common instances of such errors.


Faults in software can have very expensive consequences. For some applications, often known as high-integrity applications, there is a significant probability that software faults can result in death or serious injury to people. There are many ways in which faults can be created; for instance, they can be high-level design mistakes or low-level coding errors. This article looks at low-level coding errors and the use of coding guidelines as a method of avoiding some of the more common instances of this kind of error.
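To make "low-level coding error" concrete, here is a hypothetical sketch of one of the most common instances: writing `=` (assignment) where `==` (comparison) was intended, a mistake many coding guidelines address by forbidding assignment inside a condition. The function names are invented for illustration.

```c
#include <assert.h>

/* BUG (intentional, for illustration): "state = 1" assigns 1 to state,
 * so the condition is always true, whatever value was passed in. */
static int is_shutdown_faulty(int state)
{
    if (state = 1)
        return 1;
    return 0;
}

/* The intended comparison. */
static int is_shutdown_correct(int state)
{
    if (state == 1)
        return 1;
    return 0;
}
```

A guideline banning assignment inside a condition lets a checking tool flag the first form mechanically, even though both versions are legal C.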

While coding guidelines are generally perceived by management to be a good thing, companies are rarely willing to make the necessary investment in either the production or enforcement of such guidelines. Consequently, most of the recommendations contained in many of these documents have no proven causal connection to faults in software. The net result is that many experienced developers don’t consider the recommendations contained in coding guideline documents to have a positive value.

The current renewed interest in coding guidelines is being driven by external factors. Customers have noticed that faults in software regularly cost them lots of money, and they’re starting to demand some level of software quality assurance from their suppliers. Requiring that software suppliers adhere to an appropriate set of coding guidelines provides confidence that at least some of the commonly known faults have been avoided.

This customer-driven focus has resulted in coding guideline documents being produced by industry groups; for instance, the Motor Industry Software Reliability Association (MISRA) has produced the MISRA-C guidelines, targeted at the automobile industry. This industry-wide approach spreads the development costs and creates an attractive market into which independent tool vendors can sell. In September 2005, interest in this area became "official" with the formation of the first international standards group. The purpose of this group is to produce a standard aimed at reducing software faults in high-integrity applications through the use of various kinds of coding guidelines.

In many ways, the concern about faults in software causing death and serious injury has been proactive. Very few deaths have been directly attributed to faults in software. One of the few such cases, and the most widely quoted, is the massive overdoses of radiation delivered by the Therac-25 radiation therapy machine. Here, six patients received overdoses, and the resulting deaths were directly attributed to faulty software. The fact that this case occurred more than 20 years ago shows how rare such events have been to date (or perhaps how difficult it is to prove that faulty software was the root cause).

To ensure that guidelines are followed, the code must be checked. Performing the checks before a program starts to execute is the ideal; having to handle the error conditions generated by checks that failed during program execution introduces a great deal of complexity. Checks performed before a program executes are known as static analysis, and tools that automatically perform such checks are known as static analysis tools. (Dynamic analysis refers to guideline checks performed during program execution.) Static analysis that is performed by people often goes by the name of code review.
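The extra complexity introduced by run-time checking can be sketched in a few lines. In this hypothetical example (the names are invented), a static analysis tool could verify before execution that every call site uses an in-range index; performing the check dynamically instead forces every caller to handle a failure path that would otherwise not exist.

```c
#include <stddef.h>

enum { TABLE_SIZE = 4 };
static const int table[TABLE_SIZE] = { 10, 20, 30, 40 };

/* Dynamic check: the bounds test runs on every call, and the caller
 * must now handle the error condition it can generate.
 * Returns 0 on success, -1 on an out-of-range index. */
static int table_get(size_t index, int *out)
{
    if (index >= TABLE_SIZE)
        return -1;       /* failure path created by the run-time check */
    *out = table[index];
    return 0;
}
```

A static analysis tool that proved all indices in range would let the check, and the error handling it drags in at every call site, be removed entirely.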

