
Writing Insecure C, Part 1

David Chisnall takes a look at some of the pitfalls involved in writing secure code in C, with a guided tour of insecure code.

Use of the C programming language is often blamed for insecure code. This is not entirely a valid accusation; projects like OpenBSD show that it is possible to write secure code in C. The problem with C, in this respect, is the same as the problem with assembly-language programming: The language exposes all of the features of the architecture to you, but little else. It provides all the features you need to write tools for secure coding, but doesn't provide these tools itself.

This series will look at some of the common causes of errors in C code and how to avoid them.

Error Checking

A lot of languages these days include some mechanism for throwing exceptions. Exceptions are generally a bad idea. They make it hard to reason about the flow of control in a program, and have most of the disadvantages from which GOTO-filled programs suffered before the rise of structured programming. Exceptions have one significant advantage, however: You can't passively ignore them.

Java code, in particular, is often littered with try...catch blocks that do nothing but discard errors, but even in this case the exception mechanism has served a purpose—it has made the programmer aware that he isn't safely handling error conditions.

In C, most functions return an invalid value when some kind of error condition exists. This is typically done in one of two ways: many functions return zero on success and a nonzero error code on failure, while functions that return pointers return a valid pointer on success and NULL on failure. This situation can be slightly confusing, since zero indicates success in some functions and failure in others.
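A quick sketch of the two conventions, using hypothetical functions (parse_port and duplicate are illustrative examples, not standard library calls):

```c
#include <stdlib.h>
#include <string.h>

/* Convention 1: return zero on success, nonzero on failure. */
int parse_port(const char *s, int *out)
{
    char *end;
    long v = strtol(s, &end, 10);
    if (*end != '\0' || v < 1 || v > 65535)
        return -1;          /* nonzero signals failure */
    *out = (int)v;
    return 0;               /* zero signals success */
}

/* Convention 2: return a valid pointer on success, NULL on failure. */
char *duplicate(const char *s)
{
    char *copy = malloc(strlen(s) + 1);
    if (copy == NULL)
        return NULL;        /* NULL signals failure */
    strcpy(copy, s);
    return copy;
}
```

Note that a caller comparing either return value against zero gets opposite meanings: zero from parse_port is success, while a zero (NULL) pointer from duplicate is failure.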

Returning a null pointer is generally okay. It's not something you can ignore easily, because you'll get a segmentation fault as soon as you try to dereference it. This approach is really dangerous only in a function that almost never fails; functions that fail frequently will cause crashes during testing and be fixed.

The canonical example of a function that almost never fails is malloc(), along with related functions like calloc(). The C specification says that malloc should return NULL if insufficient memory exists to satisfy the request. Linux doesn't quite obey this rule: It returns NULL if the system doesn't have enough virtual address space to satisfy an allocation, but in a case of insufficient memory Linux still allocates an address range—and then fails when you actually try to use the memory. Assuming that you have a compliant C implementation, however, it's worth checking malloc return values.

In most cases, there's nothing sensible you can do if malloc fails. Even error-recovery code typically needs to allocate some memory. You can try allocating this memory when the program starts. (Remember to make sure that you touch it, so lazy allocations won't defer the actual allocation until too late.) Alternatively, you can use something like the following macro:
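A sketch of the reserve-at-startup approach (the pool size and function name are arbitrary examples; how aggressively the kernel defers allocation varies by operating system):

```c
#include <stdlib.h>
#include <string.h>

#define EMERGENCY_POOL_SIZE (64 * 1024)  /* size is an arbitrary example */

static void *emergency_pool;

/* Call once at program start. Writing to every byte forces a lazily
   allocating kernel to back the buffer with real pages now, rather
   than faulting later when the memory is first used in a crisis. */
int reserve_emergency_pool(void)
{
    emergency_pool = malloc(EMERGENCY_POOL_SIZE);
    if (emergency_pool == NULL)
        return -1;
    memset(emergency_pool, 0, EMERGENCY_POOL_SIZE);  /* touch it */
    return 0;
}
```

The memset is the important part: without it, a lazily allocating system may hand back an address range that isn't actually backed by memory until first use.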

#define MALLOC(x,y) do { (y) = malloc(x); if (!(y)) abort(); } while(0)

This macro will test every allocation and abort the program if it fails. You can replace the call to abort with a call to your error-handling code, but be careful. One of the most recent vulnerabilities in OpenSSH was caused by running error-recovery code in a situation where the program was in an undefined state. Often, terminating is the safest practice.
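Here's a self-contained sketch of the pattern at a call site (copy_string is a hypothetical example):

```c
#include <stdlib.h>
#include <string.h>

/* Abort-on-failure allocation macro; the do/while(0) wrapper makes it
   behave like a single statement, and abort() takes no arguments. */
#define MALLOC(x,y) do { (y) = malloc(x); if (!(y)) abort(); } while(0)

/* Example call site: the caller never has to handle a NULL pointer,
   because a failed allocation terminates the program. */
char *copy_string(const char *s)
{
    char *copy;
    MALLOC(strlen(s) + 1, copy);
    strcpy(copy, s);
    return copy;
}
```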

Checking other functions' return values is equally important.
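I/O is a common place where return values get ignored and data gets silently lost. A sketch of checking fwrite and fclose (save_data is a hypothetical wrapper):

```c
#include <stdio.h>

/* Returns 0 on success, -1 on any I/O failure. */
int save_data(const char *path, const void *data, size_t len)
{
    FILE *f = fopen(path, "wb");
    if (f == NULL)
        return -1;
    if (fwrite(data, 1, len, f) != len) {
        fclose(f);
        return -1;
    }
    /* fclose can fail too: buffered data is flushed here, and a
       full disk is often reported only at this point. */
    if (fclose(f) != 0)
        return -1;
    return 0;
}
```

The fclose check matters more than it looks: with buffered stdio, the write to disk may not happen until the stream is flushed, so a program that ignores fclose's return value can report success while the file is truncated.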
