C/C++ Memory Management, Bit Fields, and Function Pointers

You're sure that your C/C++ memory-allocation code is bulletproof, but will the code work when the host platform is under stress? Consider using bit flags for applications that require low-level data access. Modern programming also routinely requires the use of complex language features such as callbacks and function pointers. As Stephen B. Morris explains, the use cases for these features are both simple and powerful.

Every programming language has peculiarities. C and C++ are no exception! Very often, difficulties arise when a given language feature comes into close contact with a host resource; the classic case, of course, is memory allocation. It's remarkably easy to get into trouble with memory allocation. Can you be certain that a given call to malloc() succeeded?

Bit flags are a clever (but little-used) feature of C that allow for access to memory-mapped devices. But bit flags also have vagaries. In this article, I show how bit flags touch on the areas of data alignment and memory allocation.

Finally, function pointers provide a way to treat code as data. This notion is intrinsic to functional programming, and it appears in languages such as Python, JavaScript, and Java 8. It's nice to see that C got there first with function pointers, and I discuss some reasons why they make sense in our day-to-day coding.

Let's get into some memory management.

C Memory Management

A successful call to malloc() returns a block of uninitialized memory. Data inside the block is sometimes referred to as cruft. The memory is allocated in accordance with the platform-alignment requirements, which means that the size of the block sometimes is somewhat larger than expected. No big surprises there.

Perhaps not as well known is this: The default Linux memory-allocation strategy is optimistic. This means that even when a call to malloc() returns non-null, there is no guarantee that the memory is actually available; the kernel may defer committing physical pages until the block is first written. I must confess to a little surprise when I first learned this fact! Imagine the consequences of such a semi-failed allocation for a real-time system: an apparently successful allocation can fail only when the block is actually used, which means that the system might in fact be out of memory.

If the system is out of memory, one or more processes will potentially be terminated by the Out of Memory (OOM) killer. The intrusion of the OOM killer can be further complicated if a given process leader has immunized itself against OOM-killer operations. For example, for system stability, root-level processes might not be desirable candidates for an OOM killer; a root-level process that's hogging memory might not be a candidate for eviction.

This all sounds a little complicated and scary, right? Remember that we're discussing edge cases here—memory might be scarce or heavily allocated, or the heap could be highly fragmented. However, as devices become smaller and more heavily loaded with software, we must be aware of the finer detail of memory-allocation mechanisms.

Fixing the Memory-Allocation Doubt

In a slight departure from my usual article style, let's define a requirement and see how we might go about fulfilling it in code. In the spirit of modern agile development, we'll define this requirement as a JIRA ticket. Once the ticket is created and assigned to us, we would typically set the status of the ticket to "in-progress." Once the coding is finished and tested to our satisfaction, the ticket status can be marked as fixed, and the code is pushed into a sprint delivery pipeline.

Memory Management Requirements

To make certain that all calls to malloc() are protected, our allocated block must have an expected size and a delivered size. Checking the size of an allocated block incurs a cost. This should be no surprise—most software-development decisions involve some level of cost.

The first cost element in this case is that we are (to some extent) making the code platform-specific. The second cost element is a potential speed penalty. So how should we do it? We'll use malloc_usable_size() after calling malloc() to allocate seven bytes:

void* p = malloc(7);
size_t usable = malloc_usable_size(p);
cout << "Size of malloc_usable_size for p is " << usable << endl;

This code produces the following output:

Size of malloc_usable_size for p is 12

Notice that the call to malloc() requests a size of 7, but we receive a block of size 12. The difference is intended to facilitate the platform-alignment requirement of the memory block.

What's nice about this scheme is that, at the expense of some platform-specific code and a slight speed reduction, we now know exactly how much usable memory our call to malloc() delivered. No more guesswork about whether we've hit a dreaded memory edge case. At this point, we might mark the JIRA ticket as being fixed.

Updating the Work Ticket

You might argue that one thing is missing from the above code: What if the call to malloc() fails? Good point. In the spirit of agile development, it would probably be appropriate to change the status of the JIRA ticket from fixed to open (or in-progress). Then the task is reassigned to us, allowing us to fix the last issue as follows:

size_t usable = 0;
void* p = malloc(7);
if (p != NULL) {
      usable = malloc_usable_size(p);
      cout << "Size of malloc_usable_size for p is " << usable << endl;
} else {
      cout << "Memory allocation failed" << endl;
}
free(p);
The updated code now verifies the size of the allocated memory block, and it also handles the case in which the call to malloc() fails. Our stated requirement can be declared fulfilled. At this point, the JIRA ticket could be marked with a status of fixed, and we can move on to our next assigned ticket!

Of course, this assumes that we don't want to implement any unit tests for the code. In cases where a code change is very small, there may not be a pressing need for a unit test. However, as a comprehensive test for this code, we could attempt to create a real out-of-memory situation, which would allow us to verify the code in depth.

Creating an out-of-memory situation necessarily moves the work into the realm of integration/QA testing. Depending on the criticality of the code, we might or might not opt to deliver such test artifacts; candidates include code that operates in a hazardous environment or in a safety-critical application.

For purposes of this article, however, we mark the JIRA ticket as being fixed.

For more on good discipline in memory management, see my eBook Five Steps To Better Multi-language Programming: Simplicity in Multi-language Coding: C/C++, Java, Bash, and Python.

A Memory Management Pattern—and Accompanying Anti-Pattern

The issue of avoiding failed memory allocations has traditionally led programmers to adopt a kind of unwritten design pattern: Allocate a memory pool when the application starts. The application code can then draw from the pool as and when needed. However, not even this pattern guarantees that the size of the allocated memory block is as expected! The best plan is to be careful and at least consider using something like malloc_usable_size().

As with all design patterns, an anti-pattern will usually make an unwelcome appearance. Memory allocation is no exception to this unwritten rule. The anti-pattern in this case is mixing the traditional C and the newer C++ allocation services by mixing calls to malloc(), new(), free(), and delete(). Mixing these calls in the same code base is a bad idea. Why? The underlying implementations may not be in harmony with each other, which can lead to unpredictable results.

Minimizing the Impact of Platform-Specific Code

It's always a good idea to ensure that any platform-specific code is placed in a single location, facilitating future changes of the host platform. When using calls such as malloc_usable_size() and/or _msize(), make sure they're isolated inside conditionally compiled code, something like this:

#ifdef __linux__
      size_t checkAllocatedBlock(void* ptr) {
            return malloc_usable_size(ptr);
      }
#elif defined(_WIN32)
      size_t checkAllocatedBlock(void* ptr) {
            return _msize(ptr);
      }
#endif
Of course, some code duplication is involved, but it illustrates the point that, depending on the value of the platform-specific symbol, we invoke only the appropriate handler. This is better than putting calls to malloc_usable_size() and/or _msize() in multiple locations in your application code. If you later change from Linux to Windows or vice versa, only a recompilation is needed.

Let's now turn our attention to another implementation-dependent language mechanism—bit fields.

Using Bit Fields for Low-Level Programming

Bit fields are often used in situations where specific bit patterns are required in memory. Typical applications for this type of code are interactions with low-level devices that use the various bit patterns as part of some sort of communication protocol. A good example is electronic smart cards and card readers. These devices communicate with each other using a simple messaging protocol, such as Europay MasterCard Visa (EMV).

One way to implement such a protocol is by using C-based bit fields, with each protocol message being encoded as a bit field structure. Then a higher-level state machine could use these message definitions to enact the required protocol exchanges.

Let's look at these interesting bit fields.

A Simple Bit Field Structure

Following is an example of a structure containing three separate bit fields:

struct {
      unsigned char is_keyword : 8;
      unsigned char is_extern : 1;
      unsigned char is_static : 1;
} flags;

flags.is_keyword = 14;
printf("flags.is_keyword %d\n", flags.is_keyword);
printf ("Bit pattern of %d = %X\n", flags.is_keyword, flags.is_keyword);

Notice the size definition at the end of each line in the struct definition. This size dictates the number of bits in the variable—in this case, 8, 1, and 1, respectively.

The output from this code is as follows:

flags.is_keyword 14
Bit pattern of 14 = E

What about the size of the struct? Let's find out:

struct {
      unsigned char is_keyword : 8;
      unsigned char is_extern : 1;
      unsigned char is_static : 1;
} flags;

flags.is_keyword = 14;
printf("flags.is_keyword %d\n", flags.is_keyword);
printf ("Bit pattern of %d = %X\n", flags.is_keyword, flags.is_keyword);
cout << "Size of flags is " << sizeof(flags) << endl;

Here's the output:

flags.is_keyword 14
Bit pattern of 14 = E
Size of flags is 2

That's interesting! Why two bytes? Looking at the bit-field widths, the 8-bit is_keyword fills the first byte, and the compiler packs the two 1-bit fields into a second byte. Let's test this by adding another seven-bit field to the structure:

struct {
      unsigned char is_keyword : 8;
      unsigned char is_extern : 1;
      unsigned char is_static : 1;
      unsigned char is_static2 : 7;
} flags;

Here's the updated output:

flags.is_keyword 14
Bit pattern of 14 = E
Size of flags is 3

Now we have a 3-byte size, which is as expected for a total data size of 17 bits. That is, the alignment required is on at least a byte boundary, which raises the size to 3 bytes or 24 bits on my laptop (32-bit) platform.

You probably won't have much cause to use bit fields, but they provide an easy way to do this type of low-level programming.

Let's move away from platform-specific type coding and look at another interesting area of multi-language convergence: C function pointers and Java 8 lambdas. This is the use case of passing code as data.

Callbacks: JavaScript, C Function Pointers, and Java 8 Lambdas

The traditional use case for function pointers is when you need some type of callback mechanism. In web-based front-end programming, JavaScript provides a handy way to do this as follows, in the form of a click listener function that is added to a given DOM element:

function btnAddClicked(event) {
      // Handle the button click here.
}

Now the above function is registered as a click listener on a DOM element with the ID btnAddIt:

document.getElementById("btnAddIt").addEventListener("click", btnAddClicked);
The btnAddClicked function is our callback, which gets executed when the associated front-end element is clicked. In other words, the browser calls back into our code, which is why it's termed a callback. That's a lot of power in just a few lines of JavaScript!

How would we code C function pointers?

C Function Pointers

Here's a simple example of a C function pointer:

double do_computation(double (*funcp)(double), double baseValue) {
      return (*funcp)(baseValue);
}

int main() {
    double (*fp)(double);      // Function pointer
    fp = sin;
    printf("do_computation() returns: %f\n", do_computation(fp, 1.0));
    return 0;
}

The function definition takes two parameters: a function pointer and a double value. Running it produces this output:

do_computation() returns: 0.841471

Why would it be a good idea to use a function pointer in this way? One reason is that the called function need not be changed if a different transcendental function is required. If we require a call to cosine, for example, we change the client code as follows:

fp = cos;
printf("do_computation() returns: %f\n", do_computation(fp, 1.0));

This approach might be useful if the do_computation() function was contained in a library, or if it was otherwise impossible to change it. This is pretty similar to the Java 8 Lambda use case.

Java 8 Lambda (or Callback) Functions

Java 8 Lambdas are very useful for reducing code clutter, particularly in the case of anonymous inner classes. Here's an example of a Lambda from one of my earlier articles, "Code as Data: Java 8 Interfaces":

public class Calculator {

    interface IntegerMath {
        int operation(int a, int b);
    }

    interface RealMath {
        double operation(double a, double b);
    }

    public int operateBinary(int a, int b, IntegerMath op) {
        return op.operation(a, b);
    }

    public double operateBinary(int a, int b, RealMath op) {
        return op.operation(a, b);
    }

    public static void main(String... args) {
        Calculator myApp = new Calculator();
        IntegerMath addition = (a, b) -> a + b;
        IntegerMath subtraction = (a, b) -> a - b;
        RealMath division = (a, b) -> a / b;
        System.out.println("137 / 12 = " +
            myApp.operateBinary(137, 12, division));
        System.out.println("40 + 2 = " +
            myApp.operateBinary(40, 2, addition));
        System.out.println("20 - 10 = " +
            myApp.operateBinary(20, 10, subtraction));
    }
}
Notice how the different mathematical functions are passed into the caller myApp.operateBinary(), along with the parameters.

That's three callback mechanisms from three languages. These are powerful features that are well worth learning.


Memory management in C and C++ continues to provide fertile ground for errors. It's interesting to note that even if a call to malloc() returns non-null, the memory might not actually be available when we come to use it. However, we can handle this issue with some platform-specific code.

Bit fields provide an alternative to explicit bit masks and have the advantage of encapsulating bit data inside structures, which helps to modularize such platform-specific code. Bit fields allow us to implement features efficiently, such as with machine-to-machine protocols.

Function pointers, a slightly neglected part of the C language, provide a means of keeping library code unchanged. This can be very important for cases in which the library code has been tested extensively, and the cost of modification and retesting is very high.

