If you’ve ever done embedded development in C/C++, you are probably familiar with bitfields. They are a handy way to reference individual bits in things like hardware registers. The problem is that bitfields can lead to performance problems and race conditions if not used properly. I hope to highlight some of the issues you should consider when using them.
First, let’s assume you need to check various fields in a 32-bit hardware register. You could define the following bitfield to represent this register:
1: struct HwReg
2: {
3:     unsigned int Base   : 16;
4:     unsigned int Offset : 8;
5:     unsigned int Rsvd   : 5;
6:     unsigned int Flag   : 1;
7:     unsigned int Type   : 2;
8: };
The total size of this data type is sizeof(unsigned int), with each line defining a different region (field) within that type (the syntax can look confusing at first). The following code uses the HwReg bitfield to access a memory-mapped register:
1: volatile struct HwReg* pReg = (volatile struct HwReg*)0x80001005;
2:
3: if (pReg->Flag && pReg->Type == TYPE_1)
4: {
5:     void* address = (void*)(pReg->Base + pReg->Offset);
6: }
Line 1 defines a pointer to the physical hardware register as type HwReg. We can now use this pointer to easily access the register fields.
The compiler cannot coalesce or eliminate bitfield accesses (especially because pointers to memory-mapped hardware registers are almost always declared ‘volatile’). This means that every access to a member of the bitfield requires a read of the physical hardware register, which can be orders of magnitude slower than accessing main memory. In the code example above, the hardware register will be read four times; once for each field access.
The way to remedy this is to cache a copy of the register value and then operate on that. Consider the following code:
1: volatile unsigned int* pFullReg = (volatile unsigned int*)0x80001005;
2: unsigned int temp = *pFullReg;
3: struct HwReg* pReg = (struct HwReg*)&temp;
4:
5: if (pReg->Flag && pReg->Type == TYPE_1)
6: {
7:     void* address = (void*)(pReg->Base + pReg->Offset);
8: }
Line 1 defines a pointer to the physical hardware register. Line 2 performs the actual read into a local variable (the slowest part). This local copy now lives in main memory and the CPU cache. Line 3 casts the cached value to the bitfield type for easy access. Finally, all accesses to the register fields are made on the cached value, which can be read very quickly from the L1 cache.
This approach also helps when the hardware requires locking before the register can be accessed. By caching the value, you can keep all the locking code localized to a single area of the function. Without caching, you would hold the lock for a longer period of time (possibly forcing other operations to block) and have to make sure to release the lock on every return path (more difficult with exceptions).
NOTE: Remember you are only working with a copy of the register value. If you update a value in the bitfield, you must still copy the updated value back to the register.
As stated above, each access to a field generates its own read/write operation. Even if the CPU architecture guarantees that an individual operation is atomic, updating multiple fields is not. Thus, in a multi-threaded application you must lock the entire block of code that operates on the bitfield. I again suggest caching the value, as you then only need to hold the lock around the actual read and write of the entire register.
Bitfields are a nice language construct that can make it easier to write clean code (as opposed to using macros and bitmasks). Unfortunately, it’s all too easy to shoot yourself in the foot with bitfields if you don’t understand the pitfalls. As always, use caution when writing performance-critical code and make sure you understand how to use the available language constructs.