Working with Interrupts
When thinking about interrupts there are two aspects to consider. First, how is a change made in an interrupt propagated to the normal program execution (the loop() function)? For this we use the keyword volatile. Second, how can we guarantee that access to a variable is atomic even when its size exceeds the natural data size (8 bits on an AVR)? For this we use atomic blocks.
The keyword volatile signals to the compiler that it has to re-read the value from RAM every time the variable is accessed. This leads to larger, less optimized code, but for variables that are accessed inside an interrupt (and this holds true for all of our I2C registers) it is necessary, since otherwise the changed state might never reach the local register holding the variable. Moreover, for non-volatile variables the compiler is free to reorder statements. That is probably not a real issue here, because on the next pass through loop() the new value would be loaded into a register anyway; without volatile we would merely see a delay of about 1 to 2 seconds before the change in behavior takes effect.
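As an illustration, here is a minimal sketch of that situation (the pin number and names like buttonPressed are made up, not taken from the project code): a flag written in an interrupt routine is declared volatile so that loop() re-reads it from RAM on every pass.

```cpp
#include <Arduino.h>

volatile bool buttonPressed = false;   // shared between the ISR and loop()

void onButtonChange() {                // interrupt service routine
  buttonPressed = true;                // written inside the interrupt
}

void setup() {
  pinMode(2, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(2), onButtonChange, FALLING);
}

void loop() {
  if (buttonPressed) {                 // without volatile this read could stay stale in a register
    buttonPressed = false;
    // react to the change here
  }
}
```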
If we had full C11 support we could use the _Atomic qualifier, but alas, no such luck.
ATOMIC_BLOCK(ATOMIC_FORCEON) {}
turns off all interrupts during the execution of its block and is therefore the way to access variables that are larger than 8 bits and might be read or changed by an interrupt routine. In the code you will see these blocks every time we access a variable larger than 8 bits. Such a variable is copied into a local variable, together with all other variables that will be used after the atomic block. This guarantees that the values cannot change during a computation, only between computations.
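A minimal sketch of this pattern, with made-up names (pulseCount, countPulse, pin 2) that are not taken from the project code:

```cpp
#include <Arduino.h>
#include <util/atomic.h>               // provides ATOMIC_BLOCK / ATOMIC_FORCEON

volatile uint16_t pulseCount = 0;      // wider than 8 bits, updated in an interrupt

void countPulse() {
  pulseCount++;                        // 16-bit increment, not atomic on an 8-bit AVR
}

void setup() {
  pinMode(2, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(2), countPulse, RISING);
}

void loop() {
  uint16_t localCount;
  ATOMIC_BLOCK(ATOMIC_FORCEON) {       // interrupts disabled only while copying
    localCount = pulseCount;           // consistent 16-bit snapshot
  }
  // all further computation uses localCount; interrupts run in the meantime
}
```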
We could also use memory barriers, which force the compiler to write all registers back and read them again, or alternatively volatile reference casts. But all of these would be harder to read, obscuring the intent of the code, and there wouldn't be a large performance gain anyway, since we are using the local copies as much as possible.
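Just to illustrate what those alternatives would look like (not used in the code, all names are hypothetical): a compiler memory barrier and a volatile cast both force a fresh read, but note that neither one by itself makes a multi-byte access atomic.

```cpp
#include <stdint.h>

uint16_t sharedValue;                         // imagine this is written by an ISR

uint16_t readWithBarrier() {
  asm volatile ("" ::: "memory");             // compiler barrier: discard cached register copies
  return sharedValue;                         // fresh read, but the two bytes can still tear
}

uint16_t readWithVolatileCast() {
  return *(volatile uint16_t *)&sharedValue;  // volatile cast: re-read from RAM at this point
}
```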
The interesting question: why did it work without this twofold approach? Because the programming is very conservative, meaning that a changed value would normally not be lost, but used in the next loop() instead of immediately. And even if a very rare collision occurred, the system recovered very quickly. The main difference: now every change is made in a very controlled way and the system is guaranteed to behave as expressed in the code.