A shortcut to determinism in real-time applications
Determinism in software is the ability to ensure that every path taken through the code takes a consistent amount of time to execute. Most desktop applications have no interest in this consistency because A) it doesn’t affect anything, and B) interrupts and preemptive multitasking mean that you cannot do anything about it anyway.
In an embedded control application, or one running under a real-time OS, however, determinism is often important, because it is critical that the control outputs be changed at a consistent time with respect to the control-input sampling time. On the input side, when a signal is changing at a relatively rapid rate, any error in the TIME of measurement is just as destructive to measurement accuracy as an error in AMPLITUDE. For applications such as PID loops this is even more important, since derivative terms are adversely affected by timing inaccuracies.
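To make the time-error/amplitude-error trade concrete, here is a small numeric sketch in plain Python. The signal frequency, amplitude, and timing error are illustrative assumptions of my own, not figures from the article:

```python
import math

# Illustrative numbers (assumptions): a 10 Hz sine of amplitude 1.0,
# sampled with a 100 microsecond error in the sampling time.
A, f = 1.0, 10.0
dt_err = 100e-6

def v(t):
    return A * math.sin(2 * math.pi * f * t)

# Sample near the zero crossing, where the signal changes fastest.
t = 0.0
amplitude_error = abs(v(t + dt_err) - v(t))

# Near the steepest part of the waveform the induced amplitude error
# approaches slope * timing error, i.e. A * 2*pi*f * dt_err.
worst_case = A * 2 * math.pi * f * dt_err
print(amplitude_error, worst_case)
```

A 100 µs timing error on this signal is indistinguishable from a 0.6% amplitude error, even though the converter itself was perfectly accurate.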
Equally important, but not always equally recognized, is the fact that the timing of the output samples has exactly the same effect. If your control output is not consistently timed, then loop stability is compromised.
One way to make your code deterministic is to go through it and make sure that all path lengths are identical. For example, if your code includes an option to scale the signal to percent of full scale or not, the deterministic approach is to compute the full-scale percentage in every case, then use either the scaled or the unscaled value depending on the option setting. This goes against the programmer’s natural instinct to do work only when required, but it is necessary to ensure that both paths consume equal time.
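That pattern can be sketched in a few lines of Python (the function and variable names are my own illustration, not from any particular codebase):

```python
def process_sample(raw, full_scale, scale_to_percent):
    """Equal-path version of the scaling option.

    The percentage is computed on EVERY call, even when it will be
    discarded, so the scaled and unscaled paths take the same time.
    """
    percent = raw / full_scale * 100.0   # always computed
    # Branch only on which precomputed value to use,
    # not on whether to do the work.
    return percent if scale_to_percent else raw

print(process_sample(2.5, 10.0, True))   # 25.0
print(process_sample(2.5, 10.0, False))  # 2.5
```

The wasteful-looking division is the price of determinism: both settings of the option execute the same instructions.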
Beginners to real-time programming are rightly warned about this situation. If you look at the larger picture, you realize that the goal of all this is to ensure that the outputs change at a consistent time relative to each other and to absolute time. The assumption inherent in the idea of equalizing code paths is that you want the outputs to follow as soon as possible after the control inputs are sampled.
That assumption is true in some cases, but by no means all. There is another way to achieve the same result (consistency of output timing), without the drudgery of counting cycles and finding all the places where such path decisions are made.
By building in a hardware-controlled delay, you can achieve rock-solid precision of output timing, without fretting over every single operation. One way to do this is to use the very same clock signal that drives the input timing to drive the output timing. For example, in the following code, we:
- Create the task for the Analog Output channels
- Set the AO SAMPLE CLOCK timing parameters to the AI clock signal
- Start the task
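The article’s own code listing is not reproduced here, so the following is a sketch of those three steps using the NI-DAQmx Python package (`nidaqmx`); the original may well have used LabVIEW or the C API instead. The device and channel names (`Dev1`, `ao0`) are placeholders, and the snippet requires real DAQ hardware to run:

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

# Create the task for the Analog Output channel(s).
ao_task = nidaqmx.Task()
ao_task.ao_channels.add_ao_voltage_chan("Dev1/ao0")  # placeholder device/channel

# Set the AO sample clock to the AI task's sample clock terminal, so every
# output update is driven by the same hardware tick as the input sampling.
ao_task.timing.cfg_samp_clk_timing(
    rate=100.0,                         # nominal loop rate (assumption)
    source="/Dev1/ai/SampleClock",      # the AI clock signal
    sample_mode=AcquisitionType.CONTINUOUS,
)

# Start the task.
ao_task.start()
```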
This instructs the OUTPUT task to use the same clock as the INPUT. When you WRITE to the output task, the values you write will stay in the output buffer until the clock ticks; at that point they will be transferred to the actual analog output.
What Does This Do for Me?
By doing this, you have freed yourself from all constraints of cycle-counting and path equalization. Your control output will change exactly one clock tick after the inputs that produced it, no more and no less. You have only one constraint: you must get the job done within one clock cycle. If you are running at a loop rate of 100 Hz, then you have 10 ms to get your work done. Whether it takes 10 microseconds or 9.99 ms makes NO difference; the output will change at the same point in time either way. You do have to ensure that it takes 10 ms or less, but that is a constraint you have in any case.
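The claim can be demonstrated with a toy model in plain Python (my own sketch, not the article’s code): the output is a clock-driven register, so two computations of very different lengths still update the output at the identical instant. The 100 Hz / 10 ms loop rate matches the example above:

```python
PERIOD_MS = 10.0  # 100 Hz loop rate, as in the text

def output_update_time(sample_time_ms, compute_ms):
    """Return when the output actually changes.

    The value written during the cycle sits in the output buffer until
    the next hardware clock tick, one period after the sample, so the
    compute time never appears in the answer.
    """
    assert compute_ms <= PERIOD_MS, "work must finish within one period"
    return sample_time_ms + PERIOD_MS   # next tick, regardless of compute_ms

fast = output_update_time(0.0, 0.01)   # computation took 10 microseconds
slow = output_update_time(0.0, 9.99)   # computation took 9.99 ms
print(fast, slow)  # both 10.0: the output changes at the same instant
```

The only failure mode the model admits is overrunning the period, which is exactly the one constraint the text says you keep.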
In most cases, where you are waiting on a clock-controlled input sample, performing some control processing on the measured values, and writing control values out, the delay you are introducing is not significant. A precisely controlled, constant delay is often easier to deal with in a control loop than unpredictable jitter, and easier to produce in hardware than in software.