Monster Panel V

History requires a chart, right? Wrong.

Recall from the requirements in Part I that we want:

— A chart, showing the history of 1-4 channels.  The history can be the last 30 seconds, or the last 30 hours, or various lengths in between.

Well, the words “history of 1-4 channels” (not to mention the word “chart”) suggest that we use a LabVIEW chart, right?  You’ve used charts to display data history before, right?  After all, charts are built for showing history – they do it all for you, right?  If you want a history of 30 hours or 30 seconds, you just specify the chart history length, and it’s done, right? If you want 1 channel or 4, you feed it a cluster of one value or of four values, and it’s done, right?


Wrong.  At least it’s not a good way of doing it.

Consider the way the chart “history” looks at things: If you have 30 hours worth of history on the chart, and then you switch channels, by default, you attach the new data to the old data.  In other words, you’re not looking at the history of your new channel, you’re looking at the history of SOME OTHER CHANNEL, with a few data points of your new channel on the far right.

So, you can avoid that by clearing the chart history when you switch channels.  But when you do that, you clear ALL FOUR channels, and you STILL don’t see the history of your new channel – you’ve lost ALL your history.

Not good. 

Combined with all the bugs present in LV2010-LV2013’s CHART behavior (failure to apply autoscale flags to the correct plot in multi-plot charts; failure to generate proper scale-change events), it doesn’t look good for this approach.

There must be a better way…

Let’s examine what the user would expect to see, rather than what the built-in indicators can provide.

If I am looking at long-term histories, then I want to look at long-term histories. If I change a given plot from “Speed” to “Temp”, then I want to see the long-term history of the “Temp” channel – I don’t want to start over. And I certainly don’t want to tack the “Temp” data on the end of the “Speed” data, and pretend it’s one long data set. That is a false picture, and your users don’t want that. Yes, you can explain to them WHY it works that way, and yes, they can understand it, but that doesn’t make it good.

Given that you want to see the history of the “Temp” channel even though you didn’t have it selected until now, you cannot use the chart behavior – unless you were to provide history for all 300 channels, and then choose which channels are visible. That would be possible, but outrageously expensive in terms of RAM usage: each chart would have 30 hours’ worth of 300 channels’ data, and there are 72 possible charts. That’s ugly and wasteful.  And also consider that we want to see 10 SECONDS, or 30 HOURS. If you want to see 30 seconds’ worth on a 900-pixel chart, the basic 10 Hz sample rate would produce 3 pixels per sample horizontally, but 10 Hz for 30 hours is a history length of 1,080,000 samples!  And you’re going to do that for 72 charts?  You are NOT winning any efficiency points with that approach!
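Those numbers are easy to verify (a trivial sketch; the 900-pixel width and 10 Hz base rate come straight from the text above):

```python
# 30 seconds of 10 Hz data on a 900-pixel-wide graph:
pixels_per_sample = 900 / (30 * 10)   # 900 px / 300 samples = 3.0 px/sample

# A full 30-hour history at the raw 10 Hz rate:
history_samples = 10 * 30 * 3600      # 1,080,000 samples -- per chart!
```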

So we have to do something different. A better solution is to use a GRAPH, not a chart.  Forget the “convenience” of the built-in behavior, and do it ourselves.

Suppose you have a history manager: a piece of code to accept the raw data and store it.  For every sample, it accepts the new 300 channels of data, and passes it out to several history queues.  You don’t need a separate queue for each history length; one queue can cover several cases.  For example, I chose these:

1800 samples @ 10 Hz = 180 sec (10 sec, 30 sec, 1 min, 3 min)

2400 samples @ 2 Hz = 1200 sec ( 5 min, 10 min, 20 min)

2400 samples @ 1 Hz = 2400 sec (30 min, 40 min)

1800 samples @ 0.5 Hz = 3600 sec ( 50 min, 60 min)

1800 samples @ 0.1 Hz = 5 Hr (3 Hr, 5 Hr)

2160 samples @ 0.02 Hz = 108000 sec (10 Hr, 20 Hr, 30 Hr)

As each sample comes in, we feed every sample (10 Hz) to the first queue, every 5th sample (2 Hz) to the second queue, every 10th sample (1 Hz) to the third queue and so on. You could use averaged data to insert into the queues, if you like.

The user can select a HISTORY LENGTH from the choices in parentheses (one long list), and we select the appropriate queue to use.
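The original implementation is LabVIEW, so text code can only be an illustrative sketch. In Python (all names are mine; a bounded deque stands in for the storage, whereas the article’s actual storage is a wrap-around array), the decimation routing might look like:

```python
from collections import deque

# Hypothetical sketch of the history manager's routing step.
# Base rate is 10 Hz; each queue keeps every Nth raw sample.
QUEUE_SPECS = [    # (length, decimation)
    (1800,   1),   # 10 Hz,    180 s
    (2400,   5),   #  2 Hz,   1200 s
    (2400,  10),   #  1 Hz,   2400 s
    (1800,  20),   # 0.5 Hz,  3600 s
    (1800, 100),   # 0.1 Hz,     5 h
    (2160, 500),   # 0.02 Hz,   30 h
]

class HistoryManager:
    def __init__(self):
        self.queues = [deque(maxlen=n) for n, _ in QUEUE_SPECS]
        self.count = 0

    def add_sample(self, channels):
        """Accept one reading of all 300 channels and route it."""
        for q, (_, decim) in zip(self.queues, QUEUE_SPECS):
            if self.count % decim == 0:   # every Nth sample only
                q.append(channels)
        self.count += 1
```

Feeding samples at 10 Hz then fills all six queues at their respective rates, with no further bookkeeping.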

I chose not to use actual LabVIEW queues, because of the requirements of data extraction.  When you plot the data, you don’t always want to use ALL the data in the queue. The first queue, for example, has 180 sec worth of data, but perhaps you only want to plot 30 sec worth.  If an actual queue were used, we would have to figure out how to extract the latest 30 sec worth, and cycle ALL the data out of and back into the queue, to keep it unchanged.

Instead, I used an array of the stated length (1800, 2400, etc.) and an insertion pointer, which increments and wraps around from N-1 to 0 as samples are inserted.  The oldest data is automatically deleted by being overwritten when the insertion pointer wraps around.

When plotting, I take the INSERTION pointer and back up one (wrapping around from 0 to N-1), while checking for MIN and MAX values as I go.  This results in an array that is time-backwards (element 0 is the latest, element 1 is earlier, element 2 is still earlier, etc.).  BUT THAT’S EXACTLY WHAT YOU WANT.
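As a sketch of that wrap-around buffer (Python, one channel for brevity; the names and structure are mine, not the original LabVIEW code):

```python
# Hypothetical sketch of one history queue: a fixed-length array with a
# wrap-around insertion pointer. Extraction walks BACKWARDS from the
# newest sample, so element 0 of the result is NOW and higher indices
# are older, tracking min/max along the way.
class HistoryBuffer:
    def __init__(self, length):
        self.data = [0.0] * length
        self.ptr = 0       # next slot to write
        self.filled = 0    # how many valid samples so far

    def insert(self, value):
        self.data[self.ptr] = value               # overwrite the oldest
        self.ptr = (self.ptr + 1) % len(self.data)
        self.filled = min(self.filled + 1, len(self.data))

    def extract(self, n):
        """Return the latest n samples, newest first, plus their min/max."""
        n = min(n, self.filled)
        out, lo, hi = [], float("inf"), float("-inf")
        i = self.ptr
        for _ in range(n):
            i = (i - 1) % len(self.data)          # back up one, wrapping 0 -> N-1
            v = self.data[i]
            out.append(v)
            lo, hi = min(lo, v), max(hi, v)
        return out, lo, hi
```

Note that `extract` never modifies the buffer, which is exactly the property a real queue would have made awkward.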

By setting the graph’s X MAXIMUM to zero, the X MINIMUM to -10 (or -30 or -60, etc.), and the X MULTIPLIER to the appropriate (negative) time between queue samples (10 Hz, 2 Hz, etc.), the new data appears at the right, and extends to the left.  This behavior is similar to a built-in chart, if the chart is already full.  That’s OK: the new data is now ALWAYS at the right side, and data on the left is ALWAYS older.  That makes the X-scale, if you use one, extend from 0:00 (NOW) on the right to -0:30 (or -0:60, whatever) on the left, which conveys a true sense of the time of the data.  Note that the scale does not scroll; it doesn’t need to.  The data moves, but the point at any given pixel on the screen is always 5 seconds ago, or 5 minutes ago, etc., so the scale is correct.  If you wanted to, instead of setting the X MAXIMUM to zero, you could set it (at each plot time) to the current time, and the X-scale then becomes time-of-day, without any further action.
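That scale bookkeeping boils down to three numbers per plot (a hedged Python sketch; the key names are just illustrative stand-ins for the LabVIEW graph properties):

```python
# Compute the X-scale settings for a "newest at the right edge" graph.
# window_seconds: how much history to show (10, 30, 60, ...).
# queue_rate_hz:  the sample rate of the queue being plotted (10, 2, 1, ...).
def x_scale(window_seconds, queue_rate_hz):
    dt = 1.0 / queue_rate_hz    # seconds between queue samples
    return {
        "x_maximum": 0.0,             # NOW is always the right edge
        "x_minimum": -window_seconds, # oldest visible point, at the left
        "x_multiplier": -dt,          # element i plots at x = -i * dt
    }
```

Because the extracted array is newest-first, element 0 lands at x = 0 and everything else marches off to the left, with no scrolling required.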

If you add up the queue lengths above, you get 1800+2400+2400+1800+1800+2160 = 12360 samples.

That’s a lot better than our 1,080,000 for the built-in chart.  And we can serve all 72 potential charts from this same batch of memory: it does not need duplicating.

Also, note another feature of this technique: If you use a LabVIEW chart to remember history, then you have to feed data to the chart EVEN IF THE CHART IS NOT SHOWING.  Failure to do that will mean there is a gap in your data when you turn on the chart.  We would have to be feeding data to all 72 charts, regardless of which ones are showing. That’s wasteful.

Whereas here, we can do the replot at any time, and get an accurate history, and NOT REPLOT if the graph is not currently showing.

Your CPU will thank you for not making it do useless work.

7 Responses to “Monster Panel V”

  1. Bob Schor says:

    Cool! Several years ago, I did a less-ambitious version of this. We were monitoring 24 animal stations, and wanted to see a fixed “last N minutes of data” on a Chart. I wasn’t as clever to have multiple time bases. I used a fixed-length Lossy Queue to hold the history (each channel updated its own Queue), so when I wanted to switch channels, I basically cleared the Chart History and “dumped” the relevant Queue into the Chart (to initialize it), then having Queue updates add new points. A little clunky, but it worked …


  2. Dave says:

    Part V sounds very useful, I would be interested in seeing an example program if it is available?

  3. Steve says:

    Sorry, Dave.
    I can’t do that.

    At the end of Part III, there is a sample project which shows some of the concepts.
    But this was work for hire; it would be wrong to make it available.

  4. Bob Schor says:

    Between my response and Dave’s request, I actually coded up a little demo that I was using as a “teaching demo” for a LabVIEW Developer I “met” through the LabVIEW Forums who needed help with a bloated customer-driven project. Unfortunately, the customer didn’t “want it better”, so I lost a potential “mentee”. I think I still have my demo, however, and might be willing to share …

  5. Christian Butcher says:

    Thanks – I saw the link to your website on the LabVIEW forums and this looks like something I should definitely consider implementing! Currently I use an sqlite database with some controls around the x axis determining data range to query, but truncation has been a serious challenge to work out efficiently. Pre-truncated queues sound like a potentially fantastic solution.

  6. Steve says:

    Christian: There’s no way you can get decent performance out of the database for a live plotting application. By doing it this way, you get an EVENT at 10 Hz, another EVENT at 2 Hz, another EVENT at 1 Hz, etc.
    Simply choose one of those events to listen to, and then go grab the data you want when the event fires.

  7. Rahul says:

    Steve sir….. absolutely right. I used the same for 12 projects till date and found very easier… No data loss at any frequency.
