Monster Panel V

History requires a chart, right?  Wrong.

Recall from the requirements in Part I that we want:

— A chart, showing the history of 1-4 channels.  The history can be the last 30 seconds, or the last 30 hours, or various lengths in between.

Well, the words “history of 1-4 channels” (not to mention the word “chart”) suggest that we use a LabVIEW chart, right?  You’ve used charts to display data history before, right?  After all, charts are built for showing history – they do it all for you, right?  If you want a history of 30 hours or 30 seconds, you just specify the chart history length, and it’s done, right?  If you want 1 channel or 4, you feed it a cluster of one value or of four values, and it’s done, right?

Right?

Wrong.  At least it’s not a good way of doing it.

Consider the way the chart “history” looks at things: if you have 30 hours’ worth of history on the chart, and then you switch channels, by default the new data is appended to the old data.  In other words, you’re not looking at the history of your new channel; you’re looking at the history of SOME OTHER CHANNEL, with a few data points of your new channel on the far right.

So, you can avoid that by clearing the chart history when you switch channels.  But when you do that, you clear ALL FOUR channels, and you STILL don’t see the history of your new channel – you’ve lost ALL your history.

Not good. 

Combine that with the bugs present in LV2010-LV2013’s CHART behavior (failure to apply autoscale flags to the correct plot in multi-plot charts; failure to generate proper scale-change events), and this approach doesn’t look good.

There must be a better way…
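One possibility (sketched here in Python, since a LabVIEW diagram can’t be shown in text, and not necessarily where this series ends up) is to keep the history yourself, one ring buffer per channel, and treat the display as nothing more than a window onto those buffers:

from collections import deque

class ChannelHistory:
    """One fixed-length history buffer per channel, independent of the display."""

    def __init__(self, channel_names, max_points):
        # Old points fall off the back automatically once a buffer is full.
        self.buffers = {name: deque(maxlen=max_points) for name in channel_names}

    def append(self, name, timestamp, value):
        # Record every channel all the time, displayed or not.
        self.buffers[name].append((timestamp, value))

    def plot_data(self, selected_names):
        # Hand the display only the selected channels; switching channels
        # never mixes histories and never throws anything away.
        return {name: list(self.buffers[name]) for name in selected_names}

Which 1-4 channels to show is then just a different call to plot_data; the stored history is never disturbed.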

Continue reading “Monster Panel V” »

Monster Panel IV

51 wires?  No – use a cable.

In Part III, we talked about how to take 3672 copies of a 300-channel list and cut the memory requirements down to size.

The price we pay for that savings is a bit more work on our part.  But it’s a price well worth paying – we save 22 MBytes of RAM.

What we’re doing is restricting each channel selector to a single string/value pair, rather than storing all channels in every selector’s list.  To do this we need to intercept the MOUSE DOWN? event on the selector (which happens AFTER a mouse click, and BEFORE the menu pops up).  That is exactly the right time to perform our switcheroo.  After the user chooses a new channel, we remove all the names that weren’t chosen, and are left with only one.  That is how we save RAM.
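In Python terms (a rough analogy only; the Selector class, its attributes, and the channel names below are hypothetical stand-ins for a LabVIEW ring control and its STRINGS[ ] property), the switcheroo looks something like this:

# The full channel-name list, built once and shared by all 51 selectors.
ALL_CHANNELS = ["Flatistrat Temperature", "Boiler Pressure", "Channel 03"]

class Selector:
    # Hypothetical stand-in for one ring control.
    def __init__(self):
        self.strings = []     # what the pop-up menu will show
        self.channel = None   # the real channel number behind the lone entry

def on_mouse_down(sel):
    # MOUSE DOWN? fires before the menu opens: load the full list only now.
    sel.strings = list(ALL_CHANNELS)

def on_value_change(sel, chosen_index):
    # After the user picks, throw away every name that was not chosen.
    sel.strings = [ALL_CHANNELS[chosen_index]]
    sel.channel = chosen_index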

If you’re used to an event case being directly connected to a given control, or maybe two or three, you might be daunted by trying to have an event case with 51 controls as sources.  Well, that’s unnecessary, because we can do it another way.  You do have to have 51 REFERENCES to those 51 selectors, but that makes changes easier later on: rather than manipulating the list in the EVENT CASE structure, you just add or delete a reference.
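Continuing the same Python analogy (names still hypothetical), the “registration” amounts to keeping those references in one collection:

# 51 references, one handler.
selector_refs = [f"selector_{i:02d}" for i in range(1, 52)]

def handle_mouse_down(which):
    # The event reports which selector fired; the handling is identical for
    # all of them, so a single handler covers all 51.
    print(f"{which}: reload the full channel list before the menu opens")

# Changing the set of selectors later means editing this list,
# not rewriting the handler.
registered = {ref: handle_mouse_down for ref in selector_refs}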

Continue reading “Monster Panel IV” »

Monster Panel III

Re-think the easy ways you have used forever.

In Part I, I gave the rough outline of the task: how to manage over 12000 controls/indicators on one panel.

In Part II, we started whittling the task down to size, using subpanels and reentrancy.

 

Let’s think about reducing our memory usage with some unusual techniques.

In LabVIEW, one of the easy ways to present a list of channels for the user to choose from is to collect an array of strings (channel names) when the window comes up, feed that array to the STRINGS[ ] property of a text ring or a menu ring, and then just forget about it.  It’s quick, it’s easy, and it works.  The user selects “Flatistrat Temperature” from a list, and may neither know nor care that he’s really specifying channel 13.  From a user’s point of view, that is excellent.

However, recall from Part I that a single block in our case has 51 channel selectors.  Six of those blocks fit on a page, and there are 12 pages, which makes 3672 copies of that list.  Consider that we might have 300 channels, with each channel name averaging 20 characters, and that’s over 22 MBytes of RAM, just holding copies of a single list.
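The arithmetic behind that figure (a rough back-of-the-envelope number that ignores per-string overhead, which only makes things worse):

copies_of_list = 51 * 6 * 12    # selectors per block x blocks per page x pages = 3672
bytes_per_list = 300 * 20       # 300 channel names x ~20 characters each
print(copies_of_list * bytes_per_list)   # 22032000 bytes, a bit over 22 MBytes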

There has to be a better way.
Continue reading “Monster Panel III” »

Monster Panel II

Figuring out what you do NOT have to do.

In Part I, I gave the rough outline of the task: How to manage over 12000 controls/indicators on one panel.

The first thing to realize is that the beginner’s reaction (“holy crowdation Batman, that’s impossible”) is based in reality.  You could not realistically put 12000 controls on a panel and WIRE THEM ALL UP on a single diagram.  That’s waaaaaay beyond sanity.

When pondering the possibilities, you realize that there is to be no difference between one block and another, so a solution practically leaps out at you: use a subVI.  For code, this has been done since the 1950s: write a single piece of code with the basic procedure, and call it from several places, with parameters to vary the behavioral details.

But how do you apply that to front-panel things?

Continue reading “Monster Panel II” »

Monster Panel I

Handling thousands of controls is easier than you think.

LabVIEW programmers progress from the excitement of the new paradigm to just using it as a tool.  We’ve all produced some “spaghetti” code, and we’ve all had to look at somebody else’s flavor of spaghetti and think “wow – that’s bad”.  Somehow, our own spaghetti is never quite as bad as theirs, and we all go through it.

If I were to tell you that I have a front panel with 12889 (twelve thousand eight hundred and eighty-nine) controls/indicators on it, the beginners among you might just drop their jaws and declare it impossible, and the more experienced might start having visions of diagram wires packed sixty to the inch and panel controls in 3-point fonts, all overlapped and crowded.

And then you might imagine how to make all that work. And then you might imagine why someone would do something like that.  And all those visions of spaghetti come dancing in your head.

Calm down and start coding.

Continue reading “Monster Panel I” »

Needle in the Haystack

Finding the best answer is not always straightforward.

Scientists are not programmers. Repeat that after me: scientists are not programmers. It’s not their fault; it’s just a lack of proper training.  If you are implementing some algorithm given you by a scientist, it’s important to know this and account for it.

Certain algorithms are not direct – most often for some process which is not easily reversible.  For example, I was given the task of implementing a way of finding the Wet-Bulb temperature, given the Dewpoint temperature, the Dry-Bulb temperature, and the Barometric Pressure.  Accompanying this task was some code, written by a scientist, in some form of BASIC.

To accomplish this, they started with an estimate of the wet-bulb temp (initially the dewpoint temp) and worked forward, using the known equations to convert wet-bulb temp into dewpoint temp, then compared that result to the known dewpoint (Tdew).  If the result was less than the known dewpoint, they added a constant 0.05 degrees to the estimate and tried again.  When the result exceeded the known dewpoint, they called it good and returned the latest estimate as the final answer.
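Reconstructed as a sketch in Python (the forward-conversion function here is a fake, monotonic stand-in for the scientist’s real psychrometric equations, just so the loop runs), that approach looks like this:

def dewpoint_from_wetbulb(t_wet, t_dry, pressure_mb):
    # NOT real psychrometrics: just a stand-in that increases with t_wet,
    # which is all the search below relies on.
    return t_wet - 0.1 * (t_dry - t_wet)

def find_wetbulb_fixed_step(t_dew, t_dry, pressure_mb, step=0.05):
    # Start the wet-bulb estimate at the dewpoint and march upward in fixed
    # 0.05-degree steps until the forward calculation overshoots the known
    # dewpoint; the latest estimate is declared the answer.
    estimate = t_dew
    while dewpoint_from_wetbulb(estimate, t_dry, pressure_mb) < t_dew:
        estimate += step
    return estimate

print(find_wetbulb_fixed_step(t_dew=10.0, t_dry=25.0, pressure_mb=1013.25))

It works, and the answer lands within one step of the true crossing, at the cost of however many iterations that fixed step happens to require.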

Scientists are not programmers. If you ask them about this, they will say that it gets the right answer.  If you ask them how they came to choose 0.05 as the step size, after the blank stare (while they think about it), you will get an answer something like “Well, that’s the tolerance I want”.  If you really press them, they will come up with “Well, any smaller and it’ll take too long – any larger and it’ll not be correct enough”, which is exactly true.  That step size is somebody’s wild guess.

Continue reading “Needle in the Haystack” »

Beware Simplicity

Simpler ≠ faster: you still have to know what happens “under the hood”.

If you read the post about en masse operations, you might remember that I pointed out that you should know what is happening behind the scenes. Here is a particular case where what looks like simpler code actually takes longer to execute.  If you don’t take the time to think about what is actually going on, then you might be fooled.

Consider a pair of signals, each around 12000 samples.  Regulations state that I am allowed to drop (delete) certain samples from those signals before performing statistical operations on them.  The number of points to be dropped might be 2-10%, or up to 1200 points.  I have the indexes to be dropped in a third array.  For graphing purposes, I need to keep the dropped points in separate arrays.

Now every programmer worth his salt has fallen into the trap of deleting elements 3, 5, and 8 from an array: if you try the straightforward way, you find out that after you delete element 3, element 5 is not in the same place it was before!  So you either have to delete element 8 BEFORE you delete element 5 and then 3, or you have to delete element (3-0), then element (5-1), and then element (8-2).
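A sketch of the safe version of that bookkeeping (in Python here; this only illustrates the index-shifting trap, not necessarily the approach the rest of the post settles on):

def split_dropped(values, drop_indexes):
    # Delete in DESCENDING index order so earlier deletions cannot shift the
    # positions of elements still waiting to be deleted, and keep the dropped
    # points in their own array for graphing.
    values = list(values)
    dropped = []
    for i in sorted(drop_indexes, reverse=True):
        dropped.append(values.pop(i))
    dropped.reverse()   # put the dropped points back in their original order
    return values, dropped

kept, dropped = split_dropped([10, 11, 12, 13, 14, 15, 16, 17, 18], [3, 5, 8])
# kept    -> [10, 11, 12, 14, 16, 17]
# dropped -> [13, 15, 18]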

Continue reading “Beware Simplicity” »

Terminator 2: the Sequel

Make sure that quitting time is followed by happy hour.

As mentioned earlier, a compiled LabVIEW application behaves similarly to the development system when terminating.  Namely, it leaves the main window on the screen, waiting for you to close it.  That’s handy in the DevSys, because you usually want to work some more on the program after quitting.

But in an executable, it’s not so good, because the user doesn’t understand why the window hangs around.

Continue reading “Terminator 2: the Sequel” »

Virtual Devices

When you don’t have the DAQ hardware you need…

Any recent version of NI-DAQ and the Measurement and Automation Explorer (MAX) has provisions for “simulated” devices.  You choose which devices you want, and NI-DAQ will then pretend those devices are actually installed on your system; any calls to DAQ functions concerning those devices will succeed (or fail) just as if a real device were installed.

This lets you simulate a client’s setup without having their hardware shipped to you, and do most (if not all) of the programming on your own terms without being at their site.  The data produced is, of course, simulated: for an analog input channel it’s a sine wave; for a digital port, it’s a counting pattern.  That’s enough for you to tell whether your software is working correctly with NI-DAQ.

With their hardware simulated on your machine, you can handle the basic communication part to get data in and out.  Then, if you need to, you can install conditional-compilation pieces to substitute data that is more realistic for your particular situation.
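The substitution idea, sketched generically in Python (the flag and the signal shape are hypothetical; in LabVIEW this would live in a conditional-disable structure rather than a runtime branch):

import random

USE_SIMULATED_DATA = True   # hypothetical switch standing in for a conditional-disable symbol

def read_channel(sample_index, hardware_read=None):
    if USE_SIMULATED_DATA:
        # Something shaped like the client's real signal, say a slow drift
        # with noise, instead of the generic sine the simulated device supplies.
        return 20.0 + 0.001 * sample_index + random.gauss(0.0, 0.05)
    return hardware_read(sample_index)   # the real DAQ read when hardware is present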

You can be reasonably confident that the DAQ part of a program you develop this way will work on the real hardware, the same as it did on your simulated hardware.  Of course, for extreme cases (high sample rate, high channel count) the simulation will be less exact, but it’s a useful feature for developing faster with fewer headaches.

What time is it, again?

The TIMESTAMP indicator is smart enough to get you into trouble.

Just ran into what at first appeared to be a bug, but turned out to be proper, if misunderstood, behavior.

I have a project which records data files.  When the actual recording starts, and again when it stops, I remember the time (using the GET DATE/TIME IN SECONDS function) in a TIMESTAMP variable, which is stored in the data file.  There might or might not be some calibration activity after the recording has stopped.  When the DONE button is finally clicked, I record the current time, using the FORMAT DATE/TIME STRING function, into another string field called “TEST TIME”.
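In Python terms (just an analogy for the two LabVIEW functions named above), the two kinds of record go into the file in different forms:

import time

stop_time = time.time()                           # START/STOP: an absolute time value; whatever
                                                  # displays it later decides how it is rendered
test_time = time.strftime("%Y-%m-%d %H:%M:%S")    # TEST TIME: already rendered into text at the
                                                  # moment the DONE button is clicked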

I have a viewer which examines the data files and reports various info about them. The indicator that shows the TEST TIME is on a different window from the one that shows the START TIME and END TIME.

If no CAL operations are done, the TEST TIME has always been just a few seconds later than the STOP time (enough time to react to the test being done and click the DONE button), or perhaps a few minutes later.

However, I recently noticed that, on a data file my client sent me, the TEST TIME was almost an hour EARLIER than the STOP time.  How could that be?  Further rummaging through other files that I had from him showed the same thing: the TEST TIME was just short of an hour EARLIER than the STOP time.  If there were CAL operations done, this difference was 45-55 minutes; if not, it was a few seconds short of an hour.

I have run thousands of tests on my machine without noticing this; my client has also run nearly a thousand, and has never brought it up.  Why is that?

Continue reading “What time is it, again?” »

