Data Analysis Archives | DMC, Inc.
https://www.dmcinfo.com/blog/tag/data-analysis/

Data Logging with Panasonic GT Series HMIs
https://www.dmcinfo.com/blog/21773/data-logging-with-panasonic-gt-series-hmis/ (Mon, 08 Apr 2019)

If you're interested in storing production, sensor, or other data from your automation system, Panasonic GT series HMIs provide a flexible method for logging data from one or multiple PLCs to any standard SD card. Logs are stored in CSV files which can be opened in Excel for manipulation and analysis.

In this example, I demonstrate my setup for recording part inspection data using an FP series PLC and GT series HMI. Using this setup, our client can ensure their parts are within spec while identifying trends in their data, enabling them to improve their processes.

Data logging is configured using Panasonic's GTWIN software. The interface for configuring the logging is located in the system settings menu. As shown below, this HMI is configured to log to three different files based on job number. A maximum of sixteen different files can be created.

Logging Files

A single data point is recorded on the rising edge of the trigger configured for each log file. The record can be triggered cyclically, at a specific time, or from a PLC tag, as demonstrated below for Job 1.

In this example, data is captured every time a measurement is taken, as indicated by the R141 tag set in the PLC code. Once the capture finishes, the HMI writes to R200 to signal completion back to the PLC.

Logging File

During operation, the HMI saves log data in RAM until one thousand records have accumulated, at which point the data is written to the SD card. Additionally, this transfer can be requested directly from the PLC by configuring the 'Save Setting' of the device shown below. Here, the PLC can request a RAM transfer using the R151 address.

Furthermore, according to the configured settings, data will be overwritten after 60 files have been recorded. This limit is important to keep in mind when considering how frequently the data will be moved from the SD card to a more permanent storage device.

Logging file 2

Up to this point, we have yet to configure the data which will compose each record. By default, each record will include the date/time when the trigger occurred. Here, our record is composed of two measurements used for part verification. A maximum of 128 data points can be used for each record.

Logging device

The settings for each data point can be defined individually to determine the structure of the data that will be stored. Here, our measurements are float values located at the address DT110 in the PLC.

data logging configuration
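Once the CSV logs are copied off the SD card, the same manipulation the client does in Excel can also be scripted. Below is a minimal Python sketch using pandas; the file name and column names ("Date", "Time", "Meas1", "Meas2") are placeholders, since the actual headers depend on how the logging devices are named in GTWIN.

```python
import pandas as pd

# Load a GT-series data log CSV pulled off the SD card (hypothetical file name).
log = pd.read_csv("JOB1_LOG.csv")

# Combine the date and time columns into a single timestamp index.
log["Timestamp"] = pd.to_datetime(log["Date"] + " " + log["Time"])
log = log.set_index("Timestamp")

# Flag any records that fall outside hypothetical spec limits.
out_of_spec = log[(log["Meas1"] < 9.5) | (log["Meas1"] > 10.5)]
print(f"{len(out_of_spec)} of {len(log)} records out of spec")

# A rolling mean helps spot slow drift in the measurement over time.
print(log["Meas2"].rolling(window=50).mean().tail())
```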

Using this setup, basic data logging can be configured for a system with minimal effort and with no additional hardware/software. If you require a more sophisticated data storage/retrieval system, consider reaching out to DMC for help designing your solution.

Learn more about DMC's Manufacturing Automation and Intelligence Services.

Turning Data into Dollars using MES and MOM
https://www.dmcinfo.com/blog/25512/turning-data-into-dollars-using-mes-and-mom/ (Fri, 17 Jun 2016)

Manufacturers are rapidly increasing their capability to collect and analyze data. This data can be used to deploy KPIs, like OEE, providing real-time and historical feedback on the productivity of manufacturing operations. This data can also be used as the basis for continuous improvement projects designed to increase efficiency and reduce waste.
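To make the KPI concrete, OEE is typically computed as availability × performance × quality. Here is a minimal Python sketch of that standard calculation; the shift numbers are made up purely for illustration.

```python
def oee(planned_time_min, run_time_min, ideal_cycle_time_s, total_count, good_count):
    """Standard OEE = Availability x Performance x Quality."""
    availability = run_time_min / planned_time_min
    performance = (ideal_cycle_time_s * total_count) / (run_time_min * 60)
    quality = good_count / total_count
    return availability * performance * quality

# Made-up numbers for one shift, purely for illustration.
print(f"OEE: {oee(480, 420, 1.5, 14000, 13500):.1%}")
```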

But how do manufacturers know what impact these projects are having on their bottom line? Are they worth the cost to deploy? DMC and Alta Via, two founding members of the Siemens MEAC (Manufacturing Operations Management Expertise Alliance Center), are partnering to answer this question while bringing advanced data analytics to your factory floor. DMC provides the data collection and manufacturing intelligence tools to capture everything that happens on the factory floor. Alta Via provides the financial model and business analysis to determine the impact of changes in manufacturing metrics on financial metrics.

By providing the data collection and analytic power of Siemens WinCC and SIMATIC IT combined with the cost modeling capability of Alta Via's ProEO, DMC and Alta Via can not only collect and analyze data coming from your factory floor, but they can also tell you how much improvements in quality, performance, and efficiency will impact your bottom line.

In short, they translate improvements in typical KPI measures such as OEE into improvements in your bottom line.

The benefits of this approach go beyond identifying the projects with the greatest potential: they also allow you to monitor the effects of your improvement projects in real time. You can verify not only that your KPIs are improving, but that the company's bottom line is improving as well.

Customer Benefits:

  • Data collection and historian from existing equipment, with integration to existing automation systems
  • Real-time and historical KPI access
  • Reporting, dashboards, and data analysis available in: 
    • HMI/SCADA systems for plant floor access
    • Web portal for front office access
    • Mobile devices
  • Tracking of each improvement project's effect on KPIs
  • Financial modeling of plant operations, with integration into existing business systems
  • Predictive impact: which potential projects are worth investing in?
  • Real-time tracking of the effect of KPI improvements on financial data: is a project realizing its predicted impact, and why or why not?

For example, let's suppose a manufacturer identifies a tradeoff between quality and performance. They can easily optimize their line performance to maximize output and thus their top-line revenue. They could even go a step further and adjust for wasted raw material to optimize profitability. But what they don't know is the secondary cost of producing bad product or the secondary benefits of increased production. There may be costs to identify, evaluate, and dispose of this product that are not easily captured in this optimization. There may be reduced labor costs or scheduling optimizations realized from better productivity. Further, the performance level that optimizes profitability is not the one that achieves maximum OEE. The combination of DMC's and Alta Via's tools captures all of this information and provides a more detailed and actionable picture of plant performance.


In this case, the manufacturer needs to make quality improvements before, or as part of, their performance improvement initiative. Only then will the performance improvements increase not only OEE but overall profitability as well. Thus, you can turn your data directly into dollars.

Learn more about DMC's Siemens Solutions.

 

Data Logging with Siemens S7-1200 PLCs
https://www.dmcinfo.com/blog/26228/data-logging-with-siemens-s7-1200-plcs/ (Mon, 19 Oct 2015)

The creation and maintenance of data records is a very important part of keeping machines running efficiently. For example, tracking downtime and uptime is crucial for providing proper maintenance on a machine, while logging stoppages and other events can help provide insight into troubleshooting any system. Maintaining these data logs sometimes requires purchasing additional software and hardware, as well as some programming to integrate everything together.

Siemens now packs data logging features into its PLCs and HMIs, which is great because it allows us to implement logging without needing to purchase anything additional.

Logging comes as a feature in both Basic and Comfort Panels, as seen below.

Basic Panel Logging:

Comfort Panel Logging:

The Comfort Panel logging is nicer because you can store the logs in CSV files, which are easily opened in Excel, whereas the Basic Panels require the logs to be stored as TXT files.

Siemens has now also created a group of instructions just for Data Logging so that we can create custom data logs using lower cost processors.

Each of these instructions is very easy to use, and they can be packaged into a single Function Block like the one I created below.

This Function Block allows the user to write values to a data log on “Command Log”, using the data in “Data_block_1”.Logs[0].

The result of this code will be to add the values of “Int1” and “Real1” to the end of the “Data0” data log with a time stamp. “Int1” lines up with the “Var1” header and “Real1” is under the “Var2” header. If the “Data0” data log doesn’t exist yet, the code will create it.

Once the data log is open, I can write values to it, and after every write I close the data log. I used a simple state machine to move through each step of the data logging process, part of which can be seen below.
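The same create-open-write-close sequencing can be sketched outside the PLC. Below is a minimal, PLC-agnostic Python illustration of that state-machine pattern, using a local CSV file to stand in for the data log; the file name and column headers are arbitrary stand-ins for the "Data0" log and its "Var1"/"Var2" headers.

```python
from enum import Enum, auto
import csv
import os

# States mirror the data log sequence: create the log if it doesn't exist,
# then open it, append one record, and close it again.
class LogState(Enum):
    IDLE = auto()
    CREATE = auto()
    WRITE = auto()
    DONE = auto()

def log_record(path, header, values):
    state = LogState.IDLE
    while state != LogState.DONE:
        if state == LogState.IDLE:
            state = LogState.CREATE if not os.path.exists(path) else LogState.WRITE
        elif state == LogState.CREATE:
            with open(path, "w", newline="") as f:
                csv.writer(f).writerow(header)   # create the log with its header row
            state = LogState.WRITE
        elif state == LogState.WRITE:
            with open(path, "a", newline="") as f:
                csv.writer(f).writerow(values)   # open, append one record, close
            state = LogState.DONE

log_record("Data0.csv", ["Timestamp", "Var1", "Var2"], ["2019-04-08 12:00:00", 42, 3.14])
```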

After writing to data logs I can then download, rename, and delete them easily through the webserver.

There are many different ways to log data with the S7-1200 and S7-1500 PLCs. For faster or more extensive data logging, I recommend keeping the logs open while writing takes place. However, only a limited number of logs (10 in this case) may be open at a time. Also, data logs have length limits, so you may need to programmatically add new data logs, clear data logs, or delete them.

Contact DMC regarding Siemens S7 PLC Programming Projects.

Learn more about DMC’s Siemens S7 PLC Programming services.

Using WebDAV to Transfer Files from a Linux cRIO
https://www.dmcinfo.com/blog/26523/using-webdav-to-transfer-files-from-a-linux-crio/ (Wed, 08 Jul 2015)

When using a real-time system for data acquisition or control, there is often a need to transfer files between the real-time device and a PC. There are many ways to do this, but newer Linux-based NI CompactRIOs come with WebDAV and SSL support enabled by default. This makes WebDAV an easy option to use right out of the box. The first time I used it, I noticed a couple of pitfalls that are worth documenting. This will be a brief post to point out those details. For this post, I used an NI cRIO-9068.

Configuration
As mentioned above, the Linux cRIOs have WebDAV and SSL support enabled by default. To confirm, open NI-MAX, and expand Remote Systems. Expand the cRIO, then Software, then NI CompactRIO. You should see SSL Support and WebDAV Server listed, as in Figure 1.

Figure 1: A properly-configured cRIO includes SSL Support and the WebDAV Server.

As long as those are available, you’re already configured.

Establishing a Connection in Windows
If a WebDAV server is running on the cRIO, then you can connect directly in Windows as if it were a traditional network shared directory. Both Windows 7 and 8 have built-in WebDAV clients. So, let's say the cRIO's IP address is 192.168.10.200 (and your PC is on the same subnet as the cRIO):

  • Open Windows Explorer
  • In Windows 8, click the Computer menu, then Map Network Drive.
  • In Windows 7, press Alt to expose the menu bar, then click Tools, then Map Network Drive.
  • Select a drive letter.
  • Uncheck Reconnect at logon, since the cRIO may not always be available.
  • In the “Folder:” field, type “http://192.168.10.200/files”
    • Use your cRIO’s IP address, of course.
    • Don’t forget the “/files” at the end! This isn’t a placeholder for the filename you’re looking for, it is the literal string “/files.”
  • Click Finish, and give it a second. It will then ask you for a username and password.
    • These are the same credentials used to log in to the cRIO over SSH, or in NI-MAX.
    • By default, the username is "admin" and the password is blank.
  • Hit OK, give it another couple of seconds, and you should be presented with an Explorer window showing the files on the cRIO's hard drive.

If this works, then you know WebDAV is working fine on your PC client and your cRIO server.

A Few Brief Words about File Paths
Note that you cannot write files anywhere you like on the cRIO's hard drive. If you're familiar with Linux, the layout of the drive will look familiar. /home/lvuser/natinst/bin is where we'll make files in this example. /C/ni-rt/startup is a symbolic link to /home/lvuser/natinst/bin in order to preserve some compatibility with the conventions of older cRIOs. If none of this makes sense to you, don't worry about it; just keep your files in /home/lvuser/natinst/bin until you get it figured out.

Transferring files in LabVIEW
If you've been able to connect in Windows, then you should be able to connect programmatically in LabVIEW. A complete example VI is shown below in Figure 2. Note that this VI runs on the PC, so the transfer is in terms of "getting" the file from the cRIO. The example takes all of its VIs from LabVIEW's WebDAV palette, which you can access from the Data Communication palette → Protocols palette → WebDAV palette → WebDAV Synchronous.

Figure 2: A demo program that transfers a file from the cRIO to the PC, then deletes the file on the cRIO. Note that this VI runs on the PC side.

The Asynchronous VIs will also work, but will return before the operation completes. This is nice if you want to tell WebDAV to get multiple files, and then just let your program move on while those transfer in the background. However, in this example, I want to get my file and then delete it from the cRIO. I therefore use the Synchronous VIs so I know the “get” operation is complete before deleting.

The important information here is the following:

  • The “host uri prefix” input of the Open Session VI is exactly the same as what you used to connect in Windows Explorer. Don’t forget, you need the literal “/files” part at the end.
  • The “username” and “password” inputs of Open Session are the same as what you used in Windows Explorer also.
  • The “verify server” input of Open Session can be used for higher-level authentication, but for the cRIO, set it to false.
  • The “relative uri” input of Get is the path to the file you want to get, using the UNIX file path convention.
  • The “local file path” input of Get is where your file will go on your PC, using the Windows file path convention.
    • You cannot just give it a directory path. This VI is not smart enough to know that you want to put it there and keep the same filename. Your “local file path” must be a path ending in a file name.
    • It will be happy to overwrite a file on your local drive, if a file already exists at “local file path.”
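Outside LabVIEW, the same synchronous get-then-delete sequence can also be scripted, since a WebDAV download is just an HTTP GET against the same URL used above. The Python sketch below uses the requests library; the remote file name and local path are hypothetical, and this is an illustration of the idea rather than a tested tool.

```python
import requests
from requests.auth import HTTPBasicAuth

# Same connection details used in the Windows Explorer example above.
BASE = "http://192.168.10.200/files"
AUTH = HTTPBasicAuth("admin", "")  # default cRIO credentials: "admin" with a blank password

remote_path = "/home/lvuser/natinst/bin/log.tdms"  # hypothetical file on the cRIO
local_path = r"C:\data\log.tdms"                   # must end in a file name, not just a folder

# "Get" the file (synchronous: the call does not return until the download finishes).
resp = requests.get(BASE + remote_path, auth=AUTH, timeout=30)
resp.raise_for_status()
with open(local_path, "wb") as f:
    f.write(resp.content)

# Only after the download has completed successfully, delete the file on the cRIO.
requests.delete(BASE + remote_path, auth=AUTH, timeout=30).raise_for_status()
```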

Conclusion
Since the cRIO hosts the WebDAV server, the PC is acting as a client that connects, gets the files it wants, then disconnects. This is completely different from having the cRIO send (or "put") files to the PC, but it has the advantage of being already configured by default. For more information, see the online manuals for the synchronous and asynchronous VIs.

LabVIEW NI Report Generation Toolkit: Using Word Templates to Create Reports
https://www.dmcinfo.com/blog/28844/labview-ni-report-generation-toolkit-using-word-templates-to-create-reports/ (Tue, 07 Aug 2012)

Although the NI Report Generation Toolkit has its cons (namely, dependencies), it can be particularly useful for programmatically creating, saving, and/or printing a clean report in MS Word or MS Excel.

The purpose of this blog is to share some tips, tricks, and points of caution when creating an MS Word report from a template file (.dot). When opened, a template file automatically creates a new .doc file, preserving the original template; it serves as a starting canvas into which you can insert text and graphics.

To start, create a new Word template file (.dot). LabVIEW will use this template to create a new document (.doc) and programmatically populate pre-defined bookmarks with data.

Bookmark locations are selected by highlighting text (this will be replaced when data is inserted) or by the current location of the cursor. Select Insert>Bookmark, and choose a name for the bookmark location. Note: Word does not like spaces in the names of its bookmarks.

Figure 1: Creating bookmarks in MS Word

Either text or graphics can be inserted as data into the bookmark locations. The VI snapshot below shows an example of doing both.

Figure 2: Using NI Report Generation Toolkit to create an MS Word report file

Under the Data2 bookmark, an image of a control will be inserted using its control reference. Graphs, tables, and charts are obvious choices. One trick I found when wanting to combine multiple controls/indicators into one image was to create a separate tab control with clones of each control/indicator and use the tab control's reference as the exported image.

Under the Data1 bookmark, the string 'Data!' will be inserted at the bookmark location.
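For context, the Report Generation Toolkit drives Word through its automation interface to do this bookmark filling. The Python sketch below (using pywin32) illustrates the same idea outside LabVIEW; the template path, bookmark names, and image path are hypothetical, and this is a rough sketch rather than a drop-in replacement for the toolkit.

```python
import win32com.client  # pywin32

# Hypothetical template path and bookmark names -- adjust to your own report.
TEMPLATE = r"C:\reports\report_template.dot"

word = win32com.client.Dispatch("Word.Application")
word.Visible = False

# Creating a document from a template leaves the .dot file untouched.
doc = word.Documents.Add(TEMPLATE)

# Writing to a bookmark's range replaces the highlighted placeholder text.
doc.Bookmarks("Data1").Range.Text = "Data!"
# A graphic can be dropped into a bookmark's range as an inline shape.
doc.Bookmarks("Data2").Range.InlineShapes.AddPicture(r"C:\reports\graph.png")

doc.SaveAs(r"C:\reports\report_output.doc")
doc.Close()
word.Quit()
```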

One common error code is Error 41110, which derives from a bookmark referenced in LabVIEW not existing in the Word template.

Before building the executable, it is crucial that the LVClass and NIReport.llb folders are added as folder snapshots to the project and selected as ‘always included’ in the project build specifications. Both are found here:

C:\Program Files (x86)\National Instruments\LabVIEW 2011\vi.lib\Utility

Figure 3: Include the LVClass and NIReport.llb folders in the LabVIEW build

Without either of these folders, it is likely that the executable will be broken, you will encounter a build error, or error 7 will occur at run time when calling any VIs from the Report Generation Toolkit.

Other Considerations: Using Print Report.vi

One frustrating issue I encountered using the NI Report Generation Toolkit's Print Report.vi was a mysterious LabVIEW crash (see Figure 4) when running an executable on a customer's dated test stand machine. No error appeared when running the program on my Windows 7 machine (where I built the .exe).

Figure 4: LabVIEW crash

The program was attempting to print an Excel report, similar to the block diagram below. Ultimately, I painstakingly narrowed down the program crash to Print Report.vi and decided to do some searching on NI.com (in hindsight, I wish I had done this sooner).

Figure 5: Be cognizant of the MS Office version on the build and target PCs

Other developers had encountered the same problem. The root cause, and ultimately the fix, was not in the program; it came down to the version of MS Office installed on the computer that built the .exe versus the version of MS Office installed on the target PC. In many cases, this bug appears if the target PC is running MS Office 2003 or earlier and the program is built on a PC with MS Office 2007 or newer.

Possible solutions to these issues are either upgrading the target PC’s MS Office to anything newer than 2003 or building the .exe on a PC with the same version of MS Office as the target.

Learn more about DMC’s Data Analysis, Data Mining, and Reporting expertise.

Using Siemens S7-300 PLCs to Report System Errors
https://www.dmcinfo.com/blog/29646/using-siemens-s7-300-plcs-to-report-system-errors/ (Thu, 20 Jan 2011)

The Siemens S7-300 line of PLCs has about a million great features integrated into the programming environment. After I recently inherited a PLC project where some (many) of these were not implemented, I thought it might be a good idea to do a recap on some of the more useful functions Step7 has to offer us.

By far one of the most useful, and sadly underutilized, capabilities available to us from the Step7 environment is the "Report System Errors" utility. This utility, when activated, generates pre-built function blocks and adds them to your project. These FBs capture all diagnostic data and CPU messages and spit them out in a human-readable form perfectly compatible with the Siemens line of WinCC or WinCC Flex HMIs (though, if you are particularly ambitious, you can even use them on non-Siemens HMIs).

How much downtime has been wasted in your facility while your maintenance team tracked down the cause of that annoying red "System Fault" light in the panel, only to give up and pay for a programmer to come to your facility to plug in and see what the problem is? If this has happened to you, your programmer has not done their job. The RSE utility can give maintenance detailed diagnostics right on the HMI, sometimes more detailed than you might have thought possible. Instead of a blank "system fault" light and a line down for hours, imagine seeing the following on your HMI alarm log:

Name: IM151-3PN   Dosing Station 1 I/O Rack   Slot 3   Module: Port 2
Part order number: 6ES7 151-3BA23-0AB0
Digital input wire break

This is just one of thousands of possible custom error messages that get automatically added to your alarm list. You should be able to see right away why this should be in every S7-300 project. This is the kind of thing that changes downtime from “hours” to “minutes”.

So why do I find PLC projects that are not using the RSE utility? Is it because system integrators don’t know about it, or because they don’t understand how it works? This blog solves the first problem. Stay tuned for the next blog, where I talk about how it works, and specifically how to use it on WinCC and WinCC Flex HMIs.

Learn more about DMC's Siemens S7 PLC programming services.

LabVIEW Data Storage: TDMS Performance Tweaking
https://www.dmcinfo.com/blog/29828/labview-data-storage-tdms-performance-tweaking/ (Wed, 16 Jun 2010)

In the first part of my series, LabVIEW Data Storage: Overview of TDMS, I introduced TDMS as our preferred file format and pointed users toward exploring and using TDMS themselves. In this post, I'm assuming that you are comfortable handling channels, properties, and data and want to learn more about optimizing your TDMS files to both decrease disk space and improve performance when opening and modifying files.

Preventing File Fragmentation

The TDMS file format is optimized for streaming, so data is not arranged neatly in the file. Instead, every time data is written to the file, a new chunk is added that includes some header information and then a chunk of binary data. Essentially, the header specifies which channels and how many samples are represented in the chunk of binary data. This header plus binary data is then just tacked onto the end of the file. This allows, for instance, writing 100 samples to one channel, then switching and writing to another channel, and then coming back to write to the first. In a more traditional file format, the entire file would need to be re-organized on disk in order to keep the data sequential. TDMS handles this by using its headers and simply writing sequentially.

You may have noticed that as you write to a TDMS file, two files are created and grow larger as more data is written. The first is the *.tdms file as expected, but the other file is *.tdms_index. The normal *.tdms file is a dump of the packets we discussed above. However, in order to quickly read these packets later, a second file is created that lists each data chunk and exactly where it can be found. If the tdms file is an encyclopedia of data, then the tdms_index file is a table of contents that allows us to flip to the correct page without reading or caring about the pages before it.

I'm simplifying everything a bit, but for our purposes, this is a fair description of how the actual file is laid out. If you're interested, the full details of the on-disk format are available from NI here: TDMS File Format Internal Structure

So what does all this pushing to disk mean to us? Well, assume we are only saving one single sample point at a time to our tdms file. Each time we wrote, we would need to generate a large header and write it to disk just to record a few bytes of data. Our file would grow incredibly large, especially compared to the actual data contained! This is called fragmentation. National Instruments provides a mechanism to address this in the background: a buffer is created by default for every channel we create and write to. Unfortunately, documentation on this feature is fairly sparse, and based on my testing the default buffer size seems to be one, which is effectively no buffering. This means that each TDMS Write creates a new chunk instead of samples being stored up in the background and then written as one larger chunk that contains samples from multiple TDMS Write calls.

By default, this is usually okay since most applications are only acquiring, and thus writing, data at a relatively slow rate. However, over a long test, this can lead to high fragmentation and very large files. These files also generate large tdms_index files since there are so many chunks to keep track of. Other than higher disk usage, there is another detriment to these files: it takes much longer to open them later, since the entire index needs to be read in and processed. In fact, a good measure of fragmentation is comparing the size of your tdms file to the size of your tdms_index file. The worst-case scenario would be if these files are near identical sizes.
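That comparison is easy to script. Here is a tiny Python sketch (the file path is a placeholder) that reports the index-to-data size ratio for a log file:

```python
import os

tdms_path = "mytest.tdms"  # placeholder path to your log file

tdms_size = os.path.getsize(tdms_path)
index_size = os.path.getsize(tdms_path + "_index")

# The closer this ratio gets to 100%, the more fragmented the file is.
print(f"index/data size ratio: {index_size / tdms_size:.2%}")
```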

There are two solutions to this problem. The simplest is to run a "defragment" on your tdms file after you are finished generating it. The TDMS Defragment function can be found on the palette with the other TDMS tools. The other solution is to prevent this fragmentation in the first place. You may already be thinking about storing your data in an array and waiting for it to reach a specific size, or maybe just not acquiring data as quickly, but there is a much more elegant solution. NI gives us access to a fairly undocumented property that will allow us to adjust their background buffer. It's called NI_MinimumBufferSize and needs to be specified for each channel you create. There is some documentation from the LabVIEW Help here: Setting the Minimum Buffer Size for a .tdms File.

Setting Buffer Size for TDMS

Determining an appropriate size for your application basically involves a balancing act between disk usage, RAM usage, and data integrity. If your application or OS crashed before a buffer is written to disk, your unwritten data will be lost. This can be avoided by using the Flush TDMS function, also available on the TDMS Palette. This function explicitly writes the buffers to disk for a given TDMS file, no matter how many samples are in them. At DMC we often use the Flush TDMS function when changing between test steps, or at a given interval (say 15 minutes) if the data rate is fluctuating or under user control. Obviously, this is very dependent on the specific implementation.

The overall goal here is to minimize our fragmentation in order to keep disk usage low and more importantly to keep opening and interacting with TDMS files quick.

Efficiently Managing Metadata

Remember earlier when I discussed the difference between binary data and metadata such as properties and names? Well, metadata can also have a strong effect on the performance of our TDMS files. Metadata is not stored and opened the same way as the raw binary data. Because metadata includes a lot of the important hierarchy and channel properties for a TDMS file it needs to be quickly accessed. If we go back to the encyclopedia analogy from above, this metadata makes up our table of contents. It would be impossible to look up the data for a specific channel without knowing the names of the channels that exist in our file. This can also be true of other properties, like channel length or datatype. For this reason, all metadata is loaded into memory when you open a TDMS file. It also means that as you write more metadata, your memory footprint grows.

This may seem fairly trivial right now, but consider the following case. You're writing an application that goes through a series of test steps. During the test, you need to monitor data from 500 channels. Maybe some of these are analog inputs, maybe some of them are data values received over CAN or serial, but the application has 500 distinct values that need to be recorded during the test. The simplest implementation is to create a new group for each step the test runs through and then save each of your channels by name into that group. Well, if we estimate an average of 10 letters per channel name, with 500 channels and 100 steps, you now have 500k bytes of extra metadata (at one byte per character). This doesn't take into account the default properties that each channel has, like NI_ChannelLength or NI_DataType. Those account for another 27 characters per channel, or another 1350k bytes. When you consider parsing time and reading all of this from different places on disk, your TDMS files will open very slowly indeed, even if the files themselves aren't large.

There are a number of ways to get around this, and I'll talk about them more in depth in my next post.

Reading Only What You Need

Another option for TDMS optimization that is often under-utilized is the ability to only read certain chunks of the raw data. For example, if I am only interested in the last half hour of a test, there is no need to read in the entire data structure and then discard half of it. Instead, I can read the properties of the channel, including length, time interval, and start time and then calculate where I need to start my read and how many samples to retrieve. This doesn't provide a huge benefit with lower data rates or low channel count applications, but if we are talking about an incredibly large TDMS file, then it may not even be possible to pull the full waveform into memory.

This can also be used when analyzing past data. If you are looking for a signal to transition from 0 volts to 10 volts in a larger dataset, the most efficient way to search is via chunking. Chunking is the process of reading in a smaller sample set over and over. Instead of reading in a full array of 100k samples, we can read in 1k samples at a time and check for the voltage transition. If we really wanted to optimize the process, we would tune the chunk size (powers of 2 work well; 1024 is often a good choice) and make sure that we reused the same chunk of data for each read and inspection to prevent multiple memory allocations.
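The blog's own examples are LabVIEW block diagrams, but the same chunked scan can be expressed in a few lines of Python using the third-party npTDMS package, which supports partial reads when a file is opened in streaming mode. The file, group, and channel names below are made up for illustration.

```python
import numpy as np
from nptdms import TdmsFile  # third-party "npTDMS" package

CHUNK = 1024  # powers of 2 tend to work well

# TdmsFile.open() streams from disk instead of loading the whole file into memory.
with TdmsFile.open("big_test.tdms") as tdms:
    channel = tdms["Measurements"]["Voltage"]    # hypothetical group/channel names
    for start in range(0, len(channel), CHUNK):
        chunk = channel[start:start + CHUNK]     # reads only this slice from disk
        hits = np.flatnonzero(chunk >= 5.0)      # look for the 0 V -> 10 V transition
        if hits.size:
            print(f"Transition found at sample {start + hits[0]}")
            break
```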

Conclusion

This should provide a good primer on dealing with complicated and larger datasets using the TDMS format. By utilizing the above techniques you can not only substantially decrease the file size, but also increase the performance and responsiveness of opening and managing these files. Obviously, the exact implementation will vary greatly from one application to another, but generally, each of these tips can be used in some facet.

For a hands-on demonstration of how we pulled all of these techniques together to implement a data file format for a Battery Management System (BMS) Validation Test Stand, check out the next blog in this series: LabVIEW Data Storage: TDMS Usage Case Study

Learn more about DMC's Data Analysis, Data Mining, and Reporting expertise.

LabVIEW Data Storage: Overview of TDMS
https://www.dmcinfo.com/blog/29909/labview-data-storage-overview-of-tdms/ (Mon, 29 Mar 2010)

This is part one of my blog series detailing the use and optimization of the TDMS format. Here is a table of contents for all blogs in this series:

LabVIEW Data Storage: Overview of TDMS

LabVIEW Data Storage: TDMS Performance Tweaking

LabVIEW Data Storage: TDMS Usage Case Study

Here at DMC, almost all of our LabVIEW applications acquire data at one point or another. This data often needs to be saved to disk for later review, display, and/or export to a report. Years ago, logging to disk would mean working with our client to decide what type of file format fit their needs the best.

If they were concerned about disk space and needed high-speed streaming rates, we would use a raw binary file or another binary format like NI's datalog. If they wanted the data available for review and calculations in Excel, we would create a CSV file, with the trade-offs being speed and large log files. If they were simply looking to save settings or setup data, we would use an INI file format. In the last couple of years, though, NI has released a file format called TDM, which was later expanded by a format called TDM Streaming, or TDMS.

TDMS is described in fair depth on National Instruments' site (NI TDM Data Model). Essentially, it's a file that combines metadata strings for things like names, properties, and hierarchy with binary data for the higher-throughput, more byte-consuming raw data. This model gives it the flexibility of an INI or XML file but with the high-speed access and efficiency of a pure binary file.

TDMS Hierarchy

Because of the great utility of TDMS, we find ourselves now using it on nearly every LabVIEW project, even if it’s just to save some data during the debug phase.

To get started with TDMS, search the LabVIEW examples and start with the simplified “Write TDMS File.vi” and “Read TDMS File.vi”. You can easily access the TDMS functions from the LabVIEW Palette. Just look in the File I/O submenu.

TDMS in LabVIEW Palette

My next blog post will focus on TDMS performance and what's happening behind the scenes. Before going much further, I recommend familiarizing yourself with the concept of Groups, Channels, and Properties and how they can be used within the TDMS Hierarchy. Either look through the NI site on this subject (NI TDM Data Model) or play around with the example VIs until you have a good feel for what's going on.
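If it helps to see that hierarchy in code form rather than a screenshot, the short Python sketch below writes a file with one group, one channel, and properties at each level, using the third-party npTDMS package; all names and values are made up for illustration.

```python
import numpy as np
from nptdms import TdmsWriter, RootObject, GroupObject, ChannelObject  # third-party "npTDMS" package

# File-, group-, and channel-level properties illustrate the three hierarchy levels.
root = RootObject(properties={"operator": "DMC"})
group = GroupObject("Test Run 1", properties={"fixture": "A"})
channel = ChannelObject("Test Run 1", "Voltage",
                        np.linspace(0.0, 10.0, 100),   # the raw binary data
                        properties={"unit": "V"})

with TdmsWriter("example.tdms") as writer:
    writer.write_segment([root, group, channel])
```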

The next blog in this series is LabVIEW Data Storage: TDMS Performance Tweaking

Learn more about DMC’s data analysis, data mining and reporting expertise.

LabVIEW TDMS Write-Close Issue
https://www.dmcinfo.com/blog/30280/labview-tdms-write-close-issue/ (Tue, 14 Jul 2009)

I discovered an interesting issue with the LabVIEW 8.6 TDMS Write / Close routine that I want to share and document for anyone else unlucky enough to encounter it. In defense of LabVIEW, this issue is only encountered if the programmer uses the TDMS Write and Close out of order (yes, I admit to this). However, in my defense, the error produced by such a 'boneheaded' programming mistake should not abruptly crash your system.

Background:

I was stress testing a large automated testing software package (similar to this Case Study) which we are developing for a client using National Instruments' LabVIEW 8.6 Developer Suite. The program is fairly large and runs many tests; however, the focus of this testing was on one test routine in particular. This routine samples and processes CAN messages from the DUT (device under test), controls a large power supply and load bank to cycle the product, checks safety and interlock systems, streams data to disk using TDMS, and has to update a fairly complex user interface. One of the advantages of using LabVIEW for this project is the ease of constructing parallel operations for tasks like these. As such, each of the tasks mentioned above was set in its own parallel looping structure within the testing application.

Microsoft Visual C Runtime Error

The Problem:

After running a series of tests over several days, I found that the program was working great… for a few hours at a time. Whenever the program ran for any length of time, I consistently ran into a frustrating error: the dialog box shown in the image above:

“Microsoft Visual C++ Runtime Library: Runtime Error! Program: …LabVIEW.exe This application has requested the Runtime to terminate in an unusual way. Please contact the application’s support team for more information.”

The most frustrating part was that I AM THE ‘Support Team’, and I had no idea what was going on here. All I knew was that my application stopped running.

The Source:

After a few hours of debugging, and some lucky guesswork, I located the source of the bug. After running for an extended period of time, my parallel loops drifted out of sync. Specifically, by the end of a test, the main loop would close the handle to my TDMS file before my file-streaming loop had written its last data points. Not a good programming strategy.

Not knowing anything about this ahead of time, I would have guessed that LabVIEW would throw a normal error, which I could have caught and used as a good indicator of where the problem was occurring and how to fix it. However, the exception is not caught by LabVIEW. Instead, you get the mysterious Visual C++ dialog box and a crashed application.

Reproducing the Bug:

However, if you run a small program like the one shown below, you can see for yourself. The program is nothing but a textbook TDMS Open, Write, and Close… followed by a twist: another TDMS Write using the previously closed TDMS reference. Again, you would expect LabVIEW to execute this and report a normal error out of the error cluster. Instead, you get the Visual C++ runtime error.

Labview TDMS Close Handle Bug
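For comparison, here is what the same mistake looks like against an API that fails gracefully: a short Python sketch that writes to a handle after closing it and simply catches the resulting exception instead of bringing down the whole process.

```python
# Textbook open, write, close... followed by the same mistake:
# another write using the already-closed handle.
f = open("data.csv", "w")
f.write("timestamp,value\n")
f.close()

try:
    f.write("1.0,42\n")  # write after close -- the bug from the block diagram
except ValueError as err:
    # A well-behaved API reports the misuse as a catchable error
    # rather than tearing the whole application down.
    print(f"Caught the mistake gracefully: {err}")
```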

Conclusion:

Simple fix: do not close TDMS references until you are done with them! Alternatively, NI has a workaround shown here. Or you could upgrade to LabVIEW 8.6.1, which fixed this issue.

On the other hand, if you ever see a Microsoft Visual C++ Runtime Library Runtime Error Dialog box sitting on your screen where your application used to be, maybe you need to check your TDMS Write/Close order.

Learn more about DMC’s data analysis, data mining and reporting expertise.
