DMC, Inc. https://www.dmcinfo.com/

Resolving “No Space Left on Device” (ENOSPC) When Building Yocto in WSL2 https://www.dmcinfo.com/blog/41046/resolving-no-space-left-on-device-enospc-when-building-yocto-in-wsl2/ Fri, 06 Feb 2026 13:00:00 +0000

After spending hours building a Yocto Linux image in WSL2, it is discouraging to see it crash with a deceptively simple error with surprisingly complex root causes:

ShellScript
 No space left on device (errno = 28, ENOSPC) 

This error often shows up late in the build after you’ve already invested significant time, and it can come back even after cleaning up build files and artifacts. The reason is that WSL storage has one extra layer compared to a typical Linux workstation, and Yocto is excellent at stressing it. 

This post walks through what ENOSPC really means in WSL2, why Yocto triggers it so reliably, how to diagnose the root cause, and the correct fixes that make your builds predictable again. 

What ENOSPC Actually Means (and Why WSL2 Makes It Trickier)

On a traditional Linux machine, ENOSPC usually means one of two things: 

  • You’re out of disk space
  • You’re out of inodes (metadata entries used to track files) 

In WSL2, there’s a third common cause: the Linux filesystem lives inside a virtual hard disk file on Windows (typically ext4.vhdx). So, you effectively have two storage layers: 

  • Inside WSL2 (Linux view) – An ext4 filesystem with “available space” 
  • On Windows (host view) – A VHDX file that must be able to expand on the host disk 

If the VHDX can’t grow (because the Windows host drive is full, or because you’ve hit a configured limit), Linux reports ENOSPC even if you clearly remember deleting a bunch of files yesterday. 
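To tell these causes apart, check both space and inodes from inside WSL first; if both look healthy, suspect the host-side VHDX. A quick check with standard coreutils:

```shell
# Inside WSL: check free space and free inodes on the root filesystem.
df -h /    # "Use%" near 100% means ext4 is out of space
df -i /    # "IUse%" near 100% means ext4 is out of inodes
```

If neither column is near 100% yet builds still fail with ENOSPC, look at the free space on the Windows drive that holds ext4.vhdx.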

Why Yocto Builds Hit the Wall So Fast 

Yocto produces a large amount of output and (more importantly) a massive number of small files. A typical build tree can easily exceed 100GB once you include build artifacts, shared state, downloads, logs, and repeated iterations. A representative footprint looks like this: 

  • build/tmp: ~90 GB 
  • sstate-cache: ~9 GB 
  • downloads: ~2 GB 

Even a “single” build can push you into triple-digit GB usage quickly, especially with multiple machine configs, SDKs, images, or rebuild cycles. This is why even developers with large SSDs hit space issues faster than expected. 
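To see where that usage actually lands, du from the build directory works well. The directory names below follow the typical Yocto layout; adjust them to your tree (BUILDDIR is the variable oe-init-build-env sets):

```shell
# From the Yocto build directory: size of the big three, then the largest
# subdirectories overall.
cd "${BUILDDIR:-.}"
du -sh tmp sstate-cache downloads 2>/dev/null || true
du -h --max-depth=1 . 2>/dev/null | sort -rh | head -n 5
```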

Why You Shouldn’t Put Yocto Under /mnt/… 

It’s tempting to just place your Yocto workspace on a Windows host drive (/mnt/c/yocto/…) so it uses space on your physical SSD instead of the virtual hard disk file. But the Windows filesystem (NTFS) is mounted into WSL via a translation layer. That cross-OS layer is convenient, but it’s not optimized for builds that touch hundreds of thousands of files. 

Microsoft’s guidance for these situations is straightforward: for best performance, keep your build files inside the WSL filesystem. In practice, that means keeping the Yocto tree under something like ~/yocto, not under /mnt/c/… 
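The difference is easy to demonstrate yourself: time a burst of small-file creation in your home directory, then again under /mnt/c. This is a rough illustration rather than a rigorous benchmark, and the loop count is arbitrary; expect the /mnt/c run to be dramatically slower:

```shell
# Create and delete 1,000 tiny files; run once under ~ (ext4) and once under
# /mnt/c/... (translated NTFS) and compare the wall-clock times.
workdir=$(mktemp -d)
cd "$workdir"
time sh -c 'for i in $(seq 1 1000); do echo x > "f$i"; done'
cd - >/dev/null && rm -rf "$workdir"
```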

The Right Fix: Clean, Reclaim, and Put the VHDX on the Right Disk 

Step 1: Clean Yocto Build Artifacts (Inside WSL) 

If you need to get unblocked quickly, removing build outputs helps:

ShellScript
rm -rf build/tmp/* 
rm -rf sstate-cache/* 
rm -rf downloads/* 

This frees space inside ext4, which may be enough to complete a build. 

Step 2: Shrink/Optimize the VHDX (So Windows Actually Gets Space Back)

Deleting files inside ext4 does not necessarily reduce the physical size of ext4.vhdx. It’s common for the VHDX to grow during heavy workloads and not automatically return that space to Windows. 

To reclaim space on the Windows host, shut down WSL, then optimize the VHDX from PowerShell: 

ShellScript
wsl --shutdown 
Optimize-VHD -Path "D:\WSL\Ubuntu\ext4.vhdx" -Mode Full 

This approach is commonly recommended for compacting WSL VHDX files after cleanup.

Note: Optimize-VHD requires the Hyper-V VHD tooling (available on many Windows editions). If you don’t have it, you may need to enable the appropriate Windows features. 
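If Optimize-VHD isn’t available on your edition, diskpart’s compact vdisk is a commonly used fallback. Shown here as a command transcript, not a script to paste; run it from an elevated prompt after shutting WSL down, and adjust the example path to your VHDX location:

```shell
wsl --shutdown
diskpart
# then, at the DISKPART> prompt:
#   select vdisk file="D:\WSL\Ubuntu\ext4.vhdx"
#   attach vdisk readonly
#   compact vdisk
#   detach vdisk
```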

Step 3: Move the WSL Distribution to a Larger Drive 

If your Yocto workflow is a long-term need, the most robust fix is to ensure the distro (and therefore the VHDX file) lives on a drive with plenty of headroom. 

Method 0: Why “Moving Ubuntu” in Windows Settings Doesn’t Work 

Windows has an app setting that appears to “move” Ubuntu. In practice, this often moves the application wrapper, not necessarily the underlying storage you care about (VHDX file). If you need more space, you typically need to move the distribution’s VHDX (or relocate the distro) to a larger drive using WSL-supported methods (covered below). 

Method A: Export / Import (Reliable and Widely Supported)

ShellScript
wsl --export Ubuntu D:\backup\ubuntu.tar 
wsl --unregister Ubuntu 
wsl --import Ubuntu D:\WSL\Ubuntu D:\backup\ubuntu.tar --version 2 

This moves the distro storage to D:\WSL\Ubuntu. 

Method B: wsl --manage --move (Newer, Simpler) 

Newer WSL releases include a built-in move command: 

ShellScript
wsl --update 
wsl --manage Ubuntu --move D:\WSL\Ubuntu  

This is supported in WSL versions that include the --move option and is often the cleanest approach when available. 

Conclusion 

In WSL2, “No space left on device” often isn’t just about what df shows inside Linux. It’s about how WSL stores Linux data in a Windows-hosted VHDX that expands over time and may not automatically shrink. 

The most reliable path to predictable Yocto builds is: 

  1. Budget hundreds of GB; Yocto is not a 20 GB project. 
  2. Keep Yocto workspaces on ext4 inside WSL (not /mnt/*).
  3. Clean build artifacts when needed.
  4. Compact the VHDX after cleanup.
  5. Move the VHDX file to a larger drive.

If you are working through WSL2 and/or Yocto setup challenges, or “works‑on‑my‑machine” issues across your team, DMC can help you design and set up a development environment that is stable, scalable, and fast. 

Learn more about DMC’s embedded development and programming expertise and contact us today to get started.

Examining Switching Architectures of Automated Test Equipment https://www.dmcinfo.com/blog/40806/examining-switching-architectures-of-automated-test-equipment/ Thu, 05 Feb 2026 13:00:00 +0000

Test instrumentation can be expensive, and one effective way to reduce that cost is with switching. In the sections below, I’ll walk through a few real-world examples of switching architectures I’ve relied on over the years to help customers improve their test systems. 

A typical automated test system has several layers: The first is the interface to the device under test (DUT), which adapts the test system’s connections to the DUT’s specific form factor. Then there’s the signal conditioning layer, which adjusts instrumentation signal levels to match those required by the DUT. After that, there’s the instrumentation layer itself, responsible for generating or measuring signals such as digital I/O, analog I/O, communication buses, power buses, or arbitrary waveforms. 

Finally, there’s the switching architecture layer, which routes signals from the instruments out to the DUT. This layer is the focus of this discussion. 

Switching architecture diagram

The switching layer plays a critical role in both reducing cost and improving flexibility. It can dramatically reduce the amount of instrumentation hardware required while allowing the test system to maintain a common test interface that automatically reconfigures itself for multiple DUT variants. Switching can also improve throughput, enabling parallel testing of multiple units or rapid reconfiguration between tests. 

In addition, a well-designed switching architecture can give development engineers deeper insight into their systems by allowing many test points to be accessed by a shared set of instruments. 

Basic Switching Blocks 

Before diving into specific switching architectures, it’s helpful to review two fundamental building blocks that form the basis of most switching systems: the switch matrix and the multiplexer (mux). 

Switch Matrix 

The first building block is the switch matrix. You can find a more detailed discussion of switch matrices in this post, but in general, a switch matrix can be thought of as a many-to-many connection block. It allows any signal present on its interface to connect to any other signal on the same interface. 

Switch matrices are typically characterized by two main features: 

  • Number of signal interfaces: how many individual signals can connect into the matrix. 
  • Number of signal buses: how many common signal lines those interfaces can be routed onto. 

Matrices are often described using these dimensions. For example, the “14×4 matrix” shown below provides 14 interface signals (represented by the vertical lines) and 4 signal buses (represented by the horizontal lines), allowing flexible interconnection between multiple instruments and DUT channels. 

Switching architecture diagram

Multiplexer

The second fundamental building block is the multiplexer, or mux. A multiplexer provides a one-to-many or one-of-many connection path. 

For this discussion, there are three specifications to pay attention to: 

  • Channels: the number of output paths the input can be connected to. 
  • Banks: groups of channels that can be switched independently. 
  • Poles: the number of conductors switched together (for example, a single-pole or double-pole mux). 

A simple example is a three-channel, single-pole, single-bank mux, which allows a single input signal to connect to any one of three output channels. 

It’s important to note that not all multiplexers behave the same way. Some allow the input to connect to more than one output simultaneously, while others restrict it to a single output. For the purpose of this article, we’ll assume a multiplexer can connect a signal to none, one, or more than one output channel, depending on design. 

Switching architecture diagram
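To make the channel/bank/pole terminology concrete, here is a toy software model of a mux. It is plain JavaScript and purely illustrative; the class and names are invented for this sketch, not part of any instrument API:

```javascript
// Toy model: a mux has `banks` independent groups, each selecting one of
// `channels` outputs; each selection switches `poles` conductors together.
class Mux {
  constructor(channels, banks = 1, poles = 1) {
    this.channels = channels;
    this.poles = poles;
    this.selected = Array(banks).fill(null); // null = disconnected
  }
  select(bank, channel) {
    if (channel < 0 || channel >= this.channels) {
      throw new Error(`channel ${channel} out of range`);
    }
    this.selected[bank] = channel; // all poles of this bank switch at once
  }
}

// The three-channel, single-pole, single-bank mux from the text:
const mux = new Mux(3, 1, 1);
mux.select(0, 2); // route the input to output channel 2
```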

Other Basic Blocks 

To help illustrate the switching architectures discussed later, the following schematic symbols will be used. 

Switching architecture diagram

Real World Architectures 

A Simple Switch Matrix 

Use Case 

This is one of the simplest and most straightforward switching architectures, but also one of the most common and powerful. It forms the foundation for DMC’s Helix SwitchCore platform and served as the base architecture for a Mobile Calibration Test Stand (MCTS). 

Despite its simplicity, this design provides a great balance of flexibility, scalability, and maintainability, making it ideal for systems that need to test multiple variants of a product or perform parallel testing. 

Switching Architecture 

In this architecture, DUT signals and instrument signals share the same switch matrix. The matrix is selected to provide the appropriate number of buses and interface connections based on the test requirements. 

For example, the MCTS system implemented a 64×12 switch matrix, providing: 

  • 64 interface signals to accommodate multiple DUT configurations, and 
  • 12 signal buses for routing measurement and power signals across multiple devices. 

The system included multiple digital multimeters (DMMs) and programmable power supplies connected through the matrix. This setup allowed the system to test up to four DUTs simultaneously or quickly reconfigure itself to adapt to different DUT pinouts. 

Switching architecture diagram

Architecture Benefits 

  • Simple: A single switch matrix handles the entire routing scheme, and no additional routing layers are required. 
  • Expandable: The initial system used a 64×12 matrix, but the design allows for expansion simply by adding additional matrix modules. DMC’s SwitchCore architecture can be expanded to support thousands of signals, as in this implementation. 
  • Configurable: Any instrument can be routed to any DUT pin, making it easy to add new instruments or modify DUT configurations without requiring a complete redesign of the test system. 

Voltage Isolation Multiplexer 

Use Case

This architecture was used on a Power Distribution Unit (PDU) End-of-Line (EOL) Test System that supported testing for an entire family of power distribution units. 

One of the required functional tests was a high-potential (hipot) test up to 700 VDC. However, several other tests on the same DUT pins required low-voltage instrumentation. To further complicate matters, the system also needed to support multiple product variants, each with different DUT pinouts and electrical specifications. 

Switching Architecture

To meet these requirements, this switching architecture built upon the simple matrix design by adding a layer of high-voltage multiplexing. 

The multiplexer layer enabled safe high-voltage testing—such as hipot tests—while maintaining the flexibility to route low-voltage signals to the same pins. It also provided isolation between high- and low-voltage paths, protecting sensitive instruments during high-voltage operation. 

From a cost perspective, the flexibility of the matrix was critical. Rather than multiplexing every signal line, only the 12 DUT pins requiring high-voltage capability were routed through the high-voltage mux. The remaining 24 signals connected directly to the matrix. This selective approach reduced hardware costs by limiting expensive high voltage switching components to only the lines where they were needed, without sacrificing system adaptability. 

In this configuration: 

  • The DUT signals first passed through a high-voltage multiplexer with 12 banks of two channels each. 
  • The DUT interface supported up to 36 pins, with 12 routed through the high-voltage mux for hipot testing and the remaining 24 connected directly to the switch matrix. 

This architecture preserved the reconfigurability of the base matrix design while adding high-voltage capability and cost efficiency—all within a single, unified system. 

Architecture Benefits 

  • High-voltage isolation: Enables high-voltage testing functionality without sacrificing access to DUT pins for low-voltage measurements. 
  • Integrated hipot testing: Combines what are often two separate test stands (functional test and hipot test) into a single system, eliminating an extra manufacturing step and improving throughput. 
  • Cost-effective design: By routing only necessary lines through the high-voltage multiplexer, the design minimized the number of costly HV switching components required. 
  • Configurable and scalable: Maintains full flexibility to support multiple product variants and adapt to future test requirements. 

Distributed Multiplexing for Single Point Measurements and Monitoring 

Use Case 

In development test systems, engineers often need the ability to probe various signals throughout a system during debugging or validation. This is commonly done using a handheld digital multimeter (DMM). However, as systems grow to include hundreds or thousands of signals, manual probing quickly becomes impractical, time-consuming, and error-prone. 

Hand probing also introduces risks: it can break configuration integrity or even violate procedural control requirements, particularly in aerospace and safety-critical applications. To address these challenges while maintaining flexibility and automation, a distributed multiplexing architecture can be implemented. We most recently leveraged this for a project using our Auto-BOB architecture. 

Switching Architecture 

This is one of the most complex switching topologies, using a multilayered approach to balance flexibility, scalability, and cost. 

While large switch matrices can provide ultimate routing freedom, they become expensive as channel count grows. By introducing layers of multiplexing, the system maintains the core configurability of a switch matrix but significantly reduces costs through signal down-selection using lower-cost multiplexers. 

This approach pays off particularly for very high channel-count systems, where the total number of signals can reach hundreds or thousands. 

To enhance reliability and safety, this architecture often employs specialized monitored multiplexers that can verify the switch position and ensure no inadvertent shorts occur within the system. 

Each DUT signal passes through multiple multiplexers, with each multiplexer creating its own measurement bus. This effectively gives each DUT signal multiple probe or measurement points that can be dynamically routed to instrumentation. Only one signal can occupy a measurement bus at a time, but by distributing signals across several buses, engineers can measure any two points in the system through the shared switch matrix. 

In one implementation, each signal group was tied to two measurement buses. For example: 

  • Measurement buses A and B serve Signal Group 1. 
  • Measurement buses C and D serve Signal Group 2. 

Because each multiplexer handles multiple DUT signals, a single signal must connect to two separate multiplexers to enable pairwise measurement while maintaining isolation. These selected signals are then routed back to the switch matrix, which connects to all instrumentation (DMMs, power supplies, etc.). 

Switching architecture diagram

An additional advantage of this design is its distributed nature. The multiplexers can be physically separated from the main switch matrix enclosure, with local multiplexers housed near DUT interfaces and the matrix located near shared instrumentation. This setup allows multiple test systems to share the same pool of measurement instruments, further reducing equipment cost and improving overall utilization. 

Switching architecture diagram
Switching architecture test stand

Architecture Benefits 

  • Highly scalable: Supports systems with hundreds or thousands of signal channels without requiring massive monolithic switch matrices. 
  • Cost-efficient: Reduces high-density matrix requirements by layering lower-cost multiplexers for signal selection, minimizing total hardware expense. 
  • Flexible and configurable: Allows any two points in the system to be connected for measurement, supporting a wide range of development and diagnostic tests. 
  • Improved safety and reliability: Monitored multiplexers verify switch positions and prevent inadvertent shorts or misconfigurations. 
  • Distributed design: Enables instrumentation to be shared across multiple test systems, reducing overall equipment investment and maximizing utilization. 
  • Supports automation and repeatability: Eliminates the need for manual probing, improving consistency and compliance in regulated or high-complexity environments. 

DUT Multiplexing for High Throughput End of Line Test 

Use Case 

In a manufacturing environment, throughput matters. In DMC’s Automated Photodiode Tester Application, multiplexers and switch matrices were combined to enable batch testing of up to 48 devices under test (DUTs). Multiplexers cycled through each DUT to perform functional testing, while a shared switch matrix handled instrument routing. This combination allowed the system to test multiple DUT variants with different interfaces and specifications using a common hardware platform. 

Switching Architecture 

This system used a 24-channel, eight-pole multiplexing card to select one of 24 DUTs. Each DUT had eight pins, with each pin connected to a separate pole of the multiplexer. Because the multiplexer provided 24 channels with eight poles each, a single mux card could support up to 24 DUTs. 

By adding a second identical card, the system expanded to 48 DUTs total. The outputs of the multiplexer cards were routed into a switch matrix, which connected to the functional test instrumentation. 

To achieve higher throughput, duplicate instrumentation and matrix modules were added to the system, one for each multiplexer card, allowing the system to operate in parallel. This configuration enabled testing over 300 DUTs per hour, dramatically increasing production efficiency while maintaining flexibility across product variants. 

Switching architecture diagram

Architecture Benefits 

  • High throughput: Parallelized multiplexing and shared instrumentation allowed testing of dozens of DUTs simultaneously, achieving over 300 units per hour. 
  • Scalable and modular: Additional multiplexer cards or matrix modules can be added to increase capacity or support new product families. 
  • Efficient use of instrumentation: Shared matrix routing maximizes the utilization of costly test hardware, minimizing total equipment investment. 

Multiplexing for Differential Serial Bus Electrical Verification 

Use Case

This is a niche but fascinating application of switching architecture. Integration engineers often verify data integrity on serial communication buses, but sometimes it’s also necessary to capture and analyze the electrical characteristics of the signal itself. 

This poses unique challenges. Serial buses often operate at data rates exceeding 1 Mbps, where any additional loading, attenuation, or reflections introduced by the test system can degrade signal quality. In these cases, the test setup must be designed to minimize its electrical impact, because it’s the signal quality itself, not just the transmitted data, that’s under evaluation. 

Architecture 

Testing multiple serial buses compounds the difficulty. To address this, DMC implemented a stubless multiplexing architecture designed to preserve signal integrity while allowing automated electrical verification and waveform capture. 

The system was constructed using several double-pole, double-throw (DPDT) relays, each acting as a two-channel, two-pole multiplexer. By chaining these relays in a tree configuration, the architecture could select any one of several serial buses to route to an oscilloscope for signal capture. 

When a bus was not selected, its signal passed directly through the system without interruption, allowing all buses to continue normal communication. When a bus was selected, its signal was routed up to the oscilloscope while simultaneously passing through the test system and back out, enabling real-time monitoring without disrupting communication. 

This design minimized the formation of stubs on the communication lines, which is critical for maintaining clean signal edges and preventing reflections that can corrupt high-speed signals. 

Switching architecture diagram

Architecture Benefits

  • Preserves signal integrity: The stubless relay topology minimizes reflections and loading, ensuring accurate electrical characterization of high-speed differential signals. 
  • Enables real-time testing: Signals can be captured while communication continues uninterrupted, allowing engineers to validate bus behavior under true operating conditions. 
  • Automates complex measurements: Multiplexed relay control allows seamless switching between buses for waveform capture, reducing manual reconnections and improving test throughput. 

Conclusion 

Switching architecture might not always be the flashiest part of a test system, but it’s often what determines how flexible, scalable, and future-proof that system can be. From simple switch matrices to complex multi-layer multiplexing and distributed setups, each architecture offers a different balance of cost, capability, and control. 

In development environments, flexibility can mean the difference between a day and a week of reconfiguration. In production, smart multiplexing can translate directly into higher throughput and lower test costs. And in specialized electrical verification setups, like high-speed serial bus testing, the right switching design can be the difference between reliable measurements and misleading results. 

At the end of the day, the best architecture isn’t always the most complex: it’s the one that fits your goals, scales with your products, and keeps your test system adaptable for whatever comes next. Thoughtful switching design doesn’t just connect instruments and devices; it connects your entire test strategy to long-term success. 

Ready to take your Test & Measurement project to the next level? Contact us today to learn more about our solutions and how we can help you achieve your goals.

A Beginner’s Guide to Scripting in Siemens WinCC Unified for Advanced HMI Control https://www.dmcinfo.com/blog/40902/siemens-wincc-unified-scripting/ Wed, 04 Feb 2026 13:00:00 +0000

Industrial automation projects are always specialized in their application and industry, making flexibility and adaptability necessary in all forms of programming. This extends not just to PLC controls, but to the HMI screens and devices that operators interact with regularly. 

Sometimes, the built-in functions of an HMI platform are not enough to accomplish the desired goal. When creating HMI screens for Unified Basic Panels or Unified Comfort Panels, the use of scripting within components can allow for additional functionality not found in the basic properties of components.

Scripts can allow for more information to be displayed without the addition of PLC logic, such as mathematical expressions involving more than one tag. Scripts can also be used as functions triggered by an event, such as a button press on the HMI. Siemens scripting uses JavaScript and object-oriented principles to allow engineers more flexibility by using an established language for ease of implementation. This blog reviews the basics of WinCC Unified Scripting. 

Accessing Scripts – Global Modules and Functions 

When developing an HMI program, scripts can be created for use in a variety of ways. To do so, first drill down to the “Scripts” folder in the project tree under the HMI device in your project. From there, select “Add a new global module” to create a container for one or more global functions. Once the global module is created, select “Add a new global function”, which will open the script editor. From here, users can create their own custom functions.  

Siemens WinCC Unified scripting interface

Triggering Global Functions 

Once a global function is created within a global module, it can be called in a variety of ways. One method is utilizing the “Scheduled Tasks” feature, which allows a function to run based on a specified condition.

For example, to run the script “Function” when the HMI tag “eStopPressed” changes value, a scheduled task must be created. Once it is created, the default trigger option is “Tags,” which triggers on the value change of an HMI tag. Other trigger options include cycle times ranging from 500ms to one year, as well as alarms. Note that selecting a short cycle time can cause overloads, and that a function triggered by a tag cannot write to the trigger tag.  

Siemens WinCC Unified scripting interface
Siemens WinCC Unified scripting interface

Once an HMI tag is selected, select the “Events” tab under the task to select the function to run upon trigger. Click the dropdown under the name column to open the menu as shown, and scroll to the bottom to find “Script functions”>”Global module”>”Global module.Function.”

Siemens WinCC Unified scripting interface

Once a function is selected, choose parameter values to input to the function for this specific run case. These parameters can be several data types, which can be selected using the dropdown in the “Value” column.  

Siemens WinCC Unified scripting interface

Accessing Scripts – HMI Components 

HMI components have two main ways to utilize scripts: Events and Dynamic Properties. To have an HMI event trigger a script, first navigate to the HMI component and then to the “Events” tab. Select the preferred trigger event, then click the button shown below to convert the function list to a script.  

Siemens WinCC Unified scripting interface

To use a script to define a component property, first navigate to the component in question and to “Properties.” In the “Dynamization” column, use the dropdown to select Script. This will open a script editor to allow for control of the desired property.

Siemens WinCC Unified scripting interface

The variable “value” refers to the state of the property being scripted. For example, if I created a script that resulted in “value = 5” for width, the component would display with a width of five pixels. Additional code should always be written between the declaration of the “value” variable and the return.  
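As a sketch of that pattern, the following computes a property value from two tags. It is written as plain JavaScript so the logic is runnable anywhere: readTag and the tag names are stand-ins invented for this sketch; in a real WinCC Unified dynamization script you would read the HMI tags through the runtime’s tag API instead:

```javascript
// Stand-in for reading an HMI tag, invented for this sketch. In WinCC Unified,
// the dynamization script would read the real tags via the runtime instead.
function readTag(name) {
  const mock = { FlowRate: 12.5, ScaleFactor: 4 };
  return mock[name];
}

// Dynamization-style logic: derive one property from more than one tag,
// with no extra PLC code. The editor's generated wrapper returns `value`.
let value = readTag("FlowRate") * readTag("ScaleFactor");
value = Math.min(Math.max(value, 0), 200); // clamp to a sensible pixel range
// `value` (here 50) is what the property, e.g. width, will be set to
```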

Script Tools 

Siemens offers a few helpful tools as part of the script editor to make it easier to create working code. Some of these tools exist as buttons on the script editor, and some are built-in features, such as: 

  • Syntax highlighting 
  • Snippets (code templates) 
  • System functions 
  • Referencing HMI objects 
  • Tooltips 
  • Autocomplete 
  • Error marking and correction 
  • Find and replace 
Siemens WinCC Unified scripting interface
Credit: Siemens

Snippets 

Scripting in WinCC Unified component properties includes pre-built sections of code called “Snippets,” which are meant to accomplish a specific, common function without building the script from scratch.

To access snippets, right-click within the script editor to open the menu. From there, you can drill into the HMI Runtime or Logic snippets. HMI Runtime snippets include functions such as opening a faceplate as a pop-up window, writing parameter sets to the PLC, and user management options. Logic snippets include basic structure for if-else statements, for-loops, and more. 

Once a snippet is added, review the code block and add or change the items inside to accurately reflect your project. For advanced users, it is possible to create custom snippets following the steps here.  

Siemens WinCC Unified scripting interface

Ready to take your Manufacturing Automation project to the next level? Contact us today to learn more about our solutions and how we can help you achieve your goals. 

The post A Beginner’s Guide to Scripting in Siemens WinCC Unified for Advanced HMI Control appeared first on DMC, Inc..

]]>
Does DMC Do Automated Test Systems for the Semiconductor Industry? https://www.dmcinfo.com/blog/41100/does-dmc-do-automated-test-systems-for-the-semiconductor-industry/ Tue, 03 Feb 2026 13:00:00 +0000 https://www.dmcinfo.com/?p=41100 Why Haven’t You Heard Much About This Lately? If you’ve browsed our recent Test and Measurement case studies, you might notice that semiconductor projects aren’t front and center. There’s a reason for that, and it’s not because DMC can’t or won’t do them. For decades, semiconductor manufacturing and its associated test system development largely moved overseas. As fabrication and assembly plants shifted to Asia, so did the capital budgets for automated […]

The post Does DMC Do Automated Test Systems for the Semiconductor Industry? appeared first on DMC, Inc..

]]>
Why Haven’t You Heard Much About This Lately?

If you’ve browsed our recent Test and Measurement case studies, you might notice that semiconductor projects aren’t front and center. There’s a reason for that, and it’s not because DMC can’t or won’t do them. For decades, semiconductor manufacturing and its associated test system development largely moved overseas. As fabrication and assembly plants shifted to Asia, so did the capital budgets for automated test equipment. U.S.-based system integrators like DMC naturally focused on industries where investment remained strong: automotive, aerospace, energy, and medical. 

A lab technician holds a semiconductor in a laboratory

What’s Changing Now? 

The tide has started turning. With initiatives like the CHIPS Act and renewed focus on supply chain resilience, billions of dollars are flowing back into U.S. semiconductor manufacturing. New fabrication and advanced packaging facilities are being built here at home, and these projects come with serious capital budgets for automation and test systems. This resurgence means the need for robust, flexible, and scalable test solutions is greater than ever.

We’ve Done This Before, and We’re Ready to Do It Again 

Although recent years have seen fewer semiconductor projects, DMC has a history in this space. Check out these classic examples from our archives: 

These projects demonstrate our ability to design and implement automated test and process control systems for high-tech environments like semiconductor and telecom manufacturing. Engineers who worked on these systems are still with DMC, and they’d love to dust off their semiconductor expertise and apply it to modern challenges using today’s tools. 

Why DMC Is a Great Fit for Semiconductor Test Automation 

  • Deep Automation Expertise: From PXI-based test stands to MES integration, we know how to build systems that scale. 
  • Modern Tools & Frameworks: NI TestStand, LabVIEW, Python, .NET, and custom frameworks like DMCquencer and CORTEX. 
  • Industry Knowledge: We understand the critical importance of yield, traceability, and uptime in semiconductor manufacturing. 
Needles test a silicon wafer in a lab

Let’s Build the Future Together 

If you’re planning a new fab, upgrading a process line, or supporting an existing one, and you need automated test systems for semiconductor devices, DMC is ready to help. We combine decades of experience with cutting-edge technology to deliver solutions that meet your performance, reliability, and compliance requirements. 

Learn more about DMC’s test and measurement automation expertise and contact us today to start a conversation.

The post Does DMC Do Automated Test Systems for the Semiconductor Industry? appeared first on DMC, Inc..

]]>
Custom IoT Development Services https://www.dmcinfo.com/blog/41266/custom-iot-development-services/ Mon, 02 Feb 2026 13:00:00 +0000 https://www.dmcinfo.com/?p=41266 The Internet of Things (IoT) is a rapidly growing and evolving technical niche. Driven by the convenience and transparency gains associated with linking a physical object to a digital presence, more businesses are exploring IoT integration as part of their systems. Choosing a platform for an IoT solution is an important part of the process, […]

The post Custom IoT Development Services appeared first on DMC, Inc..

]]>
The Internet of Things (IoT) is a rapidly growing and evolving technical niche. Driven by the convenience and transparency gains associated with linking a physical object to a digital presence, more businesses are exploring IoT integration as part of their systems. Choosing a platform for an IoT solution is an important part of the process, with many options and tradeoffs to consider. Here, we will be discussing when a custom IoT solution is a good choice.

IoT Solutions Overview

IoT solutions typically comprise a fleet of field devices, a cloud-hosted hub with which the devices communicate, and a user portal providing visualization and control. 

The field devices could be a wide variety of “things,” including single-purpose sensors, consumer electronics, manufacturing equipment, vehicles, and more. Each device is provided with a way to identify itself as unique within the fleet and a protocol for communicating with the hub.

The hub sends messages to and receives messages from the devices, and it processes and saves data for the portal to consume.

The portal provides an interface for a user to view, handle, and react to the data provided by the field devices and may take the form of a web interface, a mobile application, or both.

These solutions provide value for their end users through increased data availability and transparency, as well as convenient device management and control. On the reporting side, messaging from the devices can relay status, utilization, and data for aggregate reporting across the system. On the management side, the portal provides an easy way to view information about the devices, download updates to the devices, or control the configuration in the field.
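To make that three-part flow concrete, here is a minimal sketch of a device message, a hub that ingests it, and a portal-style aggregate query. The field names and aggregation are our own illustration, not any specific platform’s schema:

```javascript
// Illustrative device -> hub -> portal flow (no real IoT platform API).

// A field device identifies itself uniquely and reports telemetry.
function deviceMessage(deviceId, temperature) {
  return { deviceId, timestamp: Date.now(), temperature };
}

// The hub receives messages and keeps the latest state per device.
class Hub {
  constructor() { this.latest = new Map(); }
  ingest(msg) { this.latest.set(msg.deviceId, msg); }
  // A portal-style aggregate query over the fleet.
  averageTemperature() {
    const msgs = [...this.latest.values()];
    return msgs.reduce((sum, m) => sum + m.temperature, 0) / msgs.length;
  }
}
```

A portal would render values like `averageTemperature()` (and the per-device state kept by the hub) in a dashboard or mobile view.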

Custom vs. Off-the-Shelf

There are existing solutions available for purchase for a number of use cases that benefit from IoT integration; the most familiar of these might be consumer systems like thermostats or security systems. Other places you might see an off-the-shelf IoT solution could be inventory tracking for retail, or “smart building” solutions that monitor energy use and HVAC conditions.

The alternative to an off-the-shelf product is a custom-built solution, where the user application, cloud infrastructure, devices, or all three are self-managed. These solutions might add visibility to an existing process or define a new system with unique reporting and management requirements, and they are applicable across nearly any industry, with examples ranging from agriculture to logistics to consumer products.

As such, a necessary choice when deciding to incorporate IoT is whether to go with an off-the-shelf solution or to build a custom setup.

When to Build a Custom IoT Solution

Custom IoT Solutions provide advantages in flexibility and control over their off-the-shelf counterparts. Here are some cases where those advantages might make building a custom IoT solution the right choice:

Creating or Integrating a Custom Device

When working with a custom device, the ability to control the messaging capabilities, formatting, and frequency that a custom solution provides can be very useful. Additionally, setting up the cloud side of the system to work directly with the device allows for extended remote capabilities, such as Over the Air (OTA) updates to the devices and direct control of the device or device configuration.

Specific Management or Reporting Requirements

Making specific workflows or reports work with vendor systems can be a challenge. Custom solutions deliver value here by reducing or removing the dependency on external systems; a custom-built solution can be made to match the desired workflows and provide the desired data from the end device without excessive configuration.

Maintaining Future Flexibility

Custom solutions can change as the system does; if components change or new requirements come up, the solution can be updated to match. In addition to uncoupling the solution from a vendor’s roadmap, this can also facilitate the agile development of new systems by allowing solution components to evolve together.

Infrastructure and Cost Control

Custom solutions provide direct access to and control over the associated cloud resources. This provides complete control over how data is routed, stored, and secured relative to other business data, rather than depending on a third-party cloud tenant. If a business already maintains a cloud tenant, the infrastructure required for an IoT solution can frequently be added in a straightforward way. Direct access to these resources can also provide better visibility and control over recurring hosting costs, rather than this information being obscured by a license.

Advantages of Working with a Software Engineering Firm for Custom IoT Solutions

If building a custom IoT solution looks like the right option for your business, a software engineering firm can help your implementation project run smoothly and finish successfully. Working with a team of engineers practiced in refining custom specifications and implementing the necessary components brings a breadth of expertise to the build that you may not otherwise achieve.

Technology Expertise

The first advantage a firm like DMC can bring to your project is expertise in the technologies underlying IoT solutions. This includes experience designing and writing firmware for custom devices, implementing cloud architectures, and developing custom web or mobile applications. This expertise enables the team to build system components efficiently and cost-effectively and implement the communication interfaces between them. Additionally, experience in the platforms used means implementation can avoid common pitfalls.

Thorough Design Process

Another advantage of working with a software firm to build out a custom IoT solution is the thorough design process. Since the team frequently works to customer specifications, there is an established process to make sure the solution is designed to best match all of your requirements.

First, engineers will work with you to refine your requirements into specifications for user workflows and device communication. Then, the UI/UX team can develop mock-ups of the project interfaces. After review, the team can begin building the components and regularly review them with the client team to ensure alignment.

Project Management

Finally, working with a software firm also brings the advantage of a dedicated project manager and an established project management process for your implementation. The project manager is familiar with the tasks required to deliver the solution successfully and is equipped with tools to track the schedule, budget, and requirements. A standard cadence of meetings and status updates keeps you involved in the development effort, able to provide feedback, ask questions, and guide the solution over the course of development. Additionally, as the project evolves, the dedicated project manager can quickly reprioritize tasks and generate new specifications as needed.

Explore Our Work in IoT

Ready to take your Custom IoT project to the next level? Contact us today to learn more about our solutions and how we can help you achieve your goals.

The post Custom IoT Development Services appeared first on DMC, Inc..

]]>
DMC Quote Board – February 2026 https://www.dmcinfo.com/blog/41205/dmc-quote-board-february-2026/ Fri, 30 Jan 2026 13:00:00 +0000 https://www.dmcinfo.com/?p=41205 Visitors to DMC may notice our ever-changing “Quote Board,” documenting the best engineering jokes and team one-liners of the moment. Here are a few that stood out from the last month.

The post DMC Quote Board – February 2026 appeared first on DMC, Inc..

]]>
Visitors to DMC may notice our ever-changing “Quote Board,” documenting the best engineering jokes and team one-liners of the moment. Here are a few that stood out from the last month.

Quote Board February 2026

The post DMC Quote Board – February 2026 appeared first on DMC, Inc..

]]>
MagneMotion Guide Part 11: Tuning https://www.dmcinfo.com/blog/40486/magnemotion-guide-part-11-tuning/ Wed, 28 Jan 2026 13:36:00 +0000 https://www.dmcinfo.com/?p=40486 In the previous article in this series, MagneMotion Guide Part 10: Moving Track, we discussed handling moving path nodes for MagneMotion Quickstick systems. In this article, we’ll discuss the advantages of tuning your MagneMotion system and general good practices for tuning MagneMotion.  MagneMotion Guide Series What is Tuning? Tuning is the process of adjusting the parameters of your system that control the motion profiles used to turn motion commands (i.e., move […]

The post MagneMotion Guide Part 11: Tuning appeared first on DMC, Inc..

]]>
In the previous article in this series, MagneMotion Guide Part 10: Moving Track, we discussed handling moving path nodes for MagneMotion Quickstick systems. In this article, we’ll discuss the advantages of tuning your MagneMotion system and general good practices for tuning MagneMotion. 

MagneMotion Guide Series

What is Tuning?

Tuning is the process of adjusting the system parameters that control the motion profiles used to turn motion commands (e.g., “move this vehicle to this position”) into mechanical movement. In the case of MagneMotion, tuning usually involves adjusting the Control Loop Parameters in the configuration file along with other advanced parameters that govern the system’s response to vehicles being out of position. 

While some systems have automatic tuning processes, MagneMotion tuning is a very iterative process. Typically, you’ll adjust tuning parameters before running your track, make observations about the current track behavior, and then iterate on the parameters again until system performance meets application requirements. 

Benefits of Tuning

There can be several different reasons why you would want to tune your system. The first is to ensure that vehicle motion is smooth. A properly tuned system will reduce vibrations and jerky movements from vehicle motion. 

Tuning can also be useful in increasing the overall throughput of your system. Mistuned vehicles can take longer to get up to speed or might slightly overshoot their targets, causing delays as they need to correct their positions. 

Similarly, tuning can also considerably reduce the current draw and thermal load on your MagneMotion motors. When a vehicle is moving smoothly and precisely, there is less need for the motors to draw an excessive amount of current to correct the vehicle’s positioning. 

Before you begin tuning your system, it is important to consider what your priorities are in tuning. A system tuned to optimize speed and responsiveness may look different from a system tuned to reduce thermal load. 

Configuration File Adjustments

See MagneMotion Guide Part 1: Creating Configuration Files for additional details.

The relevant parameters for MagneMotion tuning are found in the motor defaults section of the track. While you can adjust these parameters on a per-motor basis, it’s usually a good idea to adjust them on a per-path basis to keep vehicle motion consistent across your track. 

MagneMotion parameters interface

Here you can set up different sets of control parameters. To make use of a new set, simply fill out the parameters and click the Enable check box. When commanding a vehicle to move (through NC Host or a controller), you can set the command to run with one of the PID sets configured here. Typically, it is a good idea to have one PID set for unloaded vehicles and another for loaded vehicles. Additional PID sets can be added to account for different load types or different types of moves along the track. 

Below is a quick description of each parameter used in a control set: 

| Parameter | Description | Increase Effect | Decrease Effect |
| --- | --- | --- | --- |
| Mass (kg) | The mass in kilograms of the vehicle. This should include the mass of the puck, the nest/fixture on the vehicle, and any load placed on the vehicle. |  |  |
| Kp | The proportional gain. This controls the amount of force applied to the vehicle in response to the position error. | Can cause overshooting. Increases system responsiveness. | Reduces overshoot. Reduces steady-state error. Slows system responsiveness. |
| Ki | The integral gain. This controls the amount of force applied based on past values of the position error. | Can cause overshooting and oscillation. Makes the system more responsive to errors over time. | Reduces overshoot and oscillation. |
| Kd | The derivative gain. This controls the amount of force applied based on the velocity error of the vehicle. | Reduces settling time. Decreases overshoot. Large values can cause stability issues. | Increases settling time. Increases overshoot. |
| Kff (%) | The feedforward scale. This controls the force used to achieve the desired acceleration based on the given mass. | Increases above 100% can cause the system to accelerate the vehicle excessively. | Decreases below 100% can cause the system to not accelerate quickly enough. |
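The way these parameters interact can be sketched as a generic PID-plus-feedforward force calculation. This is our own illustration of the textbook structure, not MagneMotion’s internal controller:

```javascript
// Generic PID + feedforward force computation (illustrative sketch, not the
// actual MagneMotion control loop). Units and structure are simplified.
function controlForce(params, state) {
  const { mass, Kp, Ki, Kd, KffPct } = params;
  const p  = Kp * state.positionError;                  // react to current error
  const i  = Ki * state.integratedError;                // react to accumulated error
  const d  = Kd * state.velocityError;                  // damp based on velocity error
  const ff = (KffPct / 100) * mass * state.targetAccel; // feedforward: F = m*a, scaled
  return p + i + d + ff;
}
```

Raising Kp grows the proportional term (faster response, more overshoot risk), while scaling Kff away from 100% over- or under-supplies the F = m*a feedforward, matching the effects described above.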

Tuning Process

Using Virtual Scope

To properly tune a system, you will need to run a vehicle around your track while assessing whether the vehicle’s behavior improves after each adjustment to the system’s control parameters. 

While you can certainly do this by just monitoring the system, MagneMotion has created a tool called Virtual Scope to help gather more specific motion data.

MagneMotion Virtual Scope tool

The Virtual Scope tool gathers data about a vehicle’s position error, velocity, and current use as it moves along a motor of your track. 

To use Virtual Scope in your tuning process, simply hit the Setup button and input the IP address of your HLC controller, the vehicle ID you plan to track, and the path and motor you are running your tuning tests over. 

Note that for a running system, it can be tricky to line up a scope capture of a specific vehicle crossing a specific motor. Make use of NC Host’s ability to track vehicle positions to set up a trace before the vehicle approaches the motor you are monitoring. 

MagneMotion interface

When a profile is captured, you can select which parameters you wish to display. You can also choose to set up a data capture to record the data from the scope into a .csv file for additional analysis. 
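Once a capture is recorded to .csv, a short script can pull summary numbers out of it. This sketch assumes a simple comma-separated layout with a `position_error` column; the actual Virtual Scope export columns may differ, so check your file’s header first:

```javascript
// Parse a (hypothetical) Virtual Scope CSV export and report the worst
// position error seen. The column layout is assumed; verify it against
// your actual export before relying on the result.
function maxPositionError(csvText) {
  const [header, ...rows] = csvText.trim().split("\n");
  const errCol = header.split(",").indexOf("position_error"); // assumed column name
  return rows
    .map((row) => Math.abs(parseFloat(row.split(",")[errCol])))
    .reduce((max, e) => Math.max(max, e), 0);
}
```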

Data Streams

The Virtual Scope tool uses data stream files to determine which data it captures from the MagneMotion motor. By default, Virtual Scope uses a data set that includes information on the vehicle position, error, and velocity.  

The Virtual Scope tool can gather additional information, such as motor temperature or power specifics, if it is loaded with different data streams via “Advanced” -> “Load Data Stream.” Rockwell does not publish these data stream files, but you can reach out to Rockwell ICT’s support team for alternative data streams if needed. 

Using NC Host

While you can certainly run your tuning by setting up a PLC program to run tests, it is often easier to use NC Host to control a single mover instead of spinning up a whole PLC program or modifying your production code. 

For more details on NC Host see MagneMotion Guide Part 2: Starting up and Commissioning a Track.

NC Host also has a tool that allows you to use a temporary set of PID parameters to reduce the number of times you’ll need to change the configuration file of your system. This means you can try out PID values without needing to restart your system. (Note that any parameters changed through NC Host will be reset the next time the system restarts or the relevant path is reset). 

Other Tuning Considerations

On top of the typical PID-based configuration adjustments, there are also several additional configuration modifications that can be made to improve your system’s behavior. 

Disable Control Thrust

If a vehicle is having trouble settling into position, the configuration file can be adjusted to disable the control thrust on a vehicle once it is within a certain tolerance of its target. This comes at a cost to how closely the vehicle holds to its destination, but it removes any oscillation or additional thrust from a vehicle that is within tolerance of its position. 

MagneMotion advanced parameters interface

Motion Limits

If you are having trouble ensuring smooth motion on your vehicles, configuring a stricter velocity or acceleration limit can prevent the vehicle from overshooting its target too aggressively, at the cost of overall system speed. Importantly, this is also the easiest way to reduce temperature issues on a motor. 

MagneMotion motor parameters interface

Arrival Tolerance 

Similarly, if a station has too large a settling time, with vehicles adjusting themselves to get into the station position, you can adjust the arrival position and velocity targets. If the vehicle tolerance on that particular path is not too strict, this reduces the delay between when a vehicle arrives at its destination and when MagneMotion indicates that the vehicle is in position. 

MagneMotion motor parameters interface

This parameter only affects when MagneMotion considers a vehicle ‘in position’ and does not change the motion profile of a vehicle. 

Host Controller Communication 

You can also potentially reduce settling time by adjusting the system’s communication with its host controller. For high throughput systems, changing the Send Vehicle Status Period or the Vehicle Records/Status Period values can reduce the delay caused while waiting for the Node Controller to tell the host controller that the vehicle has arrived. Care should be taken when adjusting these values to ensure that you are not overloading the communication resources of the Node Controller or the Host Controller. 

On a host controller, this update frequency also depends on the cycle time of the MagneMotion device handler. 

Also note that these settings are specific to systems that make use of Rockwell’s ICT library; see MagneMotion Guide Part 3: Controlling a System with a PLC.

MagneMotion ethernet/ip interface

Learn more about DMC’s MagneMotion programming expertise and contact us today to get started on your next project. 

The post MagneMotion Guide Part 11: Tuning appeared first on DMC, Inc..

]]>
Data Center Construction: How Are You Testing Your Power Systems? https://www.dmcinfo.com/blog/40950/data-center-construction-how-are-you-testing-your-power-systems/ Mon, 26 Jan 2026 14:58:08 +0000 https://www.dmcinfo.com/?p=40950 Data centers are the backbone of our digital economy. As construction projects surge to meet growing demand, a critical question often goes unasked: How are you testing your power systems, and are you doing enough?  When it comes to reliability, every component in your power infrastructure matters. UPS systems, battery backups, switchgear, and backup generators all play […]

The post Data Center Construction: How Are You Testing Your Power Systems? appeared first on DMC, Inc..

]]>
Data centers are the backbone of our digital economy. As construction projects surge to meet growing demand, a critical question often goes unasked: How are you testing your power systems, and are you doing enough? 

When it comes to reliability, every component in your power infrastructure matters. UPS systems, battery backups, switchgear, and backup generators all play a role in keeping operations online. But here’s the catch: testing individual components isn’t the same as testing the entire system. Each piece can pass its own validation, yet the integrated system may still fail under real-world conditions. Failure modes often emerge only when these components interact under load, during transitions, or in response to unexpected events.

Why Testing Matters 

Downtime in a data center is costly and sometimes catastrophic. Power systems are complex, and their performance under stress determines whether your facility can deliver on its uptime promises. Yet many projects rely on manual checks or incomplete testing strategies that leave gaps.  

Consider these questions: 

  • Are your UPS systems validated for transient response during failover? 
  • Has your switchgear been tested under realistic load conditions? 
  • Do your backup generators respond correctly during commissioning? 
  • Is your battery management system (BMS) ready for real-world charge/discharge cycles? 

If you’re not confident in the answers, you’re not alone. Many teams assume that OEM testing or basic commissioning is enough. It probably isn’t. 

Where Testing Should Happen 

Comprehensive testing should occur at multiple stages: 

  • Factory Acceptance Testing (FAT) – Validate performance before equipment leaves the OEM 
  • On-Site Commissioning – Confirm integrated system behavior under real-world conditions
  • Production Testing for OEMs – Ensure every unit meets specifications before shipment

Key subsystems to focus on: 

  • UPS and Battery Backup Systems – Runtime validation, transient response, and BMS functionality 
  • Switchgear and Power Distribution – Failover logic and power quality under dynamic loads
  • Backup Generators – Load simulation and response monitoring during commissioning 
  • Energy Storage Systems – Charge/discharge cycles and safety interlocks
Diagram of data center power system testing

The Better Way 

DMC brings decades of experience testing DC power systems for the automotive and aerospace industries, where reliability is non-negotiable. We apply that expertise to data center DC and AC power infrastructure with automated test solutions that: 

  • Execute custom or standard test scripts
  • Control load banks to simulate real-world conditions
  • Monitor power quality and transient response
  • Log and report results for compliance and traceability
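At its core, an automated sequence like this reduces to scripted steps: command a load profile, sample measurements, check limits, and log everything. Here is a generic sketch of that loop; it is not DMC’s actual framework, and the instrument I/O is stubbed out:

```javascript
// Generic automated power-test step runner (illustrative; instrument I/O is
// supplied by the caller). Each step drives the load bank, samples the bus,
// checks limits, and appends a record for traceability.
function runTest(steps, instruments, log) {
  let pass = true;
  for (const step of steps) {
    instruments.setLoadKw(step.loadKw);          // command the load bank
    const v = instruments.readBusVoltage();      // sample power quality
    const ok = v >= step.minV && v <= step.maxV; // check against limits
    log.push({ step: step.name, loadKw: step.loadKw, voltage: v, ok });
    pass = pass && ok;
  }
  return pass;
}
```

A real station would swap the stubs for drivers talking to the load bank and power analyzer, and write the log to a database for compliance reporting.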

DMC excels at thinking outside the box. It’s what sets us apart. We don’t just deliver cookie-cutter solutions; we design and integrate unique systems tailored to your requirements and challenges. Whether you need a turnkey test station or a creative approach to integrate with existing equipment, we’ll find a way to make it work. 

Component-Level Testing 

Before you can trust the entire system, you need confidence in its building blocks. Component-level testing ensures that each UPS, battery module, switchgear panel, and generator meets its specifications under controlled conditions. This step verifies: 

  • Electrical performance and safety compliance 
  • Firmware and control logic functionality
  • Proper response to simulated faults and load changes

Component testing is essential for quality assurance and regulatory compliance, but it’s only the first step. Even when every component passes, integration can introduce new failure modes. That’s why system-level testing is equally critical. 

System-Level Testing 

Component-level testing ensures each part works as intended. But data centers operate as complex systems, and failures often occur at the interfaces: 

  • UPS and generators may not synchronize during transfer 
  • Switchgear logic may falter under simultaneous load changes 
  • Battery systems may behave unpredictably during extended outages

System-level testing replicates these scenarios before they happen in production, saving time, money, and reputation. 

Further Reading 

Let’s Start the Conversation 

Data center reliability starts with rigorous testing. If you’re asking: 

  • Who’s doing this testing? 
  • How are we testing? 
  • Is there a better way? 

The answer is yes; there is a better way. Let’s talk about how DMC can help you validate every critical subsystem and the entire power system before it goes live. 

Contact us today and let’s discuss your data center power component and system-level testing challenges. 

The post Data Center Construction: How Are You Testing Your Power Systems? appeared first on DMC, Inc..

]]>
DMC’s Path to Cybersecurity Maturity Model Certification (CMMC) Level 2 Compliance  https://www.dmcinfo.com/blog/41016/dmcs-path-to-cybersecurity-maturity-model-certification-cmmc-level-2-compliance/ Thu, 22 Jan 2026 13:00:00 +0000 https://www.dmcinfo.com/?p=41016 DMC is committed to and working toward achieving CMMC Level 2 compliance by the end of 2026. This path reflects our ongoing investment in safeguarding controlled information and supporting our aerospace and defense clients with confidence. DMC has been actively developing this initiative since April 2025 and working with the Greentree Group to support our compliance and audit readiness preparation.  What is CMMC?  The Cybersecurity Maturity Model […]

The post DMC’s Path to Cybersecurity Maturity Model Certification (CMMC) Level 2 Compliance  appeared first on DMC, Inc..

]]>
DMC is committed to and working toward achieving CMMC Level 2 compliance by the end of 2026. This path reflects our ongoing investment in safeguarding controlled information and supporting our aerospace and defense clients with confidence. DMC has been actively developing this initiative since April 2025 and working with the Greentree Group to support our compliance and audit readiness preparation. 

What is CMMC? 

The Cybersecurity Maturity Model Certification (CMMC) is a DoD-developed framework designed to assess and enhance the cybersecurity posture of organizations that handle sensitive government information. Unlike self-attestation models of the past, CMMC introduces standardized practices, processes, and third-party assessments to ensure consistent protection across all contractors and subcontractors.

Understanding CMMC Level 2

CMMC Level 2 is designed for organizations that handle Controlled Unclassified Information (CUI). It requires full implementation of the 110 security controls defined in NIST SP 800-171, covering areas such as: 

  • Access control 
  • Incident response 
  • Risk management 
  • System and communications protection 
  • Configuration management 
  • Audit and accountability 

For more information, visit the Department of Defense website

DMC’s Commitment to Cybersecurity

DMC has long prioritized security, quality, and operational excellence across our engineering and technology services. Our ongoing efforts toward CMMC Level 2 compliance build on that foundation, strengthening internal controls, policies, and processes to align with DoD expectations. 

By working toward CMMC Level 2 by year’s end, DMC is positioning itself to continue supporting customers in regulated industries while reinforcing our commitment to protecting the data entrusted to us. 

Ready to take your project to the next level? Contact us today to learn more about our solutions and how we can help you achieve your goals. 

The post DMC’s Path to Cybersecurity Maturity Model Certification (CMMC) Level 2 Compliance  appeared first on DMC, Inc..

]]>
Siemens S7-1200 Web Server https://www.dmcinfo.com/blog/40637/siemens-s7-1200-web-server/ Mon, 19 Jan 2026 15:22:54 +0000 https://www.dmcinfo.com/?p=40637 In the 13 years since Tim wrote his blog post on the Siemens embedded web server on S7-1200 and S7-1500 PLCs, a lot has changed regarding PLC security and access control. In this blog post, we will be walking through creating a user-defined web page using the latest security features in TIA Portal V20 and […]

The post Siemens S7-1200 Web Server appeared first on DMC, Inc..

]]>
In the 13 years since Tim wrote his blog post on the Siemens embedded web server on S7-1200 and S7-1500 PLCs, a lot has changed regarding PLC security and access control. In this blog post, we will be walking through creating a user-defined web page using the latest security features in TIA Portal V20 and firmware v4.7.

Project Setup

We will start by creating a new project in TIA Portal, navigating to the “Devices & Networks” page, and adding our chosen PLC from the catalog. For this example, we will be using a `6ES7 212-1BE40-0XB0`, more commonly known as a 1212C AC/DC/RLY.

Once the PLC is added, a pop-up will appear asking you to configure PLC security settings for the project. If you clicked cancel, or had previously checked the box to prevent this pop-up from being shown, you can access it by clicking on the PLC in the Devices & Networks page (the actual PLC, not the gray box around it), selecting the Properties and General tabs, expanding the Protection & Security dropdown, and clicking the Start security wizard button.

PLC CPU

The first step is creating a password to protect the PLC project and data on the PLC. Both DMC and Siemens recommend setting this password to improve OT security, and since we will be enabling a web server on this device (while presumably connecting it to a larger IT network), it is a critical step towards protecting the PLC. It is important to note that once applied, this password cannot be removed – make sure it is documented in a secure place like a password manager available to everyone on your team. A Post-it on your monitor is not a great place to store this password.

PLC

The next security setting is the mode for PG/PC and HMI communication. Unless you know you will be communicating with a legacy device, you should leave this at the default option to only allow secure communication.

PLC HMI Communication

The next step is setting up access control. This should be enabled to restrict who can perform actions on the PLC, such as changing the run mode or accessing our user pages.

The “access levels” option is a legacy option that was the only security option before V20. If setting up a new project, it is recommended to leave this unchecked and use the new User Management and Access Control (UMAC) feature.

PLC access protection

In the next step, we will create one or more administrator users. This user will be able to perform all actions on a PLC that may be restricted, such as changing the mode between run/stop, updating safety settings (if available on your selected PLC model), changing drive parameters, or downloading new programs. It is important to store the login information in a secure place, and having a backup administrator user is a good idea.

PLC Administrator Setup

The next security setting is selecting whether anonymous access should be enabled. This could be useful to provide read-only access to any user without logging in, but it does reduce security slightly. For this demo, we will leave it disabled.

PLC Anonymous Access

The next page will need no changes as we are not using access levels, and the last page will provide an overview of the settings you have chosen.

PLC Overview

At this point, we can click Finish and get started changing PLC settings for the web server.

Getting Started

Back in Devices & Networks, open up the PLC properties. Each section below will guide you through the settings that need to be changed specifically for the web server.

General

  • Here we can set the PLC name, plant information, and other designators.

PROFINET Interface [X1]

  • Here, we will need to set an IP address and Subnet Mask.
  • If this device will be accessible from an IT network, make sure the router address is set so that the PLC can be accessed outside the immediate local network.
  • If the PLC will be performing any tasks where the time of day needs to be known, or if you would like logs to have accurate timestamps, enable the time-of-day synchronization and enter NTP addresses. For this example, we will use several NIST servers located in different geographic regions.
NIST time-of-day synchronization
  • At the bottom of this tab, make sure to enable the web server on this interface (we will enable the web server module later).
Web Server Access
  • Note: if your PLC has multiple interfaces, one may be set up for connecting to local devices such as drives, remote IO, or local HMIs, and the other may be configured for connection to a larger IT network. The router address, NTP server, and web server access would only be needed on the interface used for IT network access.

Web Server

  • Check the box to enable the web server on all modules of the device and to permit access with HTTPS only.
Web Server general
  • Certificate Type: Hardware-generated
  • For this blog post, we will skip the differences between the two certificate types.
  • Further updates will be made here once we create an actual web page.

Time of Day

  • Set the time zone and daylight saving time settings according to your location.

Adding Users and Web Server User Roles

Before we download and test our setup so far, we will need to configure user roles to allow access to the web server and create some non-administrator users who can access it.

User Roles

In the project tree, open Security settings > Users and roles and then select the Roles tab.

We will create two new roles: the first is a Web Server Admin, who can perform all tasks through the web server. The existing administrator users will be added to this role.

PLC security settings user-roles

The second role will be a Web Server User role with permission to open only the user-defined pages we create.

Web server user role

Users

At this point, we can go back to the Users tab. The administrator users created during the project security setup should be added to the Web Server Admin role. We will “forget” to add the second administrator user to demonstrate what the web server looks like if these permissions are not given to the administrator.

PLC users

Clicking <Add a new user> lets you create a new non-administrator user and assign it only the “Web Server User” role we created earlier.

Creating a new non administrator user

Testing our Setup So Far

To confirm that all our security settings are working, we can download the hardware and software to the PLC and try accessing its webpage.

By entering the previously set IP address of our PLC in a browser window, we should be greeted by a Siemens landing page, and we can press Enter to view our PLC’s webpage. Make sure to enter `https://` before the IP address, and click your browser’s “accept the risk and continue” button if you’re prompted with a certificate warning.
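If you would rather check reachability from a script than a browser, the sketch below does the equivalent of the “accept the risk and continue” step by disabling certificate verification. The address `192.168.0.1` is a placeholder, not the address used in this walkthrough; substitute the IP you assigned under PROFINET Interface [X1].

```python
# Hypothetical reachability check from a PC on the PLC's subnet.
import ssl
import urllib.request

# The PLC's hardware-generated certificate is self-signed, so a default
# SSL context would reject it. Disabling verification here mirrors the
# browser's "accept the risk and continue" step (fine for a bench test,
# not a pattern to ship in production tooling).
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

try:
    with urllib.request.urlopen("https://192.168.0.1/", context=ctx, timeout=5) as resp:
        print("Web server reachable, HTTP status:", resp.status)
except OSError as exc:
    print("PLC not reachable from this machine:", exc)
```

If the request fails, confirm the web server was enabled on the interface you are connecting through and that your PC is on the same subnet (or that the router address is set).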

PLC web page

By logging in with our first administrator login (dmcAdmin1) we can see that we can access all pages as well as change the run mode of the PLC.

PLC administration login

Logging out and back in as our “incorrectly configured” admin account shows that the user can log in, but because none of the web server runtime rights were granted, the account cannot perform any of the actions an authenticated user normally could.

Siemens S7 1200

Finally, logging in as the standard user with permission to access only the User Pages shows that the user can open the user-defined pages tab but is unable to perform administrator actions such as switching the PLC operating mode or downloading or backing up the program.

S7 1200 Station standard user

Optimization

We have now shown that the Siemens PLC web server is enabled and secured, but if you spend any time navigating even the existing pages, you may notice that the web pages feel sluggish and slow to respond. There are two options that can be changed to speed up the pages.

Communication Load

By default, the PLC is set to spend no more than 20% of its CPU cycles performing communications. For some use cases, you may not have a large or complicated PLC project and would prefer a more responsive web server, at the cost of a slower scan time. By navigating to the PLC properties and increasing the limit from 20% to 50%, the web pages will become noticeably more responsive. During testing with one of my customers’ projects, web page loading time was approximately halved, while scan times went from 3–5 ms to 4–11 ms.

PLC communication load

Enabling HTTP Communications

During the web server setup process, we checked a box to enable only HTTPS (secured and encrypted) communications. Doing so requires the PLC to spend significantly more of its Communication Load time encrypting and decrypting web traffic and less time transferring web pages. Disabling this option will make the web pages load much faster. If your PLC is properly secured behind a VPN or is fully disconnected from the internet, then the decrease in security is not too severe, but keep in mind that anyone with access to the network could theoretically read the unencrypted HTTP information as it is transferred. This means that login information and everything displayed will be transferred as unencrypted plain text.
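To make the plain-text risk concrete, the sketch below builds what a login POST looks like on the wire over unencrypted HTTP. The path and field names are hypothetical, not the S7-1200’s actual login form; the point is that the entire request, password included, travels as readable text.

```python
# Illustrative only: hypothetical form fields, not the S7-1200's real ones.
body = "Login=dmcUser1&Password=secret"
request = (
    "POST /FormLogin HTTP/1.1\r\n"
    "Host: 192.168.0.1\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n"
    + body
)
# Anyone capturing packets on the network segment sees exactly these bytes:
print(request)
```

With HTTPS enabled, the same request is wrapped in TLS before it leaves the browser, which is why the PLC pays a CPU cost for it.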

Enabling HTTP communications

Changing this setting had the most significant impact on page loading time, making the PLC-hosted web page feel almost as responsive as any site on the web, but it also carries the greatest cost to security.

Next Steps

At this point, our PLC project and user permissions are correctly configured to enable user-defined pages. For more details on writing custom HTML pages that interface with the PLC program, check out Tim’s excellent write-up, which is mentioned at the start of this post.
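As a preview of where those custom pages end up, here is a minimal sketch of a user-defined page. The tag names (`"motorSpeed"`, `"motorSpeedSetpoint"`) are hypothetical placeholders; the `:="Tag":` read syntax and the `AWP_In_Variable` write declaration follow Siemens’ AWP commands for user-defined pages, which Tim’s post covers in depth.

```html
<!-- Declare a tag the page is allowed to write back to the PLC -->
<!-- AWP_In_Variable Name='"motorSpeedSetpoint"' -->
<!DOCTYPE html>
<html>
  <head>
    <title>Machine Status</title>
  </head>
  <body>
    <h1>Machine Status</h1>
    <!-- Read access: the web server substitutes the live tag value here -->
    <p>Current speed: :="motorSpeed": RPM</p>
    <!-- Write access: the form field name must match the declared tag -->
    <form method="post" action="">
      <input type="text" name='"motorSpeedSetpoint"'>
      <input type="submit" value="Set speed">
    </form>
  </body>
</html>
```

Pages like this are compiled into the PLC program with the WWW instruction and served under the user-defined pages tab we granted access to above.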

Siemens S7 1200 Station

Ready to take your Automation project to the next level? Contact us today to learn more about our solutions and how we can help you achieve your goals.

The post Siemens S7-1200 Web Server appeared first on DMC, Inc..

]]>