IoT Archives | DMC, Inc. https://www.dmcinfo.com/blog/category/application-development/iot/ Thu, 29 Jan 2026 21:33:01 +0000

Custom IoT Development Services https://www.dmcinfo.com/blog/41266/custom-iot-development-services/ Mon, 02 Feb 2026 13:00:00 +0000

The Internet of Things (IoT) is a rapidly growing and evolving technical niche. Driven by the convenience and transparency gains associated with linking a physical object to a digital presence, more businesses are exploring IoT integration as part of their systems. Choosing a platform for an IoT solution is an important part of the process, with many options and tradeoffs to consider. Here, we will be discussing when a custom IoT solution is a good choice.

IoT Solutions Overview

IoT solutions are typically composed of a fleet of field devices, a cloud-hosted hub with which the devices communicate, and a user portal providing visualization and control.

The field devices can be a wide variety of “things,” including single-purpose sensors, consumer electronics, manufacturing equipment, and vehicles. Each device is provided with a way to uniquely identify itself within the fleet and a protocol for communicating with the hub.

The hub sends messages to and receives messages from the devices, and processes and stores data for the portal to consume.

The portal provides an interface for a user to view, handle, and react to the data provided by the field devices and may take the form of a web interface, a mobile application, or both.

These solutions provide value for their end users through increased data availability and transparency, as well as convenient device management and control. On the reporting side, messaging from the devices can relay status, utilization, and data for aggregate reporting across the system. On the management side, the portal provides an easy way to view information about the devices, download updates to the devices, or control the configuration in the field.
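As a concrete illustration, a field device in such a system might periodically publish a status message like the one below for the hub to ingest and the portal to aggregate. The field names here are invented for this sketch, not a standard:

```javascript
// Hypothetical device-to-cloud status message (field names are illustrative only).
// Each device identifies itself with a unique ID and reports status,
// utilization, and sensor readings for aggregate reporting.
function buildStatusMessage(deviceId, utilizationPct, readings) {
  return JSON.stringify({
    deviceId,                          // unique identity within the fleet
    timestamp: new Date().toISOString(),
    status: "online",
    utilizationPct,                    // e.g. duty cycle over the last interval
    readings,                          // sensor data, e.g. { tempC: 21.5 }
  });
}

const statusMsg = buildStatusMessage("pump-017", 42, { tempC: 21.5 });
```

The hub can then store each message keyed by `deviceId` and timestamp, which is what makes fleet-wide reporting straightforward.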

Custom vs. Off-the-Shelf

There are existing solutions available for purchase for a number of use cases that benefit from IoT integration; the most familiar of these might be consumer systems like thermostats or security systems. Other places you might see an off-the-shelf IoT solution could be inventory tracking for retail, or “smart building” solutions that monitor energy use and HVAC conditions.

The alternative to an off-the-shelf product is a custom-built solution, where the user application, cloud infrastructure, devices, or all three are self-managed. These solutions might add visibility to an existing process or define a new system with unique reporting and management requirements, and they are applicable across industries, with examples ranging from agriculture to logistics to consumer products.

As such, a necessary choice when deciding to incorporate IoT is whether to go with an off-the-shelf solution or to build a custom setup.

When to Build a Custom IoT Solution

Custom IoT solutions provide advantages in flexibility and control over their off-the-shelf counterparts. Here are some cases where those advantages might make building a custom IoT solution the right choice:

Creating or Integrating a Custom Device

When working with a custom device, the ability to control the messaging capabilities, formatting, and frequency that a custom solution provides can be very useful. Additionally, setting up the cloud side of the system to work directly with the device allows for extended remote capabilities, such as Over the Air (OTA) updates to the devices and direct control of the device or device configuration.
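Going the other direction, direct device control usually takes the form of a cloud-to-device message carrying a desired configuration, which the device acknowledges. The message shape and field names below are invented for illustration; real platforms (Azure IoT Hub device twins, AWS IoT shadows, etc.) provide their own mechanisms for this pattern:

```javascript
// Hypothetical cloud-to-device configuration command and the device-side
// handler that applies it. Shape and names are illustrative only.
function applyConfigCommand(currentConfig, command) {
  if (command.type !== "setConfig") {
    return { config: currentConfig, ack: false };   // unknown command: no-op
  }
  // Merge the desired settings over the current configuration.
  return { config: { ...currentConfig, ...command.desired }, ack: true };
}

const result = applyConfigCommand(
  { reportIntervalSec: 60, alarmThreshold: 80 },
  { type: "setConfig", desired: { reportIntervalSec: 30 } }
);
```

The merge-over-current approach means the cloud only has to send the settings that changed, which keeps command payloads small over constrained links.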

Specific Management or Reporting Requirements

Making specific workflows or reports work with vendor systems can be a challenge. Custom solutions deliver value here by reducing or removing the dependency on external systems: if the solution is built custom, it can be made to match the desired workflows and provide the desired data from the end device without excessive configuration.

Maintaining Future Flexibility

Custom solutions can change as the system does; if components change or new requirements come up, the solution can be updated to match. In addition to uncoupling the solution from a vendor’s roadmap, this can also facilitate the agile development of new systems by allowing solution components to evolve together.

Infrastructure and Cost Control

Custom solutions provide direct access to and control over the associated cloud resources. This provides complete control over how data is routed, stored, and secured relative to other business data, rather than depending on a third-party cloud tenant. If a business already maintains a cloud tenant, the infrastructure required for an IoT solution can frequently be added in a straightforward way. Direct access to these resources can also provide better visibility and control over recurring hosting costs, rather than this information being obscured by a license.

Advantages of Working with a Software Engineering Firm for Custom IoT Solutions

If building a custom IoT solution looks like the right option for your business, a software engineering firm can help your implementation project run smoothly and be completed successfully. Working with a team of engineers experienced in refining specifications and implementing the necessary components brings a breadth of expertise to the build that you may not otherwise achieve.

Technology Expertise

The first advantage a firm like DMC can bring to your project is expertise in the technologies underlying IoT solutions. This includes experience designing and writing firmware for custom devices, implementing cloud architectures, and developing custom web or mobile applications. This expertise enables the team to build system components efficiently and cost-effectively and implement the communication interfaces between them. Additionally, experience in the platforms used means implementation can avoid common pitfalls.

Thorough Design Process

Another advantage of working with a software firm to build out a custom IoT solution is the thorough design process. Since the team frequently works to customer specifications, there is an established process to make sure the solution is designed to best match all of your requirements.

First, engineers will work with you to refine your requirements into specifications for user workflows and device communication. Then, the UI/UX team can develop mock-ups of the project interfaces. After review, the team can begin building the components and regularly review them with the client team to ensure alignment.

Project Management

Finally, working with a software firm also brings the advantage of a dedicated project manager and an established project management process for your implementation. The project manager is familiar with the tasks required to deliver the solution successfully and is equipped with tools to track the schedule, budget, and requirements. A standard cadence of meetings and status updates keeps you involved throughout development: you can provide feedback, ask questions, and guide the solution as it is built. Additionally, as the project evolves, the dedicated project manager can quickly reprioritize tasks and generate new specifications as needed.

Explore Our Work in IoT

Ready to take your Custom IoT project to the next level? Contact us today to learn more about our solutions and how we can help you achieve your goals.

The post Custom IoT Development Services appeared first on DMC, Inc..

Updating Microsoft Defender for IoT https://www.dmcinfo.com/blog/15825/updating-microsoft-defender-for-iot/ Tue, 26 Nov 2024 17:01:50 +0000

Microsoft Defender for IoT is a powerful security tool that can help protect your IoT/OT environment. Almost every new update improves detection, brings new features to help protect your systems, and fixes issues from previous releases, so you need to keep your sensors up to date. Since there are multiple deployment options, including cloud, on-premises, and hybrid networks, your update options might also differ.

How to Update Defender for IoT

1. Navigate to portal.azure.com and search for Microsoft Defender for IoT. 

Microsoft Defender for IoT welcome screen

2. Navigate to “Sites and Sensors.” 

3. In the “Site and Sensors” list, choose the sensor you want to update. 

Microsoft Defender for IoT sites and sensors list

4. You’ll have two options: “Remote update” (push an online upgrade) or “Download upgrade files” (manually upgrade from the local network). 

Microsoft Defender for IoT upgrade options

5. In our example, we will first update locally to 22.3.10, then push an online update to 24.1.3.

6.1.1. If you select “Download upgrade files,” you need to choose your sensor version (in our case, 22.3.9) and indicate whether you have an on-premises management console.

Microsoft Defender for IoT download upgrade

6.1.2. From the “Available versions” list, choose the version you want to update to. If you need to update to an older version, choose “Show more.”

Microsoft Defender for IoT versions

6.1.3. Navigate to your Defender for IoT internal IP address and log in with your CyberX credentials.

Microsoft Defender for Iot sensor sign in
Microsoft Defender for IoT dashboard

6.1.4. Navigate to System Settings > Software Update > Upload File. Choose the upgrade file you previously downloaded and click Open. When it finishes uploading, it will display “Status: Updated successfully (Agent will reboot in 30 seconds)." Note: in our case, this took about five minutes.

6.1.5. When it’s done, you can see that the version of the system changed.

Microsoft Defender for IoT new version
Microsoft Defender for IoT system settings
Microsoft Defender for IoT downloads
Microsoft Defender for IoT software update

6.2.1. For a remote update, choose “Step one: Send package to sensor.”

Microsoft Defender for IoT send package

6.2.2. Choose the version you want to install (the latest available version will display). If you need an older version, click on “Show more” and choose the version you prefer to install. Then, click on “Send package.”

Microsoft Defender for IoT install version

6.2.3. You should see the status under the “Sensor version” tab.

Microsoft defender for iot sensor version

6.2.4. When it’s done, the status will change to “Ready to update."

microsoft defender for iot ready to update

6.2.5. Navigate to “Remote update,” choose “Step two: Update sensor,” click “Update now,” and then click “Confirm update.”

microsoft defender for iot confirm update
microsoft defender for iot confirm update

6.2.6. The “Sensor version" will change to “Installing."

microsoft defender for iot installing

Learn more about DMC's IoT expertise and contact us for your next project. 

The post Updating Microsoft Defender for IoT appeared first on DMC, Inc..

Exploring ThingsBoard: An IoT Platform for Your Next Project https://www.dmcinfo.com/blog/16032/exploring-thingsboard-an-iot-platform-for-your-next-project/ Tue, 20 Aug 2024 16:23:30 +0000

The Internet of Things (IoT) is transforming industries by enabling smarter, data-driven decisions. To fully harness the power of IoT, you need the right platform—and that’s where ThingsBoard comes in. As a comprehensive IoT platform, ThingsBoard excels in device management, data visualization, and more, making it an ideal choice for various software development projects.

At DMC, we offer a wide range of software consulting services, and our experience with ThingsBoard is just one part of our extensive toolkit. Whether you need help with IoT solutions, custom software development, or anything in between, we’ve got you covered. In this post, we’ll explore the capabilities of ThingsBoard and how you can leverage it to build a professional IoT solution.

What is ThingsBoard?

ThingsBoard is an IoT platform that stands out for its versatility in device management, data visualization, and customer management. These features are all available out of the box and can be set up with minimal overhead. It supports multiple device protocols, including MQTT, CoAP, HTTP, and others, making it compatible with a wide array of hardware and software systems.
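As an example of that protocol support, ThingsBoard’s MQTT device API accepts telemetry published as a JSON payload to the `v1/devices/me/telemetry` topic, authenticating with the device’s access token as the MQTT username. The sketch below only builds the topic and payload; the actual publish (shown in a comment) would use an MQTT client library, and the sensor names are invented:

```javascript
// Build a ThingsBoard-style telemetry publish: topic plus JSON payload.
// Sensor names here are illustrative.
function buildTelemetryPublish(values) {
  return {
    topic: "v1/devices/me/telemetry",   // ThingsBoard device telemetry topic
    payload: JSON.stringify(values),    // e.g. {"vacuumPressureKpa":-55.2}
  };
}

const pub = buildTelemetryPublish({ vacuumPressureKpa: -55.2, pumpRunning: true });
// With an MQTT client (e.g. the "mqtt" npm package) this would be sent as:
//   client.publish(pub.topic, pub.payload)
// after connecting with the device access token as the MQTT username.
```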

The platform offers flexible hosting options—whether you prefer cloud-based solutions on Azure or AWS, or a self-hosted environment. This flexibility allows you to scale your IoT projects as needed, making ThingsBoard a strong choice for businesses of all sizes.

How DMC Leverages ThingsBoard for IoT Success

At DMC, our software consulting services encompass a wide range of technologies, and ThingsBoard is one of the many tools we use to deliver top-notch IoT solutions.

AirVac Vacuum Sewer Systems

For AirVac, a subsidiary of Aqseptance Group, we utilized ThingsBoard to develop a custom IoT dashboard for managing vacuum sewer systems. The project involved real-time telemetry data and a detailed vacuum station dashboard, demonstrating ThingsBoard’s robust visualization capabilities.

The Device Map Dashboard allows an operator to easily view all their deployed devices and any active alarms or relevant telemetry on them.

Live telemetry plots with adjustable time ranges are incredibly simple to implement in ThingsBoard, as shown below.

More traditional SCADA-style dashboards can be implemented as well, as shown below.

Why Choose ThingsBoard for Your IoT and Software Development Needs

Data Visualization

One of the key strengths of ThingsBoard is its powerful data visualization tools. The platform’s built-in widgets and customizable dashboards allow for real-time monitoring and analysis, making it easier to gain actionable insights from your IoT data. Check out the live demo to see ThingsBoard in action (see it in Dark Mode by clicking the icon in the top right).

Asset and Device Management

ThingsBoard makes asset management straightforward with its built-in user access controls. Whether managing a handful of devices or thousands, the platform’s scalable architecture ensures that your IoT solution grows alongside your business. 

Alarm and Notification Handling

ThingsBoard’s alarm handling features are another highlight. With trigger-based alarms linked to device telemetry, you can integrate notifications via services like Twilio to stay informed about critical events in real-time.

Partner with DMC for Your Next Software Project

Whether you’re exploring the potential of IoT with ThingsBoard or need a partner for a complex software development project, DMC is here to help. Our team of experts is ready to guide you through every step of your project, ensuring that you achieve your goals efficiently and effectively.

If you’re searching for a reliable partner to implement an IoT solution or any other software project, look no further. Let’s collaborate to bring your vision to life with a platform and a partner that understands the full spectrum of software development needs.

Learn more about DMC’s IoT expertise and contact us for your next project. 

The post Exploring ThingsBoard: An IoT Platform for Your Next Project appeared first on DMC, Inc..

An Introduction to Node-Red: Processing and Sending PLC Data to the Cloud https://www.dmcinfo.com/blog/17181/an-introduction-to-node-red-processing-and-sending-plc-data-to-the-cloud/ Thu, 05 Oct 2023 18:26:04 +0000

In this blog, I'd like to introduce a prominent tool in the realm of Industrial IoT applications: Node-Red. In a sentence, Node-Red is a graphical development tool that is quickly becoming the industry standard for IIoT applications. It's called a "graphical development tool" because you are essentially writing JavaScript code with nodes instead of text; each node is a visual programming element. Its capabilities for PLC data processing are broad but, fundamentally, the software is very straightforward to use.

Before we discuss the functionalities of Node-Red, it's important to address one question: when would I use Node-Red? Node-Red is most commonly used in the following cases:

  1. A client wants to store PLC data in the cloud. This could mean logging data through Azure IoT Hub into a database and then using that data to generate Power BI models.
  2. A client wants to be remotely alerted when something in the PLC logic occurs. Perhaps they want to be notified when a temperature sensor reads a certain value and take action accordingly.
  3. Both of the above!

There are edge cases where Node-Red is used to remotely control PLC tags, but this is typically not recommended. In short, any time you have PLC data that you want to process and send to the cloud (whether to store it in a database or notify you directly), Node-Red will do the job.

It's important to note that Node-Red needs some hardware to connect to your PLC. This could be a PC or IoT device that is connected to your network. In my case, I used a Siemens IoT2050 Advanced that came with Node-Red pre-installed.

The overall layout of Node-Red is relatively simple. On the left, you have your toolbox of "nodes". Each node provides different functionality, and there is a large library of community-made nodes that you can import using the "manage palette" option in settings.

In the center, you have your working area. This is where you will drag in nodes to write your processing logic. Notice the tabs at the top of the working area; each tab is called a "flow". These are like pages in an Excel sheet or function blocks in a PLC program. Each flow can store tag data in its own memory, like the internal memory of a PLC function block.

The various icons on the top-right corner provide you with a variety of details about your project, but the most important one to have open is the debug window. The debug window acts similarly to the console window in any programming environment. This window is where you will see the outputs of your code. You can locate this by navigating to the beetle icon.

The last important feature of the Node-Red layout is the big red "Deploy" button. This runs your code. Node-Red behaves similarly to a PLC; once you deploy your logic, it will run continuously until you re-deploy the program with any changes.

Debugging: Inject Node & Debug Node

Before we can discuss some basic Node-Red logic, we should understand two essential debugging/early development nodes: Inject and Debug.

Node-Red logic is initiated by a message being sent. In practice, this might be a PLC Boolean flipping to "True". For troubleshooting purposes, it's useful to be able to inject a message whenever you want to initiate a flow. This is where the inject node becomes crucial for initial logic development.

While the inject node is essential for starting a flow, the debug node is essential for observing the output of a flow. It would be impossible to troubleshoot without knowing what your output is. Wiring your node logic into a debug node sends the message payload into the debug window. Any node's output can be wired into a debug node. This gives you the ability to observe your message payload at every stage of your processing logic.

Message Structure

There's one more topic I'd like to discuss before we get into some basic logic examples. I believe it's important to understand the structure of Node-Red messages. In the previous paragraph, I mentioned that a Node-Red flow could be triggered by a PLC Boolean flipping to TRUE. In Node-Red, the message would consist of two parts. The first is the "topic". When you configure your PLC tag import node, you'll assign a label for the PLC tag. Whatever you choose to label your tag will become the message topic.

The second part of the message is the "payload". This is the contents of the message. For a Boolean, this would be TRUE or FALSE. Node-Red message payloads can come in many forms. For more complex payloads, the payload is often a JSON object or JavaScript string. This message structure is important when trying to do message customization.

Each Node-Red message is a JavaScript "msg" object. Within various nodes, you'll frequently see "msg.payload" being referenced and/or altered. This should make some sense intuitively. The important contents of the PLC tag will live exclusively in the "payload" key of the "msg" object. Most data-processing logic will, then, deal exclusively with the message payload. So why do we need this "topic" element? Message topics can be very useful in data filtering contexts, and I will give you a simple example of such contexts in the following section.
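To make the structure concrete, here is the msg object from the Boolean example as plain JavaScript; the tag label "MotorRunning" is an invented example:

```javascript
// A Node-Red message is a plain JavaScript object. "topic" carries the
// label you assigned to the PLC tag; "payload" carries its value.
const msg = {
  topic: "MotorRunning",   // label assigned in the PLC tag import node (example name)
  payload: true,           // the Boolean value read from the PLC
};

// Inside a function node, processing logic reads and writes msg.payload:
msg.payload = msg.payload ? "running" : "stopped";
```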

Data Filtering: Switch Node

Now, we can finally get into some basic Node-Red logic.

The most basic level of data processing is filtering. In the first rung, I'm injecting the integer 3 to initiate my flow. The yellow node you see is a "switch" node, which allows you to filter data based on conditions. In this case, I'm only allowing the payload to continue through the switch node if its value is at least 1000. Clearly, 3 does not meet the condition, so the debug node outputs nothing to the debug window.

The above image shows the configuration of the switch node. Note how I'm checking the "msg.payload" property against the condition ">= 1000". The second and third rungs of the data filtering logic add an element of message topic filtering: in those examples, I only want data with the topic "RPM" to pass, so I have added a switch node in series that is configured to check whether "msg.topic" equals "RPM".
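Expressed as plain JavaScript rather than switch-node configuration, the two filters amount to the following checks:

```javascript
// Equivalent of the switch-node logic: pass a message on only if the
// payload is at least 1000, and (in the later rungs) only if the topic
// equals "RPM".
function passesValueFilter(msg) {
  return msg.payload >= 1000;
}

function passesTopicFilter(msg) {
  return msg.topic === "RPM";
}

// An injected payload of 3 is dropped; an RPM reading of 1500 passes both.
const dropped = passesValueFilter({ topic: "RPM", payload: 3 });
const passed  = passesValueFilter({ topic: "RPM", payload: 1500 })
             && passesTopicFilter({ topic: "RPM", payload: 1500 });
```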

Payload Configuration: Change Node

While data filtering is the most fundamental data processing function, outputting a naked "TRUE" or "1032" to your database is not usually ideal. Typically, you want to take your PLC data and either combine it with other tags or transform it into a more human-friendly form. For example, instead of a contextless payload of "74," I might instead prefer the payload to say: "Room Temp (F): 74." The "change" node helps you do just that. 

In this flow logic, we have some Boolean input giving us a "TRUE" or "FALSE". We use a switch node to route the flow according to the payload value: if the value is "TRUE," we send the message through the first output; otherwise, we route the message to the second output. The change node looks very similar to the switch node, but its function is quite different. As the name implies, we are usually changing the payload itself.

The change node here is configured to change the value of the payload "TRUE" to the string "The motor is on". While this example shows a simple functionality, the change node is extremely powerful. This node enables you to save payloads to Node-Red memory, construct payload objects, and much more. I will provide an example of saving to Node-Red memory later on in this blog.
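A plain-JavaScript sketch of the switch-plus-change combination described above; the "off" message text is invented for illustration:

```javascript
// Sketch of the switch + change pattern: route on the Boolean payload,
// then replace the payload with a human-friendly string.
function describeMotor(msg) {
  if (msg.payload === true) {
    return { ...msg, payload: "The motor is on" };   // first output + change node
  }
  return { ...msg, payload: "The motor is off" };    // second output (illustrative text)
}

const on = describeMotor({ topic: "MotorState", payload: true });
```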

In Practice: S7-In Node & MQTT-Out Node

The inject and debug nodes are useful for troubleshooting, but they do not have any use during actual operation. In practice, we need to use a PLC tag-importing node and some network-out node. In this example, I've used the Siemens S7-In node and the MQTT-Out node. These are the entry and exit points of Node-Red. Raw PLC data comes in through the S7-In node, and processed payloads are sent out through the MQTT-Out node.

Instead of an inject node in the previous examples, you would use the corresponding PLC-In node for your system. The configuration of this PLC-In node is simple: fill in the IP address of the PLC in the "Connection" tab and the PLC tag address(es) in the "Variables" tab.

When addressing PLC tags, you should reference this Node-Red documentation. You can configure this node to inject the PLC tag every so often or only when the tag changes value.

Additionally, you can configure the node to inject every configured PLC tag or one specific tag. Here, I've configured it to only inject the "bDMC" tag that sits in a PLC data block. It is worth noting that for a Siemens PLC, Node-Red can only pull tags from an unoptimized data block. 

The pink node on the right is the MQTT-Out node. Notice how it's wired in parallel to the debug node. This way, whatever is sent through the MQTT-Out node will also show up in the debug window. Each network protocol node is slightly different, so I won't dive into the MQTT node specifically. You shouldn't run into anything out of the ordinary when configuring these nodes.

Storing Data in Node-Red Memory

The last thing I want to touch on is the data storing functionality of Node-Red. Without the use of Node-Red's internal memory, the extent to which you could customize your payloads would be severely diminished.

Consider the following example: I have PLC tags that record the temperature and humidity of a certain room. I want to trigger a flow each time my temperature exceeds a certain threshold. Following the ideas so far, we could easily write a flow that sends the temperature payload to some online database, but what if I wanted the humidity to be sent along with the temperature data? Here, we would need to use Node-Red's capability to store data in flow memory.

This is a basic flow that pulls in every configured PLC tag and stores it in "flow" memory. Node-Red has a "flow" and "global" object in which you can store your tags. These tags are persistent across executions of the program.

As the names imply, you can only access data saved within the "flow" object while inside your flow. Data saved within the "global" object can be accessed across all flows. Remember that "flows" are similar to pages in an Excel sheet. In this example, I'm storing every PLC tag within a "myData" object. This object is nested within the "flow" memory. To call the saved tags, I simply address them by appending their topic (whatever you named them) to the flow.myData address.

This is the configuration of the change node used to store data. Note that the "flow." prefix is selected from the dropdown menu; it defaults to "msg.".
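In a function node, the same store-and-combine pattern can be written with Node-Red's `flow.get()` and `flow.set()` context API. Below is a minimal sketch with the flow context mocked out so it runs standalone; the tag names are illustrative:

```javascript
// Minimal mock of Node-Red's flow context so the sketch runs outside Node-Red.
const store = {};
const flow = {
  get: (key) => store[key],
  set: (key, value) => { store[key] = value; },
};

// Storing flow: each incoming tag is saved under flow.myData.<topic>.
function storeTag(msg) {
  const myData = flow.get("myData") || {};
  myData[msg.topic] = msg.payload;
  flow.set("myData", myData);
}

// Alarm flow: when the temperature trips, send the stored humidity along too.
function buildAlarm(msg) {
  const myData = flow.get("myData") || {};
  return {
    topic: "TempAlarm",
    payload: { temperature: msg.payload, humidity: myData.Humidity },
  };
}

storeTag({ topic: "Humidity", payload: 40 });
const alarm = buildAlarm({ topic: "Temperature", payload: 92 });
```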

Now that you know how to filter data, configure payloads, and store data in Node-Red, you possess the fundamentals for any PLC data-processing needs. 

Learn more about our Industrial IoT Solutions expertise and contact us for your next project!

The post An Introduction to Node-Red: Processing and Sending PLC Data to the Cloud appeared first on DMC, Inc..

How to Programmatically Generate Global Virtual Channels with LabVIEW https://www.dmcinfo.com/blog/17340/how-to-programmatically-generate-global-virtual-channels-with-labview/ Fri, 04 Aug 2023 11:35:51 +0000

How many times have you had to create numerous global virtual channels for a project? It’s the tedious task of going into NI MAX and creating a channel one by one based on an IO sheet.

In this tutorial, I will show you how to programmatically create and save a global virtual channel in LabVIEW. I will tie it all together by generating the same signals from an IO sheet in Excel.

Background

I wanted to reuse a library DMC developed that controls relays and reads their feedback. I preferred not to modify this library because we were referencing the repo in another DMC project; modifying the library would change it for another team’s project. The inputs to this library were global virtual channels connecting to the control and feedback of each relay. I could have manually created global channels for our 100+ relays, but instead, I let LabVIEW do all the work.

NI Max Setup

In this example, I added a simulated cDAQ to NI MAX. I also added two NI 9403 DIO modules and gave them the names below:

screenshot of a computer screen

The Code

The code below creates six global virtual channels, alternating between a relay control (Digital Control) and relay feedback (Digital Feedback) channel.

Screenshot of code on a computer

First, I do modulo two to iterate between the control and feedback enum I made. I then use “format into string” to create the name of the channel. 

To prevent errors, clear the task name first. I also like to delete the task in case it already exists; if it doesn't exist, the delete will throw an error, so make sure to clear errors coming out of that VI.

  • Factoid: You can feed a string into a task name and it will typecast automatically for you.

Create the task by feeding the task coming out of the previous VI into the “New Task Name” input of “DAQmx Create Task”. 

  • Factoid: You can feed a task into a string input as well, and it will be typecast automatically.

Create the hardware line you want to connect the channel to. I do this by using "Format Into String" again. Make sure to use the name of the module you have in NI MAX.

  • Factoid: If you want a list of all available channels, you can drop a “DAQmx Physical Channel” onto the block diagram and browse for available lines.
Image of code on a screen

Create the DAQmx virtual channel. I use a Digital Output for the controls and Digital Input for feedback.

Save the global channel using "DAQmx Save Global Channel.vi".

Below are the channels that were created in NI MAX.

screenshot of code on a laptop

Reading From an IO Sheet

Now that you can programmatically create global virtual channels, you can take it a step further and create channels from information contained in an IO sheet like the one below.

screenshot of spreadsheet on a computer

You can read this information by using “Read Delimited Spreadsheet.vi” and removing the top row of headers. After that, feed the inputs into the code from before, like so.

Screenshot of code on a laptop
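As a hedged, text-based illustration of the same flow (the LabVIEW VIs themselves are graphical), here is what reading a delimited IO sheet and deriving each channel’s physical line might look like in Python. The column layout and all names below are assumptions, not the actual sheet format.

```python
import csv
import io

# A tiny stand-in for the IO sheet; a real sheet would be read from disk.
sheet = (
    "Name\tModule\tLine\tDirection\n"
    "Relay_1_Control\tDO_Module\tport0/line0\tOutput\n"
    "Relay_1_Feedback\tDI_Module\tport0/line0\tInput\n"
)

rows = list(csv.reader(io.StringIO(sheet), delimiter="\t"))[1:]  # drop headers
for name, module, line, direction in rows:
    physical = f"{module}/{line}"  # e.g. "DO_Module/port0/line0"
    print(name, physical, direction)
```

Each row supplies everything the channel-creation loop needs: the channel name, the physical line, and whether to create a Digital Output or Digital Input.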

And voilà! Four new global virtual channels have been made from your spreadsheet.

Screenshot of four global virtual channels

Deleting Channels

You can delete the channels programmatically just as easily as you created them by using “DAQmx Delete Saved Global Channels.vi”.

Screenshot of laptop screen

Why is this useful?

This is useful because you don’t have to manually type global virtual channels into NI Max, which prevents fat-fingering a wrong IO channel and saves time in proportion to how many channels you have. Because the channels are pulled straight from the IO list, it also prevents errors: when collaborating with coworkers on an IO list, you don’t have to worry about someone else making a change in the list that doesn’t get propagated into the code. The code will pick it up!

Another benefit of note is that you can decrease the amount of fluff in an NI Max file: all it needs to contain are the hardware and module names. If a coworker needs to add a channel, they can add it to the IO sheet, and it will automatically be imported into your code when you pull their changes. No more pushing and pulling the NI Max file, and no more importing and exporting channels, which means less committing and merging of NI Max channels between developers.

Learn more about DMC’s LabVIEW Programming and contact us today for your next project!

The post How to Programmatically Generate Global Virtual Channels with LabVIEW appeared first on DMC, Inc..

]]>
Azure IoT Hub Data Processing and Storage https://www.dmcinfo.com/blog/19924/azure-iot-hub-data-processing-and-storage/ Fri, 15 May 2020 12:30:46 +0000 https://www.dmcinfo.com/blog/19924/azure-iot-hub-data-processing-and-storage/ Overview When dealing with several or millions of IoT devices the amount of data can quickly become overwhelming to your processing logic and storage solution. To begin reducing the complexity of dealing with the data, it’s helpful to group the data into one of two main categories: hot or cold. That is to say: does the […]

The post Azure IoT Hub Data Processing and Storage appeared first on DMC, Inc..

]]>
Overview

When dealing with anywhere from a handful of IoT devices to millions of them, the amount of data can quickly overwhelm your processing logic and storage solution. To begin reducing the complexity of dealing with the data, it’s helpful to group it into one of two main categories: hot or cold. That is to say: does the data need to be analyzed immediately (hot), or can we look at it at our own convenience (cold)?

After the data is ingested and initial processing is complete, it needs to be stored. Is the data similar enough that it can be logged in a database (structured), or does it vary widely and need more flexible storage (unstructured)? Also, how much data are you expecting to collect: petabytes, or megabytes? Who needs access to the data for analysis, and what if they’re located on the other side of the world? The delays in processing gigabytes of data while dealing with the latency of traversing the globe can quickly become frustrating.

These are all things that need to be thought through before you begin development. Hopefully, this helps provide some assistance with planning your IoT solution. If you have any additional questions or need help with your IoT solution, send us an email.

Data Processing

When your IoT nodes start producing data, the server(s) need to be ready to handle it as efficiently as possible, which means you need to determine whether the data is ‘hot’ or ‘cold’.

Hot Data

An example of ‘hot’ data might be motor speed. Perhaps the motor can run up to 110% of its nominal speed for up to 5 seconds without issue, but if it exceeds 115% for more than 5 seconds it needs to be disassembled and internal parts replaced. The motor speed information needs to be analyzed and acted upon quickly to avoid downtime and costly repairs.

Cold Data

‘Cold’ data, on the other hand, might be information like the volume of liquid pumped by the motor over the last 5 minutes. Knowing the exact flow at every given moment might not provide any valuable insights, so cold data can be batched and processed when reasonable.

To complicate matters more, hot and cold are really points on a sliding scale, with common nomenclature of ‘hot’, ‘warm’, ‘cool’, and ‘cold’, and data may not fall into a single category. Using the motor speed example from above: the motor speed may also need to be processed as cool or cold data because analysis has shown that motor speed tends to increase marginally day over day before motor failure. So not only is the real-time data important, but saving the data to identify trends later may be important as well. This dual processing path is quite common.
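A minimal sketch of this dual processing path, with an assumed threshold and message shape rather than anything from a real deployment: each reading is evaluated immediately on the hot path and also buffered for batched cold analysis.

```python
# Assumed hot-path threshold from the motor example above (illustrative only).
ALERT_SPEED_PCT = 115

alerts = []      # hot path: readings that need immediate action
cold_batch = []  # cold path: everything is buffered for later trend analysis

def ingest(reading):
    # Hot path: check the value as soon as it arrives.
    if reading["speed_pct"] > ALERT_SPEED_PCT:
        alerts.append(reading)
    # Cold path: keep every reading for batched processing later.
    cold_batch.append(reading)

for pct in (100, 112, 118):
    ingest({"speed_pct": pct})

print(len(alerts), len(cold_batch))  # 1 alert raised, 3 readings buffered
```

The same message flows down both paths, which is exactly why the hot/cold split is a routing decision rather than a partition of the data.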

Structured vs Unstructured

Will all the IoT nodes produce data with the same structure and parameters, or is each node unique, with little to no overlap in data structure between nodes?

Azure Data Lake

Image credit to Microsoft.

Blob Storage

To log unique (unstructured) data, Azure Blob Storage is likely your best option. It’s not as unmanageable as ‘blob’ makes it sound; the name just refers to the fact that it’s meant to hold many types and models of data, from videos and text to pictures and audio recordings. Blob storage is roughly analogous to your desktop or laptop computer’s file system, with folders that contain other folders and files of many different types. It’s the default storage option when creating a storage account.

Blob storage can be broken down into three types: Append, Page, and Block.

  • Append: Data can only be appended to the blob. Think of a log file: new entries are appended to the end, and the blob continues to grow.
  • Page: Good for data that needs frequent read/write access with high performance and low latency. General-purpose storage.
  • Block: Optimized for large data sets, with parallel reads and writes to different blocks. Best for handling large (multi-gigabyte) files over the network.

Page and Block blobs can handle up to terabytes of data, but there comes a point where organizing ‘Big Data’ starts to become a limiting factor. Your data is worthless if you can’t efficiently query it. That’s when you need to consider a solution like Azure Data Lake, which provides additional organizational and security tools, as well as some built-in analytics.

CosmosDB

If the data is in a more consistent format, you may want to consider CosmosDB as the storage solution instead of blob storage. CosmosDB offers several features that may be beneficial for your organization and your data; like SQL databases, it provides the option to specify indexes on the data, which can significantly improve query performance.


Image credit to Microsoft.

Also, CosmosDB was built with data sharing and replication in mind. A CosmosDB database can be configured to replicate data to multiple Azure regions around the globe giving employees in both North America and Australia, for example, access to the same data at the same time. It’s also possible for both regions to write data back into Cosmos at the same time. This is all in addition to a failover location which can be configured to take over operations in the event of a natural disaster or civil disturbance in the primary location.

Learn more about DMC's IoT solutions and Azure solutions. Contact us for more information. 

The post Azure IoT Hub Data Processing and Storage appeared first on DMC, Inc..

]]>
DMC Joins Siemens’ MindSphere Partner Program as a Gold Partner https://www.dmcinfo.com/blog/19927/dmc-joins-siemens-mindsphere-partner-program-as-a-gold-partner/ Thu, 14 May 2020 16:22:45 +0000 https://www.dmcinfo.com/blog/19927/dmc-joins-siemens-mindsphere-partner-program-as-a-gold-partner/ Partnership enables DMC to deliver new MindSphere Industrial IoT applications DMC is proud to announce that we have joined the MindSphere Partner Program, Siemens’ partner program for Industrial IoT solution and technology providers. “I’m excited to continue DMC’s history of partnering with Siemens by joining the MindSphere partner community,” said DMC President and CEO, Frank […]

The post DMC Joins Siemens’ MindSphere Partner Program as a Gold Partner appeared first on DMC, Inc..

]]>
Partnership enables DMC to deliver new MindSphere Industrial IoT applications

DMC is proud to announce that we have joined the MindSphere Partner Program, Siemens’ partner program for Industrial IoT solution and technology providers.

“I’m excited to continue DMC’s history of partnering with Siemens by joining the MindSphere partner community,” said DMC President and CEO, Frank Riordan. “With over two decades of experience in factory automation, we are well-positioned to provide industry-leading IIoT solutions to our clients.”

About MindSphere

MindSphere®, the cloud-based, open IoT operating system from Siemens, connects products, plants, systems, and machines, enabling businesses to harness the wealth of data generated by the Internet of Things (IoT) with advanced analytics.

MindSphere Partnership

DMC recently completed a successful MindSphere project for a manufacturer that had no network connectivity. We were one of the first integrators to use the WinCC OA Connector to pull data from hundreds of standalone OEM machines and push it to MindSphere. DMC developed a comprehensive data aggregation system that provides valuable insight into operational data, highlighting inefficiencies and improving factory effectiveness.

As a MindSphere Gold Partner, DMC has technical staff trained by Siemens through the MindSphere technical curriculum, multiple MindSphere applications developed or being developed, and a joint go-to-market agreement to assist our customers in achieving substantial business value through IoT technology generally and MindSphere specifically.

Our partnership with Siemens demonstrates DMC’s continued commitment to providing industry-leading solutions for industrial manufacturing. As a result of the partnership, we will be able to augment our current IoT offerings with MindSphere and pass on the benefit of the latest in training and technology to our clients.

“Siemens welcomes DMC to the MindSphere ecosystem as it extends its long-standing partnership with Siemens,” said Rohit Khera, Global Vice President Strategic Alliances, Siemens Digital Industries Software.  “DMC’s deep industry expertise and its focus on providing IIoT solutions that pull and process data from systems with low connectivity, high complexity, and diverse platforms can provide our joint customers with great value in their digital transformation initiatives.”

DMC and Siemens

DMC has comprehensive experience with a wide range of Siemens technologies and hundreds of successful project implementations for customers around the world. Our expertise includes recognition as a MindSphere partner, a SIMATIC IT Partner, a member of the Siemens MEAC (MOM Expertise Alliance Center) Alliance Partner Network, Siemens Solution Partner certifications in WinCC SCADA, Industrial Communications, and Advanced Factory Automation, and having a very large number of Siemens S7 certified engineers in North America.

About DMC

DMC is a project-based engineering and software development firm focused on software development and control systems. We develop and implement solutions for a wide range of industries using a variety of technologies and platforms. Since 1996, DMC has succeeded in helping hundreds of clients increase efficiency and productivity by delivering world-class solutions throughout the globe.

###

Contact
Jessica Mlinaric
jessica.mlinaric@dmcinfo.com
312.255.8757

The post DMC Joins Siemens’ MindSphere Partner Program as a Gold Partner appeared first on DMC, Inc..

]]>
DMC to Host IIoT Webinar https://www.dmcinfo.com/blog/20003/dmc-to-host-iiot-webinar/ Tue, 21 Apr 2020 14:43:48 +0000 https://www.dmcinfo.com/blog/20003/dmc-to-host-iiot-webinar/ DMC is excited to share that we are hosting an upcoming webinar on the Industrial Internet of Things (IIoT) led by DMC Project Director Patrick Corcoran. There are two opportunities to attend the webinar on May 8 and May 13, 2020. About the Webinar The IIoT Ready PLC- Your Journey From the Edge to the […]

The post DMC to Host IIoT Webinar appeared first on DMC, Inc..

]]>
DMC is excited to share that we are hosting an upcoming webinar on the Industrial Internet of Things (IIoT) led by DMC Project Director Patrick Corcoran. There are two opportunities to attend the webinar on May 8 and May 13, 2020.

About the Webinar

The IIoT Ready PLC - Your Journey From the Edge to the Cloud
DMC has been helping many clients make IIoT a reality for their business. Join us for an informational session on the IIoT of today, exploring platform options, PLC capabilities, and next steps toward bringing your cloud-connected applications to life.

Registration

There are two opportunities to attend the IIoT webinar:

Friday, May 8
2 – 3 p.m. CT
Register here

Wednesday, May 13
2 – 3 p.m. CT
Register here

Learn more about DMC’s IIoT expertise

The post DMC to Host IIoT Webinar appeared first on DMC, Inc..

]]>
How To Setup a WinCC OA Application to Push Data to MindSphere https://www.dmcinfo.com/blog/20593/how-to-setup-a-wincc-oa-application-to-push-data-to-mindsphere/ Thu, 07 Nov 2019 10:39:24 +0000 https://www.dmcinfo.com/blog/20593/how-to-setup-a-wincc-oa-application-to-push-data-to-mindsphere/ MindSphere and WinCC OA are two powerful tools offered by Siemens that can allow for streamlined aggregation, processing, and visualization of massive volumes of data. Now thanks to some in-built functions within WinCC OA, you can, with a bit of setup and planning, easily pass your data from your OA application up to your MindSphere […]

The post How To Setup a WinCC OA Application to Push Data to MindSphere appeared first on DMC, Inc..

]]>
MindSphere and WinCC OA are two powerful tools offered by Siemens that can allow for streamlined aggregation, processing, and visualization of massive volumes of data. Now thanks to some in-built functions within WinCC OA, you can, with a bit of setup and planning, easily pass your data from your OA application up to your MindSphere tenant. In this blog, I’ll walk you through the steps of setting up this connection, and I’ll offer some added commentary and advice to make your experience easier.

Setup: WinCC OA Manual Generation

The first thing you need to do is create your assets in both the WinCC OA project and the MindSphere tenant. I’ll start with the WinCC OA project. Open Gedi in your project and, from the top toolbar, navigate to ‘SysMgm’ > ‘Settings’ > ‘MindSphere Configuration.’ This will open a window like the one shown below.

The MindSphere Configuration menu allows you to organize your data structure to meet the needs of your SCADA, IoT, or other types of applications

Within this window you can manually instantiate datapoints for your MindSphere application. There are three levels of organization to your MindSphere data: at the highest level are assets; within an asset there are data sources; and within those data sources there are datapoints.

What exactly defines an asset or data source within your project is up to you. For instance, an asset could represent one of several lines on a factory floor, and each data source could be a machine or PLC used within that line. It is up to you to decide how best to structure your data.

To create an asset, simply right-click the whitespace in the column on the left and select ‘Create MindSphere Asset.’ You can right-click that asset to add a data source and right-click that data source to add datapoints. When creating assets and data sources, all you really configure is the name. When creating datapoints, you must also select a datatype and enter the units. Datatype selection is limited to ints, longs, doubles, Booleans, and floats, so be aware of this when creating your application. It is also crucial that you do not leave units blank.

To map your OA data point to a data point in an asset in MindSphere, the units on the two data points must match exactly. We’ll touch more on this later.

Setup: WinCC OA Scripted Generation

If you are working with a large volume of datapoints, then manually entering them all, as I showed above, can be impractical. Fortunately, there is a way to script the generation of these points. Whenever an asset, data source, or datapoint is created in the MindSphere Configuration menu, an internal datapoint of the type _MC_Asset, _MC_Datasource, or _MC_Datapoint, respectively, is created in Para.

The structure of the internal datapoints created in Para by the MindSphere Configuration tool

What this means is that you can write a script that uses the dpCreate and dpSet functions to create your data structure and populate the necessary fields. You must be careful when doing this, though, to set up your data correctly. Make sure your asset’s DataSources field is populated with the names of all associated data sources.

Likewise, make sure each data source has its associated points and asset entered, and that each point has its associated data source. Don’t forget to set your points’ unit and type as well; type is represented by an integer from 1 to 5, where each number equates to one of the datatypes mentioned earlier. Try making a simple data structure first, as I did here, and dig around the generated internal points so you understand how it’s all supposed to be set up.

Setup: MindConnect Lib Asset

Next, we will create our MindConnect Lib asset. The MindConnect Lib assets are what your assets in OA establish a connection with and upload data to. There is a one-to-one relationship between MindConnect Lib assets in the tenant and assets in your OA project, so you will need to make one MindConnect Lib asset for each asset in your OA project.

From your home page in MindSphere, go to the asset manager and select ‘Create asset.’ Select MindConnectLib and give your asset a name.

MindSphere offers assets designed for applications with IoT, OPC UA and much more but here we are interested in the MindConnectLib asset type

Next, we’ll set up our connection to the OA asset. Click the large button with the puzzle piece on it and select your preferred security profile.

MindSphere is the missing piece to the puzzle that is making a complete Cloud based data solution for your Industry 4.0 application

I’ve used SHARED_SECRET for all of my applications. After hitting continue, you’ll be taken to another screen with the boarding configuration. Click the buttons to generate a boarding key and copy it to your clipboard.

Now go back to the OA MindSphere configuration menu, click on your asset, and paste the onboarding key into the white box then hit the upload button at the bottom.

Once your asset is onboarded you've established a connection between your MindSphere Tenant and OA project

Your assets in the OA project and tenant are now connected. You’ll notice that the Configuration ID has been filled in for your asset and all associated datapoints. Now, if you go back to your asset in the MindSphere tenant and click the button with the puzzle piece on it, you’ll be taken to a new screen.

Here you can see the configuration of your asset and its data mappings. The configuration shown should match exactly with what you set up in OA.

The configuration displayed here should match exactly with your OA configuration

Setup: MindSphere Assets for Data Mapping

To actually work with and visualize the data that you’re sending to your MindConnect Lib asset, you need to create assets to map that data to. To do this, you will need to create some aspects and types. Both aspects and types are configurable data structures: a type contains aspects, and you create instances of types in the assets menu, just as you created the MindConnect Lib asset.

Similar to the structure of the MindSphere configuration in the OA project, it’s up to you to decide how best to structure your data. To make an aspect, go to your tenant’s asset manager, select ‘Aspects’ from the menu on the left, and select ‘Create aspect.’ Add at least one variable to make a valid aspect, and make sure to set a ‘Unit’ value that exactly matches the units you set up in OA for the datapoint you intend to map to it. You will not be allowed to map your datapoints if the units don’t match.

A simple aspect containing only one data point. Remember to match aspect units to units set on Datapoints in the OA configuration
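As a toy illustration of the unit-matching rule (this is not a MindSphere API; the names and units below are made up): a mapping only succeeds when the OA datapoint’s unit string exactly matches the aspect variable’s unit.

```python
# Hypothetical OA datapoint units vs. aspect variable units (illustrative only).
oa_units = {"MotorSpeed": "rpm", "OilTemp": "degC"}
aspect_units = {"MotorSpeed": "rpm", "OilTemp": "C"}  # "C" != "degC"

# A point is mappable only when the unit strings are exactly equal.
mappable = {name: aspect_units.get(name) == unit
            for name, unit in oa_units.items()}
print(mappable)  # OilTemp is unmappable because the unit strings differ
```

Even a semantically equivalent unit written differently (“C” vs. “degC”) blocks the mapping, which is why it pays to pick one unit convention up front.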

Similar to how you created an aspect, you will now create a type. Types are just another layer in the data hierarchy of MindSphere: they can contain variables and aspects, and they have an associated image for organization. Additionally, an instance of a type is an asset, whereas you cannot instantiate aspects as assets.

Add whatever aspects and variables seem sensible for your type, but be aware that you cannot map your OA data points to the standalone variables in a type. They must be mapped to a datapoint in an aspect in that particular type.

A simple type containing my aspect and an extra variable

Now go back to assets and create an instance of the type you just created. Go to the Configuration/Data mappings page for your MindConnect Lib asset and go to the Data mappings tab. If you expand the data structure you should see a button that says ‘Link variable’ for each data point. Click on that and navigate to the appropriate data point in the instance of the custom type that you created.

Data must be mapped from the MindConnectLib asset to a custom type to be viewed and processed

Once mapped, it should look something like the above image. Something significant to note is that these datapoint mappings are somewhat fragile: any changes made to your OA assets, like adding points or sources or changing units, will undo these mappings. For that reason, you should try to finalize your data structure as much as possible before going through the effort of mapping points.

Pushing Data to MindSphere

You have now made all the necessary arrangements and are ready to push that data up to MindSphere from your OA project. I’ve made a straightforward script that goes over all the essential actions that must be done to upload data.


#uses "classes/mindSphere/MindSphereAsset"

// Example script on how to push data from OA to MindSphere
main()
{
  // Instantiate your MindSphere asset within the script
  MindSphereAsset asset = MindSphereAsset("TestAsset");

  // Generate some sample data to send to MindSphere
  dyn_int testValues = makeDynInt(13, 67, 89);

  // Create some time values to send up with your data points
  time currentTime = getCurrentTime();
  dyn_time times = makeDynTime(currentTime - 2, currentTime - 1, currentTime);

  // Prepare your values to be uploaded
  int err = asset.prepareValues("TestPoint1", testValues, times);

  // Upload values to MindSphere
  err = asset.uploadPreparedValues();
  if (err == 0) {
    DebugN("Upload Successful!");
  }
}

Note the steps in the above script. You must first instantiate your asset within your script via the MindSphereAsset function. The string inputted here must be identical to the name of the asset you created earlier in the configuration menu.

Next, I make an array of sample data to upload. You can also upload points one at a time, but it will be more computationally intensive. For every bit of data that you create, you also need a timestamp; here I take the current time and offset it by one and two seconds. Before you upload your values, you must prepare the values for upload.

The first input here is the name of the datapoint you’re uploading to, and it must exactly match the name of the intended datapoint. After you’ve prepared your values, call asset.uploadPreparedValues to upload all values that have been prepared for that asset since the last successful upload. You can call asset.prepareValues as many times as necessary before executing the upload, and uploading data in large batches rather than small chunks will be much faster computationally.

If the upload was successful, then the value of err, the output from the upload function, should be equal to zero.

Viewing Your Data in MindSphere

There’s a ton that you can do with your data once it’s in MindSphere, but that’s a whole different discussion, so here we are simply going to view the time series of our data. To view your data, go back to the home page of your MindSphere tenant and navigate to the Fleet Manager. Search for the instance of the custom type that you created and click on it. The page should then show all the aspects associated with your type in the central window. Click on an aspect to see the time series of incoming data. To view data that cannot be graphed, like strings and Booleans, click on the checkerboard-looking button in the bottom right corner under the graph.

As you can see in the figure below, there are two instances of the three values I sent with my simple script because I ran the script twice.

Time series data is conveniently easy to view in the Fleet Manager.

Final Tips

  • Take time to plan out your data organization in advance.
    • The systems of aspects, types, and assets (in the tenant) and points, sources, and assets (in the OA project) can seem convoluted at first, but if you take the time to plan out your project, then these organizational layers can prove very useful.
  • Be careful of breaking your tag mappings in MindSphere.
    • Remember that making any changes to your OA asset configuration will break all of your tag mappings, and you will have to delete and reestablish those mappings manually. This is another reason to have a good organization plan in mind before digging into things.
  • One call to upload 1,000 datapoints is a lot faster than 1,000 calls to upload one datapoint each.
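The last tip can be illustrated with a toy cost model; the overhead numbers below are made up, but the shape of the result is the point: each upload call pays a fixed overhead, so one batched call beats many single-point calls.

```python
CALL_OVERHEAD_US = 50_000  # assumed fixed cost per upload call, in microseconds
PER_POINT_US = 100         # assumed marginal cost per datapoint

def upload_time_us(points, calls):
    """Total time when `points` datapoints are spread evenly over `calls` calls."""
    points_per_call = points // calls
    return calls * (CALL_OVERHEAD_US + points_per_call * PER_POINT_US)

print(upload_time_us(1000, 1))     # one batched call of 1,000 points
print(upload_time_us(1000, 1000))  # 1,000 calls of one point each
```

With these assumed numbers, the per-call overhead dominates the one-at-a-time approach by orders of magnitude, which matches the advice to prepare many values before each call to uploadPreparedValues.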

Learn more about DMC's WinCC OA services and our Siemens expertise. Contact us with any inquiries. 

The post How To Setup a WinCC OA Application to Push Data to MindSphere appeared first on DMC, Inc..

]]>
4 Tips for Launching a Product on Emerging Cellular Networks https://www.dmcinfo.com/blog/20846/4-tips-for-launching-a-product-on-emerging-cellular-networks/ Fri, 04 Oct 2019 14:51:46 +0000 https://www.dmcinfo.com/blog/20846/4-tips-for-launching-a-product-on-emerging-cellular-networks/ The major cellular network providers (AT&T, Verizon, T-Mobile) have launched or are in the process of launching upgrades to their LTE networks that will enable the deployment of millions of new, previously impractical IoT solutions. LTE-M (aka CAT-M1) and NB-IoT networks are designed specifically for low-power, low-cost, low-bandwidth devices. By reducing the bandwidth, carriers can offer connectivity at increasingly lower monthly costs. Devices […]

The post 4 Tips for Launching a Product on Emerging Cellular Networks appeared first on DMC, Inc..

]]>
The major cellular network providers (AT&T, Verizon, T-Mobile) have launched or are in the process of launching upgrades to their LTE networks that will enable the deployment of millions of new, previously impractical IoT solutions. LTE-M (aka CAT-M1) and NB-IoT networks are designed specifically for low-power, low-cost, low-bandwidth devices. By reducing the bandwidth, carriers can offer connectivity at increasingly lower monthly costs. Devices using cellular modules may cost as little as $1 monthly.

These new networks are the future of IoT. However, as of this writing, many of them still have a lot of kinks to work out. Cellular module vendors are rapidly developing and launching modules for these networks. These modules are all new, and so are the networks, so challenges launching a product on one of them are to be expected.

What to Do

1. Know your Module Vendor
When selecting a cellular module vendor, make sure they have a good support network. With new network deployments, there will be a LOT of updates happening in the background; the cell providers will be making adjustments, and so will the module vendors. It’s important to understand the level of support you will get from the module vendor if your device happens to uncover a bug or incompatibility between the module and the network, or worse, between the module and a specific cell tower.

u-blox SARA-R4 series, SIMCom SIM7000X, and Nordic nRF9160

2. Understand the Module Power Requirements
Most of these cellular module vendors provide a lot of marketing material about the low power requirements of their modules, promising battery-powered cellular connectivity with years of battery life. It’s true that these modules have very low average currents while operating. The important thing to understand is that they also have relatively high peak current demands, which means you need to select your battery and design your power circuitry carefully. You won’t see any cellular IoT devices running on coin cells anytime soon.

Lithium Thionyl Chloride is a popular cell chemistry for super-long-life Wi-Fi or Bluetooth IoT devices, but most of the batteries made with this chemistry fail to provide the high peak current required by cellular modules. Spiral wound Lithium Thionyl Chloride batteries use the same chemistry but provide higher peak current (by increasing the electrode surface area); however, these batteries may still not meet the peak current demands.

Spiral bound lithium battery diagram
Spiral wound Lithium Thionyl Chloride batteries

Instead, you may need to consider hybrid batteries like these from Xeno Energy, which consist of a Lithium Thionyl Chloride cell and a parallel supercapacitor. The supercapacitor can provide high peak currents while exhibiting very low leakage. Although the cost is higher for this battery topology, it provides the performance needed for long-life cellular IoT devices.

Hybrid battery graphic
Hybrid battery

3. Test Your Module in a Wide Variety of Conditions
These modules contain closed-source vendor-supplied firmware. The operation of the module may not be fully documented. Part of your testing protocol should include adding attenuators to the cellular antenna (to simulate poor cellular reception) and observing how the module functions with reduced signal strength.

Modules draw significantly more average current when operating at reduced signal levels. Not knowing this ahead of time can be problematic, especially if your product advertises a minimum battery life expectancy. The actual battery life will largely depend on your connection to the cellular network: the total energy required to transmit a message from your device to the backend server can vary significantly with your signal strength and the number of retries required.

Add attenuators to the cellular antenna for testing
Add attenuators to the cellular antenna to test at a reduced signal strength
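A back-of-envelope model shows how strongly signal strength can affect battery life; every number below is an assumption for illustration, not a measurement from any specific module or battery.

```python
BATTERY_MAH = 8500  # assumed capacity of a Lithium Thionyl Chloride D cell

def battery_life_days(avg_current_ma):
    """Days of life at a given fleet-average current draw (ideal, no derating)."""
    return BATTERY_MAH / avg_current_ma / 24

good_signal_ma = 0.5  # assumed average draw with strong reception
poor_signal_ma = 2.0  # assumed average draw with attenuation and retries

print(round(battery_life_days(good_signal_ma)))  # ~708 days
print(round(battery_life_days(poor_signal_ma)))  # ~177 days
```

A 4x increase in average current, entirely plausible between a strong and an attenuated signal, cuts the estimated life by the same factor, which is why attenuator testing belongs in your protocol.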

4. Expect and Plan Time for Network Issues
These networks are going to be great, but until they are fully stable, your product launch is going to go slower than you hoped. Add extra time to your launch schedule to account for these unknowns, and extend your engineering budget to account for the time required to work through these issues.

Keep in mind that different cell towers may contain hardware from different vendors. Each of these vendors may be interpreting the CAT-M1/LTE-M specification differently, and you may encounter situations where the firmware in your device performs better on some towers than others. If you encounter issues like this, move your device to a different location to see if it picks up a different tower and starts working. Eventually, these issues will be worked out by the carriers and module providers, but you should be prepared to work through these situations during your testing and product rollout.

Two cell towers
The firmware in your device may perform better on some cell towers than others due to hardware differences

Learn more about DMC’s IoT solutions

The post 4 Tips for Launching a Product on Emerging Cellular Networks appeared first on DMC, Inc..

]]>