Application Development Archives | DMC, Inc. https://www.dmcinfo.com/blog/category/application-development/ Thu, 29 Jan 2026 21:33:01 +0000 en-US hourly 1 https://wordpress.org/?v=6.8.3 https://cdn.dmcinfo.com/wp-content/uploads/2025/04/17193803/site-icon-150x150.png Application Development Archives | DMC, Inc. https://www.dmcinfo.com/blog/category/application-development/ 32 32 Custom IoT Development Services https://www.dmcinfo.com/blog/41266/custom-iot-development-services/ Mon, 02 Feb 2026 13:00:00 +0000 https://www.dmcinfo.com/?p=41266 The Internet of Things (IoT) is a rapidly growing and evolving technical niche. Driven by the convenience and transparency gains associated with linking a physical object to a digital presence, more businesses are exploring IoT integration as part of their systems. Choosing a platform for an IoT solution is an important part of the process, […]

The post Custom IoT Development Services appeared first on DMC, Inc..

The Internet of Things (IoT) is a rapidly growing and evolving technical niche. Driven by the convenience and transparency gains associated with linking a physical object to a digital presence, more businesses are exploring IoT integration as part of their systems. Choosing a platform for an IoT solution is an important part of the process, with many options and tradeoffs to consider. Here, we will be discussing when a custom IoT solution is a good choice.

IoT Solutions Overview

IoT solutions typically comprise a fleet of field devices, a cloud-hosted hub with which the devices communicate, and a user portal providing visualization and control.

The field devices could be a wide variety of “things,” including single-purpose sensors, consumer electronics, manufacturing equipment, vehicles, and many others. Each device is provided with a way to identify itself as unique within the fleet and a protocol for communicating with the hub.

The hub sends messages to and receives messages from the devices, and it processes and stores data for the portal to consume.

The portal provides an interface for a user to view, handle, and react to the data provided by the field devices and may take the form of a web interface, a mobile application, or both.

These solutions provide value for their end users through increased data availability and transparency, as well as convenient device management and control. On the reporting side, messaging from the devices can relay status, utilization, and data for aggregate reporting across the system. On the management side, the portal provides an easy way to view information about the devices, download updates to the devices, or control the configuration in the field.
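The device-to-hub messaging described above can be sketched as a small status payload. This is only an illustration: the field names (`device_id`, `status`, `utilization`) are assumptions for the example, not a standard IoT schema.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class DeviceStatus:
    """Illustrative telemetry payload a field device might send to the hub."""
    device_id: str      # unique identity within the fleet
    status: str         # e.g. "running", "idle", "fault"
    utilization: float  # fraction of capacity in use, 0.0-1.0

    def to_message(self) -> str:
        # Serialize to JSON for transport over the device-to-hub protocol.
        return json.dumps(asdict(self))

    @staticmethod
    def from_message(raw: str) -> "DeviceStatus":
        # The hub parses the message back into a structured record.
        return DeviceStatus(**json.loads(raw))

msg = DeviceStatus("pump-017", "running", 0.82).to_message()
decoded = DeviceStatus.from_message(msg)
```

A structured, self-describing payload like this is what lets the hub aggregate status and utilization across the fleet for reporting.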

Custom vs. Off-the-Shelf

There are existing solutions available for purchase for a number of use cases that benefit from IoT integration; the most familiar of these might be consumer systems like thermostats or security systems. Other places you might see an off-the-shelf IoT solution could be inventory tracking for retail, or “smart building” solutions that monitor energy use and HVAC conditions.

The alternative to an off-the-shelf product is a custom-built solution, where the user application, cloud infrastructure, devices, or all three are self-managed. These solutions might provide a way to add additional visibility to an existing process or define a new system with unique reporting and management requirements, and can be applicable across any industry, with examples ranging from agriculture to logistics to consumer products.

As such, a necessary choice when deciding to incorporate IoT is whether to go with an off-the-shelf solution or to build a custom setup.

When to Build a Custom IoT Solution

Custom IoT solutions provide advantages in flexibility and control over their off-the-shelf counterparts. Here are some cases where those advantages might make building a custom IoT solution the right choice:

Creating or Integrating a Custom Device

When working with a custom device, the ability to control the messaging capabilities, formatting, and frequency that a custom solution provides can be very useful. Additionally, setting up the cloud side of the system to work directly with the device allows for extended remote capabilities, such as Over the Air (OTA) updates to the devices and direct control of the device or device configuration.

Specific Management or Reporting Requirements

Making specific workflows or reports work with vendor systems can be a challenge. Therefore, custom solutions deliver value in this area by reducing or removing the dependency on external systems; if the solution is built custom, it can be built to match the desired workflows and provide the desired data from the end device without excessive configuration.

Maintaining Future Flexibility

Custom solutions can change as the system does; if components change or new requirements come up, the solution can be updated to match. In addition to uncoupling the solution from a vendor’s roadmap, this can also facilitate the agile development of new systems by allowing solution components to evolve together.

Infrastructure and Cost Control

Custom solutions provide direct access to and control over the associated cloud resources. This provides complete control over how data is routed, stored, and secured relative to other business data, rather than depending on a third-party cloud tenant. If a business already maintains a cloud tenant, the infrastructure required for an IoT solution can frequently be added in a straightforward way. Direct access to these resources can also provide better visibility and control over recurring hosting costs, rather than this information being obscured by a license.

Advantages of Working with a Software Engineering Firm for Custom IoT Solutions

If building a custom IoT solution looks like the right option for your business, a software engineering firm can help your implementation project run smoothly and be completed successfully. Working with a team of engineers experienced in custom specifications and implementing the necessary components brings a breadth of experience to building the solution that you may not otherwise achieve.

Technology Expertise

The first advantage a firm like DMC can bring to your project is expertise in the technologies underlying IoT solutions. This includes experience designing and writing firmware for custom devices, implementing cloud architectures, and developing custom web or mobile applications. This expertise enables the team to build system components efficiently and cost-effectively and implement the communication interfaces between them. Additionally, experience in the platforms used means implementation can avoid common pitfalls.

Thorough Design Process

Another advantage of working with a software firm to build out a custom IoT solution is the thorough design process. Since the team frequently works to customer specifications, there is an established process to make sure the solution is designed to best match all of your requirements.

First, engineers will work with you to refine your requirements into specifications for user workflows and device communication. Then, the UI/UX team can develop mock-ups of the project interfaces. After review, the team can begin building the components and regularly review them with the client team to ensure alignment.

Project Management

Finally, working with a software firm also brings the advantage of a dedicated project manager and an established project management process for your implementation. The project manager is familiar with the tasks required to deliver the solution successfully and is equipped with tools to track the schedule, budget, and requirements. A standard cadence of meetings and status updates keeps you involved in the development effort, able to provide feedback, ask questions, and guide the solution over the course of the project. Additionally, as the project evolves, the dedicated project manager can quickly reprioritize tasks and generate new specifications as needed.

Explore Our Work in IoT

Ready to take your Custom IoT project to the next level? Contact us today to learn more about our solutions and how we can help you achieve your goals.

A Complete Guide to Planning Your IIoT Solution https://www.dmcinfo.com/blog/20635/a-complete-guide-to-planning-your-iiot-solution/ Fri, 26 Sep 2025 16:00:00 +0000 https://www.dmcinfo.com/?p=20635 IoT or Internet of Things is a “system of interrelated computing devices, mechanical and digital machines, objects, animals, or people provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.” The Internet of Things continues to develop as technology advances, and the need to interact with […]

The post A Complete Guide to Planning Your IIoT Solution appeared first on DMC, Inc..

IoT, or the Internet of Things, is a “system of interrelated computing devices, mechanical and digital machines, objects, animals, or people provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.” The Internet of Things continues to develop as technology advances, and with it the need to interact with devices in new ways.

Out of IoT, the Industrial Internet of Things (IIoT) has emerged as a common and necessary term. This guide provides insight into IIoT and into DMC’s process for completing successful IIoT projects.


IIoT Overview

Much like IoT, IIoT uses sensors and other technology but in an industrial setting to leverage real-time data to monitor and control devices in the field and communicate and display that data in a way that allows for better decision making in industrial processes.


Our Five-Step Process

DMC has completed hundreds of projects incorporating a wide range of solutions. Through experience, our engineers have developed a process for implementing IIoT solutions. We start at the lowest level and then fill in the gaps, working up the stack.

DMC IIoT Five Step Process

Step One: Field Device Platform Selection

Getting reliable, accurate data from the physical system is the primary challenge of any system design, and selecting the field hardware is the most important decision you can make. Once you secure the data, you can do anything you want with it. Answer the following questions to select the hardware used in the field:

  • What are you trying to measure or control?
  • What does the device in the field need to be able to do on its own? How complex?
  • How many devices do you anticipate deploying?
  • Are you the end-user and maintainer of this equipment, or are you selling a solution (operated and maintained by others)?
  • What does the ideal process for provisioning a new device look like?
  • How will this device be powered?

Keep in mind that the software platform is determined by the hardware selection. For example, when you pick a particular PLC, you have to use the manufacturer’s software to program it; for an embedded device, the firmware will typically be written in C or C++.

Device Hardware: Selecting the Right Parts

Let’s say the average PLC costs $1,000; every time you put one out in the field, that is another $1,000 in hard costs. PLCs are great for some applications, while vendors such as NI have developed specialized hardware for others. DMC engineers can leverage experience from hundreds of completed projects to advise on the best choice for each project.

  • 0–10 devices: small deployments
  • 10–100 devices: larger deployments; still off-the-shelf products, but cost-optimized decisions start to matter
  • 100–1,000+ devices: discuss a custom embedded solution, because off-the-shelf hardware starts to get expensive at scale

Step Two: Determine Communications

After determining your device platform, deciding how your networking devices are configured is key. Consistent communication between devices is essential. DMC’s engineers help scope what needs to be done. Consider the following:

  • How are you going to communicate with devices?
  • Where is the internet coming from?
    • Cellular, Wi-Fi, from the plant?
  • What happens when the internet is not available?
    • Local caching, buffer and retry, operational impacts
  • What are the protocol security requirements?
    • Encryptions, certificates, secure comm management
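The “local caching, buffer and retry” behavior above can be sketched as a small outbox: messages queue locally while the uplink is down and flush once it returns. The `send` callback and queue bound here are illustrative assumptions, not a specific product API.

```python
from collections import deque
from typing import Callable

class MessageBuffer:
    """Buffer outgoing messages while the uplink is unavailable, then retry."""
    def __init__(self, send: Callable[[str], bool], max_size: int = 1000):
        self._send = send                       # returns True on successful delivery
        self._pending = deque(maxlen=max_size)  # oldest messages drop first when full

    def publish(self, message: str) -> None:
        self._pending.append(message)
        self.flush()

    def flush(self) -> int:
        """Attempt delivery of queued messages in order; stop at first failure."""
        delivered = 0
        while self._pending:
            if not self._send(self._pending[0]):
                break  # link still down; keep the message for the next retry
            self._pending.popleft()
            delivered += 1
        return delivered

# Simulate an outage: sends fail while offline, then succeed once back online.
online = {"up": False}
sent = []
buf = MessageBuffer(lambda m: online["up"] and (sent.append(m) or True))
buf.publish("t=1 temp=70")
buf.publish("t=2 temp=71")
online["up"] = True
buf.flush()
```

Note the bounded queue: deciding what to drop when storage fills is one of the operational-impact questions this step should answer.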

Step Three: Determine Cloud Platform

There are a lot of cloud platforms to choose from when you reach this phase of the process. Ask yourself: what out-of-the-box services (provided by these hosting platforms) will your application need or take advantage of? Options include Azure, AWS, and Google Cloud.

This phase is when we need to assess how to save on custom development, and where it’s possible to use solid foundational pieces already developed for these types of applications. Ask yourself:

  • Do you need a website?
  • Do you need database(s)?
  • Do you need user management?
  • Do you need integrations to other cloud services?
  • Do you need SMS, e-Mail, or other mass notification capabilities?
  • Do you need an AI engine or advanced analytics support?
  • Do you need a flexible reporting framework (points to things like PowerBI)?
  • What type of data store is needed?
    • How much data?
    • How often will it be sampled?
    • How will the data be used?
  • Where and how will security be enforced for cloud resources?
  • How many monthly active users do you anticipate for this cloud application?
  • What in house cloud/web development resources do you have?
    • What are they comfortable with and willing to maintain?
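The “how much data” and “how often sampled” questions above reduce to simple arithmetic. The figures below (500 devices, one 200-byte sample per minute) are assumptions chosen purely for illustration.

```python
def monthly_storage_bytes(devices: int, samples_per_minute: float,
                          bytes_per_sample: int) -> int:
    """Rough data-store sizing: devices x sample rate x payload size over 30 days."""
    minutes_per_month = 60 * 24 * 30
    return int(devices * samples_per_minute * minutes_per_month * bytes_per_sample)

# Example: 500 devices, one 200-byte sample per minute.
estimate = monthly_storage_bytes(500, 1, 200)
print(f"{estimate / 1e9:.1f} GB/month")
```

Even a back-of-the-envelope number like this helps choose between a time-series store, a relational database, or cheaper blob storage.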

Step Four: Web Application Development

DMC’s full-stack development team builds custom web applications with intuitive interfaces designed for usability and stable back ends designed for scalability. 

Consider the following during this step:

  • Define the UI/UX experience
  • How are you going to onboard new users?
  • How are you going to onboard new devices?
  • How are you going to manage devices?
  • How are users going to view data?
  • What access restrictions should apply? (user levels)
  • What types of notification and alerts are required?
  • When should devices be alerted to changes?
  • What visualizations for data or information are required?
  • What type of reporting is required? How are users notified of reports?
  • Is a native Mobile App also required?
  • Is a generic API (accessible by third parties) required?
  • Define support plan for end-users of the application
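The “access restrictions (user levels)” item above often comes down to a simple role check. The role names and permissions here are illustrative assumptions, not a prescribed scheme.

```python
from enum import IntEnum

class Role(IntEnum):
    """Illustrative user levels, ordered so higher roles inherit lower access."""
    VIEWER = 1    # view data and reports
    OPERATOR = 2  # acknowledge alerts, adjust device configuration
    ADMIN = 3     # onboard users and devices

def can_perform(user_role: Role, required: Role) -> bool:
    # A user may perform any action at or below their own level.
    return user_role >= required

assert can_perform(Role.ADMIN, Role.VIEWER)
assert not can_perform(Role.VIEWER, Role.OPERATOR)
```

Hierarchical levels like these keep the check trivial; systems with non-nested permissions would use per-permission sets instead.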

Step Five: Go Live and Maintenance

  • Are you using continuous integration tools across development, staging, and production environments?
  • Do you have planned downtime for production-level updates?
  • Are database migrations required? Data integrity checks?
  • Are service health monitors deployed and active?
  • Are support and service avenues (email/phone) active and being monitored?

Industry Credentials

DMC holds several key industry credentials with leading technology providers.

Our Work

Ready to take your Automation project to the next level? Contact us today to learn more about our solutions and how we can help you achieve your goals. 

Azure AI: Natural Language Processing Solutions https://www.dmcinfo.com/blog/37390/azure-ai-natural-language-processing-solutions/ Wed, 30 Jul 2025 19:43:02 +0000 https://www.dmcinfo.com/?p=37390 Source: Develop natural language solutions in Azure – Training | Microsoft Learn Natural Language Processing (NLP) refers to artificial intelligence that can see, hear, speak with, and understand users. Models with NLP can be applied to input such as text, audio, and real-time speech. With Azure AI’s Language service, we can enable applications to understand […]

The post Azure AI: Natural Language Processing Solutions appeared first on DMC, Inc..

Source: Develop natural language solutions in Azure – Training | Microsoft Learn

Natural Language Processing (NLP) refers to artificial intelligence that can see, hear, speak with, and understand users. Models with NLP can be applied to input such as text, audio, and real-time speech. With Azure AI’s Language service, we can enable applications to understand us in dynamic ways.

At DMC, we pride ourselves on our vast skill set. AI solutions are just a small part of our application development services. From IoT solutions to custom software from scratch, we are more than ready to tackle any project that comes our way. In this post, we’ll explore the fundamentals of Azure AI’s NLP, dive into key features like text analysis, question answering, and conversational language understanding, and showcase how DMC can craft tailored NLP solutions to meet your business needs.

What is Natural Language Processing?

Natural Language Processing empowers computers to understand, interpret, and generate human language in a way that’s both intuitive and contextually accurate. Azure AI’s NLP suite enables businesses to process vast amounts of text, extract actionable insights, and create intelligent applications that enhance customer interactions and streamline operations.

Building robust NLP models from scratch is complex, requiring extensive training and computational resources. Azure AI simplifies this process with pre-trained models that deliver high performance out of the box. At DMC, we customize these models to fit your specific requirements, making it easy to integrate NLP into your applications. By provisioning an Azure AI Language resource in your Azure Subscription, you can manage and fine-tune your NLP solution effortlessly through the Azure Portal.

What Can Azure AI’s Language Service Do?

Azure AI’s Language Service offers a range of powerful capabilities, including:

  1. Text Analysis: Extracting sentiment, key phrases, entities, and language from text.
  2. Question Answering: Providing precise answers to user queries based on documents or knowledge bases.
  3. Conversational Language Understanding: Interpreting user intents and entities for intelligent chatbots.

Let’s explore these three core NLP capabilities in detail:

1. Text Analysis

Text analysis enables systems to extract meaningful insights from unstructured text, such as customer feedback, social media posts, or documents.

How Text Analysis Works

Azure’s Text Analytics API processes text inputs using advanced machine learning models to perform tasks like:

  • Sentiment Analysis: Determining whether text expresses positive, negative, or neutral sentiment.
  • Key Phrase Extraction: Identifying the main ideas or topics in a text.
  • Entity Recognition: Detecting people, places, organizations, or other entities.
  • Language Detection: Identifying the language of the text, supporting multiple languages.

The API handles complex scenarios, such as noisy or ambiguous text, and delivers structured JSON outputs for easy integration into applications.
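The structured JSON described above can be consumed with ordinary parsing. The response shape below is a simplified assumption modeled on sentiment-analysis output, not the exact API contract.

```python
import json

# Simplified, assumed shape of a sentiment-analysis response.
raw = json.dumps({
    "documents": [
        {"id": "1", "sentiment": "positive",
         "confidenceScores": {"positive": 0.93, "neutral": 0.05, "negative": 0.02}},
        {"id": "2", "sentiment": "negative",
         "confidenceScores": {"positive": 0.04, "neutral": 0.11, "negative": 0.85}},
    ]
})

def summarize(payload: str) -> dict:
    """Count documents per sentiment label for a quick customer-insight rollup."""
    counts: dict = {}
    for doc in json.loads(payload)["documents"]:
        counts[doc["sentiment"]] = counts.get(doc["sentiment"], 0) + 1
    return counts

summary = summarize(raw)
```

A rollup like this is the kind of aggregate that feeds the customer-insight and market-intelligence use cases below.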

Use Cases for Text Analysis

  • Customer Insights: Analyzing reviews to understand customer sentiment and identify product issues.
  • Content Moderation: Filtering inappropriate comments on social platforms.
  • Market Intelligence: Extracting trends from survey responses to guide business strategy.

With AI Text Analysis, we can use text analysis to build solutions that transform raw text into actionable insights that help businesses get the most out of their knowledge base.

2. Question Answering

Question answering allows systems to provide accurate, context-aware responses to user queries based on documents or custom knowledge bases.

How Question Answering Works

Azure’s question answering capabilities involve using deep learning models to comprehend questions and extract answers from structured or unstructured data. Key steps include:

  • Context Understanding: Analyzing the query and source content to identify relevant information.
  • Answer Retrieval: Extracting precise answers, often with confidence scores.
  • Knowledge Base Integration: Supporting custom datasets, such as company manuals or FAQs.

In short, a question answering system involves integrating with existing AI models, such as GPT-4o, to create a more “human-like” interface with your knowledge base. Later, we will explore a more in-depth example of a system built by DMC that does this.
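The confidence scores mentioned above typically drive a threshold check: return the best candidate answer only when the model is sure enough, otherwise fall back. The threshold value and candidate structure below are illustrative assumptions.

```python
FALLBACK = "Sorry, I couldn't find a confident answer."

def pick_answer(candidates: list[tuple[str, float]], threshold: float = 0.5) -> str:
    """Return the highest-confidence answer, or a fallback below the threshold."""
    if not candidates:
        return FALLBACK
    answer, score = max(candidates, key=lambda c: c[1])
    return answer if score >= threshold else FALLBACK

best = pick_answer([("Reset via the portal.", 0.81), ("See page 4.", 0.40)])
unsure = pick_answer([("Maybe restart?", 0.22)])
```

Tuning the threshold trades answer coverage against wrong-answer risk, which is usually a business decision rather than a technical one.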

Use Cases for Question Answering

  • Customer Support: Chatbots answer FAQs, reducing support ticket volumes.
  • Employee Productivity: Staff can access instant answers from internal documentation.
  • Education: E-learning platforms provide students with responses to their questions based on course materials.

DMC excels at building question-answering solutions that enhance efficiency and user experience, customized to your organization’s unique data and workflows.

3. Conversational Language Understanding

Conversational Language Understanding (CLU) powers intelligent chatbots and virtual assistants by interpreting user intents and extracting entities from natural language inputs.

How Conversational Language Understanding Works

Azure’s CLU service analyzes conversational inputs to:

  • Intent Classification: Identify the user’s goal, such as booking a service or requesting information.
  • Entity Extraction: Recognize specific details, like dates, locations, or product names.
  • Multi-Turn Conversations: Maintain context across multiple exchanges for seamless interactions.
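Once intent classification and entity extraction have run, the application usually dispatches on the intent. The intent names, entities, and handlers below are illustrative assumptions, not CLU output formats.

```python
def book_service(entities: dict) -> str:
    return f"Booking service on {entities.get('date', 'an unspecified date')}."

def track_order(entities: dict) -> str:
    return f"Order {entities.get('order_id', '?')} is in transit."

# Map each classified intent to its handler.
HANDLERS = {"BookService": book_service, "TrackOrder": track_order}

def dispatch(intent: str, entities: dict) -> str:
    """Route a classified utterance to its handler; unknown intents get a prompt."""
    handler = HANDLERS.get(intent)
    return handler(entities) if handler else "Could you rephrase that?"

reply = dispatch("TrackOrder", {"order_id": "A-1042"})
```

The unknown-intent branch is where a bot asks for clarification, which is what keeps multi-turn conversations from dead-ending.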

Use Cases for Conversational Language Understanding

  • Retail: Chatbots assist with product searches, order tracking, or returns.
  • Healthcare: Patients schedule appointments or access medical advice via conversational bots.
  • Smart Devices: Voice assistants process commands to control IoT devices.

DMC’s knowledge of CLU allows us to create conversational AI solutions that feel natural and intuitive, driving more impactful engagement with users in contrast to traditional solutions.

Real-World Solution: DMC AI Chatbot

To demonstrate the power of Azure AI NLP, let’s walk through an AI Chatbot solution that was implemented in-house here at DMC.

DMC AI Chatbot

The Challenge

DMC has a large and complex knowledge base that has been accumulated over our 20+ years of experience. We face two problems with this:

  • Finding answers in this knowledge base can take time for those unfamiliar with its structure.
  • A common practice is to reach out to large groups in Slack to ask common questions, which results in:
    • Downtime between the question being asked and it being answered.
    • Pulling employees from higher priority tasks to answer previously answered questions.

The Solution

DMC was able to propose and execute a solution that includes:

  • Knowledge Base Integration: Using Azure resources, we were able to take our existing knowledge base and transform it into a structure that is retrievable by our chatbot.
  • Custom AI Chatbot: Azure can provide a pre-built AI model that we were able to customize to our exact needs by adjusting prompts, choosing its personality, and providing it with DMC-specific terms for better comprehension. The chatbot is able to take a user’s question, answer it with the existing documentation, and even provide the direct source that it retrieved the information from!
  • Slack Integration: Our chatbot is accessible in Slack, our primary workplace communication platform, making it easy to use for our existing employees, with little to no guidance on getting started.

Implementation

To build this solution, we utilized our Azure subscription to provision the necessary resources and integrate them with available APIs to fully integrate them into our workplace. This required us to do the following:

  • Instantiate an Azure Search Service: For our chatbot to become an expert on DMC, we had to provision an Azure Search Service. This service takes our existing knowledge base in Confluence and transforms it into a searchable format by indexing every piece of information available. This required calling the API of our Confluence instance.
  • Provide Chatbot with the Knowledge Base: Hooking up our chatbot to our search service allows it to submit keyword-based queries and choose results that best answer a question asked by a DMC employee.
  • Customizing the Chatbot: To limit unpredictable behavior, we needed to clearly define the process for handling unclear questions and multiple questions in one prompt. We also provided the chatbot with definitions for common acronyms and terms used exclusively at DMC to enhance its comprehension skills.
  • Integrate with Existing Resources: DMC employees use Slack daily, so it was essential that we integrate our new chatbot with it. This makes the chatbot easily accessible to our employees.
  • Keep the Chatbot Informed: Our knowledge and documentation are constantly expanding, which means we must keep our chatbot up to date. This meant that we needed to implement a function that is run once per week to help our chatbot keep up!
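The weekly re-index above amounts to a scheduled freshness check. This sketch uses only the standard library; the seven-day interval is the one detail taken from the description, everything else is illustrative.

```python
from datetime import datetime, timedelta

REFRESH_INTERVAL = timedelta(days=7)

def needs_reindex(last_indexed: datetime, now: datetime) -> bool:
    """True when the knowledge-base index is older than the refresh interval."""
    return now - last_indexed >= REFRESH_INTERVAL

now = datetime(2025, 7, 30)
assert needs_reindex(datetime(2025, 7, 20), now)      # 10 days old -> refresh
assert not needs_reindex(datetime(2025, 7, 27), now)  # 3 days old -> still fresh
```

In production this check would run inside whatever scheduler hosts the indexing job (a timer-triggered function, a cron job, etc.).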

Benefits

This solution helped solve the issues shown earlier:

  • Quick Knowledge Retrieval: With the new chatbot, users can ask questions and get instant feedback, without having to search Confluence manually.
  • Reduced Downtime: With instant feedback, employees experience less downtime before getting answers.
  • Manage Employee Priorities: Employees can spend less time asking and answering previously answered questions, which translates into more time available for higher-priority tasks.

Ethical Considerations

Deploying NLP solutions requires responsible practices to address privacy, bias, and transparency. At DMC, we ensure ethical AI use by considering the following:

  • Bias Mitigation: Regularly auditing models to prevent unfair treatment across diverse customer groups.
  • Data Privacy: Implementing end-to-end encryption and anonymizing data in compliance with GDPR and other regulations.
  • Transparency: Clearly informing customers when they interact with AI chatbots.
  • Human Oversight: Incorporating human-in-the-loop review for complex queries or sensitive interactions.

As a Microsoft partner, DMC adheres to Microsoft’s responsible AI guidelines, ensuring our NLP solutions are ethical and trustworthy. More details can be found in Microsoft’s responsible AI documentation.

Why DMC?

At DMC, we blend our technical expertise with a deep understanding of your business goals. Our experience with Azure AI’s NLP enables us to deliver customized solutions that drive efficiency, enhance customer experiences, and unlock new opportunities. From retail to healthcare, we’re equipped to tackle complex challenges across industries.

Let’s Recap

Azure AI’s NLP capabilities—text analysis, question answering, and conversational language understanding—offer endless possibilities for innovation. At DMC, we’re focused on harnessing these tools to build tailored solutions that solve your unique challenges.

Ready to transform your business with Azure AI’s NLP? Contact us today to start your journey with DMC!

Azure AI: Computer Vision Solutions https://www.dmcinfo.com/blog/36209/azure-ai-computer-vision-solutions/ Tue, 22 Jul 2025 20:20:01 +0000 https://www.dmcinfo.com/?p=36209 Computer vision refers to artificial intelligence systems that can perceive the world visually. These systems can be applied to camera input, images, or video. There are several problems that can be solved with the help of Azure’s AI Vision Service. With both facial recognition and optical character recognition the possibilities are endless!  At DMC, we […]

The post Azure AI: Computer Vision Solutions appeared first on DMC, Inc..

Computer vision refers to artificial intelligence systems that can perceive the world visually. These systems can be applied to camera input, images, or video. There are several problems that can be solved with the help of Azure’s AI Vision Service. With capabilities like facial recognition and optical character recognition, the possibilities are endless!

At DMC, we pride ourselves on our technical skillset. AI solutions are just a part of our application development services. From IoT solutions to custom software from scratch, we are ready to tackle any project that comes our way. In this post, we will introduce the fundamentals of Azure AI Vision, facial recognition, and optical character recognition. We will also talk about what an AI Vision solution might look like from DMC. 

Azure AI Vision Diagram

What is Azure AI Vision? 

Creating solutions that can “see” the world and interpret it is a core area of artificial intelligence. Computers don’t exactly have eyes, but they are capable of processing information from videos, images, or live camera feeds.

The architecture for computer vision models is complex and requires significant amounts of training and resources to perform at a high level, i.e., to identify objects correctly and with confidence. With Azure AI Vision, we can create complex models much more quickly and easily than producing one from scratch. While these existing models can be functional out of the box, we can build on top of them to create custom models that fit your exact needs!

These models are easy to use: provisioning an Azure AI Vision resource in an Azure subscription allows us to use the Azure Portal to easily manage and modify your AI solution.

What Can Azure AI Vision Do? 

Azure AI Vision offers several image analysis capabilities which include: 

  • Extracting text from images using Optical Character Recognition (OCR) 
  • Generating captions and descriptions of images 
  • Common object detection 
  • Tagging visual features 
  • Detecting and recognizing faces 

Let’s dive deeper into these capabilities, starting with the ones that fall under facial recognition. 

Azure AI facial recognition diagram

Facial Recognition 

Facial recognition is one of the most powerful capabilities of Azure AI Vision, enabling systems to detect, analyze, and identify human faces in images or video feeds. This technology has a wide range of applications, from enhancing security systems to personalizing user experiences. Azure AI Vision provides robust facial recognition tools that are both accurate and easy to integrate into custom solutions. 

How Facial Recognition Works 

Azure’s facial recognition capabilities rely on sophisticated machine learning models that analyze facial features in images or video frames. These models detect faces by identifying key landmarks, such as the eyes, nose, and mouth, and then generate a unique facial signature based on these features. The process involves several key steps that ascend in complexity: 

  • Face Detection: Identifies the presence of faces in an image or video and determines their locations within the input
  • Facial Attribute Analysis: Extracts attributes like age, gender, facial hair, or even emotional expressions (e.g., happy, sad, neutral)
  • Face Identification: Matches detected faces against a database of known faces to identify individuals to a known group
  • Face Verification: Confirms whether two faces belong to the same person by comparing their facial signatures

Azure AI Vision’s facial recognition APIs make it simple to integrate these capabilities into applications. The service also supports real-time analysis, making it ideal for applications like live surveillance or interactive kiosks. 
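Face verification as described above typically reduces to comparing two facial signatures (embedding vectors) for similarity. The cosine-similarity metric and 0.8 threshold here are illustrative assumptions, not Azure's internal implementation.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity of two facial-signature vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def same_person(sig1: list[float], sig2: list[float],
                threshold: float = 0.8) -> bool:
    # Verification: do two signatures likely belong to the same face?
    return cosine_similarity(sig1, sig2) >= threshold

match = same_person([0.9, 0.1, 0.4], [0.88, 0.12, 0.41])   # near-identical vectors
differ = same_person([0.9, 0.1, 0.4], [0.1, 0.9, 0.2])     # very different vectors
```

Real signatures have hundreds of dimensions, but the comparison logic, and the accuracy-versus-false-match tradeoff in the threshold, look just like this.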

Use Cases for Facial Recognition

Facial recognition opens a variety of possibilities for businesses and organizations. Some potential use cases include: 

  • Security and Access Control: Implementing facial recognition for secure building access or device unlocking, replacing traditional keycards or passwords
  • Retail and Marketing: Analyzing customer demographics and emotions in stores to tailor marketing campaigns or improve customer experience
  • Event Management: Streamlining check-in processes at events by identifying attendees through facial recognition

At DMC, we are confident in our ability to leverage Azure’s facial recognition capabilities to build tailored solutions for clients across industries. Whether it’s enhancing security protocols or creating personalized customer interactions, our team is equipped to fine-tune these models to meet specific requirements while ensuring high accuracy and reliability. 

Azure OCR diagram

Optical Character Recognition (OCR) 

Optical Character Recognition (OCR) is another cornerstone of Azure AI Vision, enabling systems to extract text from images, scanned documents, or video feeds. This capability is designed for digitizing physical documents, automating data entry, and making content searchable. Azure’s OCR technology is highly accurate and supports a wide range of languages and formats, making it a versatile tool for our clients. 

How OCR Works 

Azure’s OCR functionality, powered by the Read API, processes images or PDFs and converts text within them into machine-readable data. The process involves: 

  • Text Detection: Identifying regions in an image that contain text, regardless of orientation or font style
  • Text Recognition: Converting detected text into digital characters, preserving formatting where possible
  • Language Support: Recognizing text in multiple languages, including handwritten and printed text
  • Layout Analysis: Understanding the structure of the document, such as paragraphs, tables, or lists, to maintain context 

The Read API is designed to handle complex scenarios, such as noisy images, low-resolution scans, or text in mixed languages. Developers can easily integrate OCR into applications by calling the API and receiving structured JSON output containing the extracted text and its coordinates. 
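The JSON below is abbreviated and illustrative rather than the exact Read API schema (the real response also carries angles, word-level results, and confidence scores), but consuming the structured output typically means walking pages and lines like this:

```python
import json

# Abbreviated, invented response in the spirit of the Read API's output;
# consult the official reference for the exact schema.
sample_response = json.loads("""
{
  "readResult": {
    "pages": [
      {
        "lines": [
          {"text": "INVOICE #1042", "boundingBox": [10, 10, 200, 10, 200, 40, 10, 40]},
          {"text": "Total: $350.00", "boundingBox": [10, 60, 180, 60, 180, 90, 10, 90]}
        ]
      }
    ]
  }
}
""")

def extract_lines(response):
    """Flatten all recognized text lines, keeping their coordinates."""
    lines = []
    for page in response["readResult"]["pages"]:
        for line in page["lines"]:
            lines.append((line["text"], line["boundingBox"]))
    return lines

for text, box in extract_lines(sample_response):
    print(text, box[:2])
```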

Advanced OCR Features 

Azure AI Vision’s OCR goes beyond basic text extraction. Some advanced features include: 

  • Handwritten Text Recognition: Extracting text from handwritten notes or forms, which is particularly useful for industries like healthcare or education
  • Form Recognition: Automatically extracting key-value pairs from structured documents like invoices, receipts, or IDs 
  • Multi-Page Document Processing: Handling large documents, such as contracts or reports, with consistent accuracy across pages
  • Real-Time OCR: Processing live camera feeds to extract text instantly, ideal for applications like license plate recognition

Use Cases for OCR 

OCR is a game-changer for organizations looking to streamline operations and reduce manual work. Some practical applications include: 

  • Document Digitization: Converting paper-based records, such as medical charts or legal contracts, into searchable digital formats
  • Automated Data Entry: Extracting information from invoices, receipts, or forms to populate databases without human intervention
  • Accessibility: Enabling text-to-speech systems to read printed text aloud for visually impaired users
  • Logistics and Transportation: Reading shipping labels or license plates in real time to improve supply chain efficiency

Real-World Solution: Enhancing Security with Azure AI Vision

To illustrate the power of Azure AI Vision, let’s explore a high-level example of how DMC could leverage this technology to solve a security problem for a client. 

The Challenge

A large corporate campus with multiple buildings wants to enhance its security measures and streamline access control. The client faces two key issues: 

  1. Unauthorized Access: The current keycard-based system is vulnerable to lost or stolen cards, allowing potential unauthorized entry to sensitive areas. 
  2. Incident Reporting: Manual logging of security incidents, such as identifying individuals in surveillance footage, is time-consuming and prone to errors. 

The Solution

DMC proposes a comprehensive Azure AI Vision solution that combines facial recognition, OCR, and object detection to address these security challenges. The solution includes: 

Secure Access Control: Using Azure’s Face API, the system verifies employee identities at entry points by matching their faces against a secure database of authorized personnel. The system operates in real time, granting or denying access within seconds. 
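Conceptually, that entry-point check is a one-to-many match: compare the captured face’s signature against every enrolled signature and grant access only when the best match clears a confidence threshold. The sketch below illustrates the idea; the enrolled vectors, employee IDs, and threshold are invented, and in practice the Face API performs this matching as a managed service:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def identify(captured, enrolled, threshold=0.85):
    """Return the best-matching employee ID, or None to deny access."""
    best_id, best_score = None, threshold
    for employee_id, signature in enrolled.items():
        score = cosine(captured, signature)
        if score >= best_score:
            best_id, best_score = employee_id, score
    return best_id

# Hypothetical enrolled signatures (real ones are high-dimensional).
enrolled = {
    "emp-001": [0.9, 0.1, 0.2],
    "emp-002": [0.1, 0.8, 0.5],
}
print(identify([0.88, 0.12, 0.21], enrolled))  # close to emp-001
print(identify([0.0, 0.0, 1.0], enrolled))     # no confident match -> None
```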

Automated Incident Logging: Azure’s OCR capabilities are integrated into a security management platform that processes surveillance footage and incident reports. The system extracts text from identification documents or name badges captured in footage, cross-referencing with employee records to log incidents accurately. Object detection identifies potential security threats, such as unrecognized items in restricted areas. 

Security Analytics Dashboard: A centralized dashboard provides real-time insights into access logs and incident reports, powered by Azure’s facial recognition and OCR data. Security teams can monitor entry patterns, flag suspicious activities, and generate reports for compliance audits. 

Implementation

To build this solution, DMC provisions an Azure AI Vision resource within the client’s Azure Subscription. The development process includes: 

  • Custom Model Training: Fine-tuning Azure’s facial recognition models with a dataset of employee images (with consent) to ensure high accuracy in identification under various lighting and angle conditions
  • OCR Integration: Configuring the Read API to extract text from identification documents, badges, or signage in surveillance footage, supporting multiple formats and orientations
  • Object Detection: Training a custom object detection model to recognize specific items (e.g., bags, devices) that may pose security risks
  • Security Platform Development: Building a web or desktop platform that integrates the Face API, Read API, and object detection models, with a user-friendly interface for security personnel
  • Security and Compliance: Implementing end-to-end encryption for all data, including facial signatures and extracted text, and ensuring compliance with privacy regulations according to state and federal guidelines. Human-in-the-loop oversight is incorporated, allowing security staff to review and intervene in real-time decisions.

Benefits 

The solution delivers measurable results for the corporate client: 

  • Enhanced Security: Facial recognition ensures only authorized personnel access restricted areas, minimizing risks from lost or stolen keycards. 
  • Operational Efficiency: Automated incident logging reduces manual work and improves the accuracy of security reports. 
  • Proactive Monitoring: Real-time analytics enable security teams to detect and respond to potential threats quickly. 
  • Scalability: The Azure-based solution can be expanded to additional campus locations or integrated with other security systems. 

Ethical Considerations 

While facial recognition and image analysis are powerful tools, it’s important to address ethical considerations. Privacy, consent, and data security are critical when deploying these solutions. Azure AI Vision adheres to strict compliance standards, and at DMC, we prioritize responsible AI practices, ensuring that facial recognition systems are implemented transparently and with user consent. As a registered partner with Microsoft, we are committed to deploying both facial recognition and OCR solutions in accordance with Microsoft’s general guidelines.

We ensure responsible use and integration of visual AI solutions by: 

  • Thoroughly assessing Azure AI Vision’s capabilities to ensure they align with client needs, while performing sufficient testing to understand its capabilities and limitations
  • Respecting individuals’ privacy by collecting data only with explicit consent and using it solely for authorized purposes
  • Incorporating human-in-the-loop oversight to enable real-time intervention, maintaining human decision-making to prevent harm
  • Prioritizing security through robust controls to protect data integrity and prevent unauthorized access

More information on Microsoft’s guidelines on responsible use of AI Image Analysis can be found in their documentation provided here

Why DMC?

At DMC, we combine technical expertise with a deep understanding of our clients’ business needs. Our experience with Azure AI Vision allows us to deliver tailored solutions that drive real value. Whether it’s enhancing customer experiences, automating processes, or unlocking new insights, we’re ready to help our clients succeed. 

Let’s Recap

Azure AI Vision is a transformative technology that empowers businesses to “see” and understand the world in new ways. With capabilities like facial recognition and OCR, it opens endless possibilities for innovation. At DMC, we’re passionate about harnessing these tools to solve complex challenges and deliver measurable results. From automation to agriculture, our team is equipped to build custom AI solutions across industries that meet your unique needs.

Ready to explore the potential of Azure AI Vision? Contact us today to start a partnership with us! 

The post Azure AI: Computer Vision Solutions appeared first on DMC, Inc..

]]>
Azure AI: Knowledge Mining Solutions https://www.dmcinfo.com/blog/36190/azure-ai-knowledge-mining-solutions/ Thu, 03 Jul 2025 08:00:00 +0000 https://www.dmcinfo.com/?p=36190 Knowledge mining is a field of artificial intelligence that pertains to extracting key insights from unorganized data. Like data mining, which finds patterns and correlations, knowledge mining takes things to the next level by also contextualizing knowledge across a wide range of data formats. Azure AI Search allows us to do this on existing knowledge […]

The post Azure AI: Knowledge Mining Solutions appeared first on DMC, Inc..

]]>
Knowledge mining is a field of artificial intelligence that pertains to extracting key insights from unorganized data. Like data mining, which finds patterns and correlations, knowledge mining takes things to the next level by also contextualizing knowledge across a wide range of data formats. Azure AI Search allows us to do this on existing knowledge bases to help organize and enhance your business.  

At DMC, we pride ourselves on our vast technology skillset. AI solutions are just a part of our application development services. From IoT solutions to custom software from scratch, we are ready to tackle any project that comes our way. In this post, we will introduce the fundamentals of Azure AI Search, AI Enrichment, and Multimodal Search. We will also talk about what an AI Search solution might look like from DMC. 

knowledge mining illustration

What is Azure AI Search? 

Azure AI Search is a Microsoft-owned and managed, cloud-based service that enables applications to have enterprise-grade information retrieval. The service has the traditional characteristics of a search service, such as indexing, keyword search, and knowledge stores. However, Azure AI Search can enhance your search capabilities using AI Enrichment, Multimodal Search, and more! All these features live in the Azure ecosystem, making it easy to scale and apply a secure solution that fits your needs. 

What Can Azure AI Search Do? 

Azure AI Search offers several standard features of a search service, with the addition of AI capabilities. In this article, we will take a deeper look at AI Enrichment and Multimodal Search. Other features include: 

  • Indexing 
  • Vector and hybrid search 
  • Full-text search 
  • Full Lucene query syntax search 
  • Relevance scoring 
  • Semantic ranking 
  • Knowledge stores 

AI Enrichment

AI Enrichment in the Azure AI Search service takes raw, unstructured data and transforms it into structured, searchable content using other Azure AI Services such as Computer Vision and Language Processing. What this means is that we can take inputs, such as PDFs and text files, and extract meaningful information from them so they can be accurately retrieved in response to a search query. 

How AI Enrichment Works

Diagram of how AI enrichment works

The process for AI Enrichment can be broken down into three phases: 

  1. Data Importing 
  2. Enrichment and Indexing 
  3. Output Exploration

Data Importing is the step where an indexer connects a data source with unstructured documents and pulls them into the search service. 

Enrichment and Indexing is the largest and most complex step of the process. Enrichment starts with the indexer opening files and extracting key data from them, such as dates, keywords, or any other customizable entities. During this process, an “enriched” version of the document is created. This enriched document can either be temporary or stored for future reuse. The indexer then applies field mappings, which are paths between the source data and search index. We also create a path between the enriched data and search index. 
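As a rough illustration of those two ideas (enrichment and field mappings), the sketch below uses simple regular expressions in place of the Azure AI skills that do the real extraction; the document shape and the mapping table are invented for the example:

```python
import re

def enrich(document):
    """Attach extracted entities (here: ISO dates and capitalized keywords) to a document."""
    text = document["content"]
    document["enriched"] = {
        "dates": re.findall(r"\b\d{4}-\d{2}-\d{2}\b", text),
        "keywords": sorted(set(re.findall(r"\b[A-Z][a-z]{3,}\b", text))),
    }
    return document

# Field mappings: index field name -> (section, key) path in the enriched document.
FIELD_MAPPINGS = {
    "id": ("source", "doc_id"),
    "dates": ("enriched", "dates"),
    "keywords": ("enriched", "keywords"),
}

def to_index_entry(document):
    """Apply the field mappings to produce the search-index record."""
    flat = {"source": document, "enriched": document["enriched"]}
    return {field: flat[section][key] for field, (section, key) in FIELD_MAPPINGS.items()}

doc = enrich({"doc_id": "rpt-7", "content": "Quarterly report filed 2024-03-31 by Contoso."})
print(to_index_entry(doc))
```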

In the final step, Output Exploration, we start to see the end goal of setting up this service: navigating a previously unsearchable data source. This can appear as a simple search bar, or the enriched data can be passed to a chatbot that dynamically retrieves it and presents it in a user-friendly, conversational format.  

Use Cases for AI Enrichment

  • Knowledge Management: Extract entities and key phrases from internal documents to support advanced search
  • Customer Support: Enrich support tickets with metadata to improve query routing
  • Research Analysis: Process academic papers or reports to extract insights for searchable archives

Multimodal Search

Multimodal search is the ability to ingest, understand, and retrieve information across multiple content types, including text, images, video, and audio. This enables more diverse search methods, such as similarity search and hybrid queries, which we will talk about more in this section.  

How Multimodal Search Works

Multimodal Search uses AI to vectorize and index non-text content, enabling similarity search and hybrid queries. The process involves the following steps: 

  1. Content Ingestion 
  2. Vectorization 
  3. Index Storage 
  4. Query Processing 

Content Ingestion is where we extract data from text, images, PDFs, and more via indexers.  

Vectorization uses AI models to convert unstructured content into vectors. Vectors are used to capture semantic or visual features of data. They are essential in getting accurate return results when querying for non-text elements, such as visual features. 

The process of storing the vectors and their associated metadata for fast retrieval is the Index Storage step. 

Finally, Query Processing is where a user’s search input is matched against relevant vectors so that semantically similar results can be returned.  
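The four steps above can be sketched as a toy in-memory vector index; the hand-made three-dimensional “embeddings” below stand in for the model-generated vectors a real deployment would use:

```python
import math

class TinyVectorIndex:
    def __init__(self):
        self.entries = []  # (vector, metadata) pairs

    def add(self, vector, metadata):
        """Index Storage: keep the vector alongside its metadata."""
        self.entries.append((vector, metadata))

    @staticmethod
    def _cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

    def query(self, vector, top_k=2):
        """Query Processing: rank entries by similarity to the query vector."""
        ranked = sorted(self.entries, key=lambda e: self._cos(vector, e[0]), reverse=True)
        return [meta for _, meta in ranked[:top_k]]

index = TinyVectorIndex()
index.add([0.9, 0.1, 0.0], {"title": "Red dress product photo"})
index.add([0.8, 0.2, 0.1], {"title": "Crimson gown catalog page"})
index.add([0.0, 0.1, 0.9], {"title": "Shipping policy PDF"})

# A query vector near the "red dress" region returns the visually similar items first.
print(index.query([0.85, 0.15, 0.05]))
```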

So, what is the end goal of multimodal search? While the overall goal is to expand your organization’s knowledge sharing and storage capabilities, the two primary added features are similarity searches and hybrid queries.

Similarity Search

This technique leverages vector embeddings to find content that is conceptually or visually similar to a query, rather than relying solely on exact keyword matches. For example, a user uploading an image of a red dress can retrieve similar dresses based on visual features like color and style, even if the textual descriptions differ. Similarity search enables semantic and visual matching across diverse data types. 

Hybrid Queries

Multimodal Search supports hybrid queries that combine similarity search with traditional keyword search for more comprehensive results. By merging results using Reciprocal Rank Fusion, Azure AI Search ensures that both semantic relevance and exact matches are considered. For instance, a query like “blue sneakers” can retrieve results based on both the text description and vectorized images of sneakers, providing a balanced and highly relevant output. 
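Reciprocal Rank Fusion itself is simple enough to show directly: each document scores 1/(k + rank) in every result list that contains it, and the summed scores decide the merged order. The k = 60 constant is the commonly cited default in the RRF literature; Azure’s exact fusion internals may differ, and the document IDs below are invented:

```python
def reciprocal_rank_fusion(ranked_lists, k=60):
    """Merge several ranked result lists into one, RRF-style."""
    scores = {}
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_results = ["doc-blue-sneakers", "doc-blue-jacket", "doc-running-tips"]
vector_results = ["doc-navy-trainers", "doc-blue-sneakers", "doc-blue-jacket"]

# "doc-blue-sneakers" ranks high in both lists, so fusion puts it first.
print(reciprocal_rank_fusion([keyword_results, vector_results]))
```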

With these two query processing features, an application can answer questions like “What is the process to approve a purchase order?” even when the only description of the process lives inside an embedded diagram in a PDF file. 

diagram of query processing

Use Cases for Multimodal Search 

  • Conversational AI: Enable chatbots to answer queries using text, images, or PDFs from a knowledge base 
  • E-Commerce: Allow users to upload product images to retrieve descriptions or reviews
  • Healthcare: Query medical images and patient records for diagnostics
  • Media Analysis: Search video or audio archives alongside text for comprehensive insights

Real-World Solution: Intelligent Search for a Consulting Firm

To demonstrate the power of Azure AI Search, let’s explore a high-level example of how DMC could leverage this technology to solve a knowledge organization problem for a client. 

The Challenge

A multinational consulting firm needs a search solution to help consultants quickly access industry trends, regulations, and internal best practices. Their knowledge base includes thousands of PDFs, images, and internal reports across multiple languages, but traditional search tools deliver irrelevant results and struggle with non-text data. This leads to countless hours spent manually searching documents, impacting productivity and client response times. 

The Solution

DMC proposes that the firm implement Azure AI Search with AI Enrichment and Multimodal Search to create an intelligent search platform. Enriching and vectorizing the knowledge base enables consultants to retrieve accurate, contextually relevant content using text or image queries, streamlining research and improving client outcomes. This solution would include: 

Complete Knowledge Base Ingestion: Azure’s pre-built indexers vectorize and index the pre-existing knowledge base to make it easier to navigate. 

Enhanced Data Retrieval: With AI Enrichment and Multimodal Search, the search service can quickly and accurately retrieve information based on a query. 

Implementation

To build this solution, DMC provisions an Azure AI Search resource within the client’s Azure subscription. The development process includes: 

  • Data Ingestion 
    • Store PDFs, images, and reports in Azure Blob Storage, ingested via Azure AI Search indexers
  • AI Enrichment Pipeline 
    • Apply Optical Character Recognition (OCR) to extract text from scanned PDFs and images (e.g., charts, infographics) 
    • Use entity recognition to tag regulations, companies, and dates in documents
    • Employ translation skills to index multilingual content for global teams
  • Multimodal Search Setup 
    • Vectorize images and text using Azure OpenAI and Computer Vision models
    • Configure hybrid search to combine keyword and vector queries for comprehensive retrieval
  • Deployment 
    • Integrate the search solution into the firm’s internal portal using Azure SDKs, supporting text and image-based queries

Benefits

  • Improved Relevance: Semantic ranking and vector search increase result accuracy
  • Time Savings: Reduced research time by accessing insights via intuitive searches 
  • Multimodal Flexibility: Image-based queries (e.g., uploading a chart) can now retrieve related documents, enhancing usability
  • Global Accessibility: Multilingual indexing supports seamless search for international teams
  • Scalability: Using Azure as the foundation, the service can scale easily with the firm’s needs and goals

Responsible AI

AI Enrichment and Multimodal Search rely on Azure AI models (e.g., Computer Vision, Language Service, Azure OpenAI) that may process personal or sensitive information.  

To uphold privacy and fairness, it is essential to consider the following: 

  • Ensure input data complies with applicable privacy laws and regulations. Avoid processing sensitive personal data without explicit consent or legal basis. 
  • AI models may inadvertently reflect biases in training data, potentially affecting entity recognition or image analysis. Regularly evaluate outputs for fairness and adjust configurations (e.g., custom skills or filters) to minimize biased results. 
  • Clearly communicate to users when AI-generated outputs, such as extracted entities or vectorized content, are used in search results, ensuring they understand the role of AI. 

More information on Microsoft’s guidelines on responsible use of AI Search can be found in their documentation provided here

Why DMC? 

At DMC, we blend our years of expertise with the Azure ecosystem with a client-centric approach to deliver transformative cloud-based solutions. Our deep knowledge base enables us to craft tailored platforms that unlock actionable insights and streamline operations. As a Microsoft Solutions Partner, we are committed to responsible AI practices. We ensure ethical, secure, and scalable solutions that align with your business goals, empowering you to stay ahead and succeed. 

Let’s Recap

Azure AI Search is a powerful platform for intelligent search that uses AI Enrichment and Multimodal Search. AI Enrichment processes unstructured data into searchable, structured content, enabling applications like knowledge management and multilingual search. Multimodal Search extends this to images and audio, supporting cross-modal retrieval with similarity search and hybrid queries. Responsible AI practices ensure ethical use by addressing privacy, bias, and accuracy concerns. In our real-world example, a consulting firm could leverage these features to build an efficient search solution, boosting productivity and client satisfaction. For organizations seeking to unify diverse data and enhance discovery, Azure AI Search delivers expert results. 

At DMC, we’re passionate about harnessing these tools to solve complex challenges and deliver measurable results. From automation to agriculture, our team is equipped to build custom AI solutions across industries that meet your unique needs. Ready to explore the potential of Azure AI Search? Contact us to start a partnership today! 

The post Azure AI: Knowledge Mining Solutions appeared first on DMC, Inc..

]]>
Avalonia UI: Noteworthy Differences from WPF https://www.dmcinfo.com/blog/15571/avalonia-ui-noteworthy-differences-from-wpf/ Fri, 28 Mar 2025 14:26:11 +0000 https://www.dmcinfo.com/blog/15571/avalonia-ui-noteworthy-differences-from-wpf/ Overview Avalonia UI is a cross-platform UI framework that is considered a “spiritual successor” to WPF. If you are brand new to Avalonia UI, you should check out this blog, Avalonia UI: Introduction and Initial Impression, to learn the basics of what Avalonia UI is. This blog builds on that foundation and will help you to […]

The post Avalonia UI: Noteworthy Differences from WPF appeared first on DMC, Inc..

]]>

Overview

Avalonia UI is a cross-platform UI framework that is considered a “spiritual successor” to WPF. If you are brand new to Avalonia UI, you should check out this blog, Avalonia UI: Introduction and Initial Impression, to learn the basics of what Avalonia UI is. This blog builds on that foundation and will help you to better understand the noteworthy differences between developing with Avalonia and WPF.

Styling

In Avalonia, a Style is more similar to a CSS style than a WPF style. The Avalonia equivalent of a WPF Style is a Control Theme.

A Style should be used to style a control based on its content or purpose within the application whereas a Control Theme should be used for shared theming between all controls of that type. For example, a TextBlock might have a Control Theme to set a shared font type and font color, but a TextBlock Style would alter the font weight and font size. Examples of both can be seen below:

XAML
<ControlTheme x:Key="DefaultTextBlockStyle" TargetType="{x:Type TextBlock}">
    <Setter Property="FontFamily" Value="Sans Serif" />
    <Setter Property="FontSize" Value="12" />
    <Setter Property="Foreground" Value="Black" />
</ControlTheme>

<Style Selector="TextBlock.h1">
    <Setter Property="FontSize" Value="24" />
    <Setter Property="FontWeight" Value="Bold" />
</Style>

<Style Selector="TextBlock.h2">
    <Setter Property="FontSize" Value="20" />
    <Setter Property="FontWeight" Value="Bold" />
</Style>

Having a more layered styling approach in Avalonia is beneficial since it allows you to use Styles to substitute a control’s property values without needing to override the entire theme. Conversely, in WPF, you can get stuck needing to override an entire theme if there is a theme applied to a control without an x:Key defined. If there is an x:Key defined in WPF, you can take advantage of the BasedOn property to build upon a pre-defined theme.

Avalonia Styles are placed in the Styles collection of a control and Control Themes are placed in the Resources collection of a control. Comparatively, in WPF, the Styles are all placed in the Resources collection.

Styles: Conditional Classes

A feature that stood out to me significantly is conditional classes for Avalonia Styles. They allow you to alter the Style of a control based on a bound condition. In WPF, doing something similar is verbose and complicated, requiring the use of a DataTrigger; in Avalonia, much less markup code is needed.

The examples below demonstrate conditionally changing the TextBlock foreground based on a bound property.

In Avalonia, the following code will use the Error Style based on the bound property. Since you can conditionally pass in the property, both the DeviceState text and the SystemState text can share the Style with little code.

XAML
<Grid.Styles>
    <Style Selector="TextBlock.error">
        <Setter Property="Foreground" Value="{StaticResource ErrorForeground}" />
    </Style>
</Grid.Styles>

<TextBlock Classes.error="{Binding IsDeviceInError}" Text="{Binding DeviceState}" />
<TextBlock Classes.error="{Binding IsSystemInError}" Text="{Binding SystemState}" />

In WPF, you must rely on a DataTrigger to change the Foreground value. Since the SystemState text and the DeviceState text rely on different bound properties as their condition, they cannot share the Style which leads to less code reuse.

XAML
<TextBlock Text="{Binding DeviceState}">
    <TextBlock.Style>
        <Style TargetType="{x:Type TextBlock}" BasedOn="{StaticResource DefaultTextBlockStyle}">
            <Setter Property="Foreground" Value="Black" />
            <Style.Triggers>
                <DataTrigger Binding="{Binding IsDeviceInError}" Value="True">
                    <Setter Property="Foreground" Value="Red" />
                </DataTrigger>
            </Style.Triggers>
        </Style>
    </TextBlock.Style>
</TextBlock>
<TextBlock Text="{Binding SystemState}">
    <TextBlock.Style>
        <Style TargetType="{x:Type TextBlock}" BasedOn="{StaticResource DefaultTextBlockStyle}">
            <Setter Property="Foreground" Value="Black" />
            <Style.Triggers>
                <DataTrigger Binding="{Binding IsSystemInError}" Value="True">
                    <Setter Property="Foreground" Value="Red" />
                </DataTrigger>
            </Style.Triggers>
        </Style>
    </TextBlock.Style>
</TextBlock>

Controls

Controls in Avalonia are very similar to WPF, but there are a few tweaks that make the framework quicker to work with but potentially less feature-rich.

Visualization and Animations

Avalonia does not support the VisualStateManager; it instead relies on styles and pseudoclasses such as :hover, :focus, and :checked. Additionally, Avalonia does not use Storyboards; it has simpler animations built from Transitions and Animations.

Grid Row and Column Definitions

A low-hanging fruit that Avalonia picked is how rows and columns are defined for the Grid control. Instead of using multiple lines, Avalonia allows you to define them in a single line. The following examples show how to define the same grid layout in Avalonia and WPF.

In Avalonia,

XAML
<Grid ColumnDefinitions="*, *, *, *" RowDefinitions="Auto, Auto, Auto, Auto">
</Grid>

In WPF,

XAML
<Grid>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="*" />
        <ColumnDefinition Width="*" />
        <ColumnDefinition Width="*" />
        <ColumnDefinition Width="*" />
    </Grid.ColumnDefinitions>
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto" />
        <RowDefinition Height="Auto" />
        <RowDefinition Height="Auto" />
        <RowDefinition Height="Auto" />
    </Grid.RowDefinitions>
</Grid>

Compiled Bindings

Another nice feature in Avalonia is the option to use compiled bindings. These can be very helpful because binding errors are caught at compile time instead of at runtime, so developers find them sooner.

There are some limitations with compiled bindings though. For instance, they require a static DataContext and a defined data context type using x:DataType. However, for most use cases, they will be helpful in debugging and development!

Takeaways

When comparing Avalonia and WPF, the running theme is that Avalonia prioritizes flexibility to support multiple platforms and succinct code. This makes Avalonia lighter weight and a great option for cross-platform development. For complex, feature-rich Windows development, WPF has the edge over Avalonia.

At DMC, we are always looking towards the future and learning new technologies to better support the wide variety of needs our customers have. We are excited to continue exploring Avalonia UI to provide expert solutions for cross-platform, desktop development.

Ready to take your Application Development project to the next level? Contact us today to learn more about our solutions and how we can help you achieve your goals.

The post Avalonia UI: Noteworthy Differences from WPF appeared first on DMC, Inc..

]]>
Avalonia UI: Introduction and Initial Impression https://www.dmcinfo.com/blog/15658/avalonia-ui-introduction-and-initial-impression/ Thu, 20 Feb 2025 12:01:59 +0000 https://www.dmcinfo.com/blog/15658/avalonia-ui-introduction-and-initial-impression/ Avalonia UI is an open-source UI framework for cross-platform, .NET applications. It is free to use under the MIT license, and it supports Windows, macOS, Linux, iOS, Android, and WebAssembly. The framework is owned by the commercial entity, AvaloniaUI OÜ, and it is maintained by a community of developers and a core team of around […]

The post Avalonia UI: Introduction and Initial Impression appeared first on DMC, Inc..

]]>
Avalonia UI is an open-source UI framework for cross-platform, .NET applications. It is free to use under the MIT license, and it supports Windows, macOS, Linux, iOS, Android, and WebAssembly. The framework is owned by the commercial entity, AvaloniaUI OÜ, and it is maintained by a community of developers and a core team of around twenty that work full-time for the company. 

Avalonia is a “spiritual successor” to WPF, which gives developers a familiar experience. You can write Avalonia UIs in C#, F#, or XAML, and you can use IDEs such as Visual Studio, Visual Studio Code, and JetBrains Rider; JetBrains Rider is the IDE Avalonia recommends for development. 

A Brief History of Avalonia UI 

Avalonia had its first commit in December of 2013 when it was still called Perspex. Although Avalonia has been around for over a decade, it started gaining more popularity in 2020 when it joined the .NET Foundation. The partnership with .NET Foundation was short-lived though, and Avalonia left it in February of 2024

Avalonia XPF 

Along with the open-source, free to use, UI framework, the AvaloniaUI OÜ company also sells licenses to Avalonia XPF

Avalonia XPF is a cross-platform fork of WPF, and it allows developers to “instantly” make a cross-platform version of their WPF project. It does this by leaving PresentationCore and PresentationFramework untouched so the WPF application “just works”. If the project has other Windows dependencies outside of WPF, though, it may need additional massaging to migrate those features. For example, if your project only targets .NET Framework, which is Windows-specific, updates will be needed to allow it to work cross-platform. 

Initial Impression 

Overall, Avalonia feels like a modern version of WPF, and it cleans up some of WPF’s quirks. 

If you are familiar with WPF, the ramp-up time for Avalonia is fairly quick. The largest differences are getting used to Avalonia’s styling and learning the different UI controls and their properties. 

For developers who are new to both WPF and Avalonia, it would likely take about the same amount of time to learn either framework. The documentation for WPF is much more extensive since it has been around longer, but a lot of it also applies to Avalonia. Conversely, Avalonia has fewer resources, but those that are available are better organized and modernized. The documentation to get started with Avalonia can be found here. 

Support 

Avalonia releases updates consistently, which can give developers peace of mind that it is well maintained. It supports .NET Framework 4.6.2+, .NET Core 2.0+, and .NET 5+. 

When using the Model-View-ViewModel (MVVM) pattern, Avalonia works well with the ReactiveUI and MVVM Community Toolkit libraries. This is great news since WPF also supports these libraries. 

IDEs 

As a WPF developer who already has a Visual Studio license, VS is my go-to IDE, and there is an Avalonia extension for Visual Studio 2022. Although this extension exists, it is a bit lackluster. It supports neither the code completion nor the rich syntax highlighting that you would typically expect for AXAML (Avalonia’s flavor of XAML) files. These deficiencies make for a slower, less enjoyable developer experience. 

Visual Studio Syntax Highlighting
Visual Studio Syntax Highlighting 

The IDE Avalonia recommends is JetBrains Rider. Rider is free only for non-commercial use. This IDE makes programming in Avalonia much smoother as it does support code completion and syntax highlighting for AXAML. It also supports more obvious highlighting for file types, and its IntelliSense code “usages” feature finds references in both C# and AXAML files.

JetBrains Rider Syntax Highlighting
JetBrains Rider Syntax Highlighting 

Unfortunately, Avalonia does not support hot reload at all, but it does have an AXAML design previewer similar to Visual Studio’s XAML Live Preview. 

AXAML Design Preview
AXAML Design Preview 

Cross-Platform Demo 

To demonstrate the look and feel of Avalonia on different platforms, I followed the Avalonia Music Store App tutorial and deployed it to both Windows and Linux. 

Avalonia on Windows
Windows 

Avalonia on Linux
Linux

As you can see, the application looks consistent across different platforms thanks to Avalonia’s independent rendering. Avalonia does not rely on the native UI controls of the operating system. Instead, it draws the entire UI itself which allows for more flexibility and customization. 

Takeaways 

Avalonia is a great option to have for desktop development, especially for cross-platform use cases. Avalonia’s styling features and components streamline development while also providing a similar feel to WPF. While Avalonia seems like WPF 2.0, WPF is still a strong choice for Windows platforms. WPF has a long history of development behind it and many resources available that have led it to be DMC’s standby for a long time. 

At DMC, we are always looking towards the future and learning new technologies to better support the wide variety of needs our customers have. We are excited to continue exploring Avalonia UI to provide expert solutions for cross-platform, desktop development. 

Ready to take your Application Development project to the next level? Contact us today to learn more about our solutions and how we can help you achieve your goals. 

Updating Microsoft Defender for IoT https://www.dmcinfo.com/blog/15825/updating-microsoft-defender-for-iot/ Tue, 26 Nov 2024 17:01:50 +0000 https://www.dmcinfo.com/blog/15825/updating-microsoft-defender-for-iot/ Microsoft Defender for IoT is a powerful security tool that can help to protect your IoT/OT environment. Almost every new update improves the detection, brings new features to help protect your systems and fixing issues in previous releases, you need to keep your sensors up to date. Since there are multiple deployment options, including cloud, […]

The post Updating Microsoft Defender for IoT appeared first on DMC, Inc..

Microsoft Defender for IoT is a powerful security tool that can help protect your IoT/OT environment. Almost every new update improves detection, brings new features to help protect your systems, and fixes issues from previous releases, so you need to keep your sensors up to date. Since there are multiple deployment options, including cloud, on-premises, or hybrid networks, your update options might also differ. 

How to Update Defender for IoT

1. Navigate to portal.azure.com and search for Microsoft Defender for IoT. 

Microsoft Defender for IoT welcome screen

2. Navigate to “Sites and sensors.” 

3. In the “Sites and sensors” list, choose the sensor you want to update. 

Microsoft Defender for IoT sites and sensors list

4. You’ll have two options: “Remote update” (push an online upgrade) or “Download upgrade files” (manually upgrade from the local network). 

Microsoft Defender for IoT upgrade options

5. In our example, we will update locally to 22.3.10, then push an online update to 24.1.3. 

6.1.1. If you select “Download upgrade files,” you need to choose your sensor version (in our case, it’s 22.3.9) and indicate whether you have a local management console. 

Microsoft Defender for IoT download upgrade

6.1.2. From the “Available versions” list, choose the version you want to update to. If you need to update to an older version, choose “Show more.” 

Microsoft Defender for IoT versions

6.1.3. Navigate to your Defender for IoT sensor’s internal IP address and log in with your CyberX credentials.

Microsoft Defender for Iot sensor sign in
Microsoft Defender for IoT dashboard

6.1.4. Navigate to System settings > Software update > Upload file. Choose the upgrade file you previously downloaded and click Open. When it finishes uploading, it will display “Status: Updated successfully (Agent will reboot in 30 seconds).” Note: in our case, the upload took about five minutes.

6.1.5. When it’s done, you can see that the system version has changed.

Microsoft Defender for IoT new version
Microsoft Defender for IoT system settings
Microsoft Defender for IoT downloads
Microsoft Defender for IoT software update

6.2.1. Choose “Step one: Send package to sensor.” 

Microsoft Defender for IoT send package

6.2.2. Choose the version you want to install (the latest available version will display by default). If you need an older version, click “Show more” and choose the version you prefer. Then click “Send package.”

Microsoft Defender for IoT install version

6.2.3. You should see the status under the “Sensor version” tab.

Microsoft defender for iot sensor version

6.2.4. When it’s done, the status will change to “Ready to update."

microsoft defender for iot ready to update

6.2.5. Navigate to “Remote update,” choose “Step two: Update sensor,” then click “Update now” and “Confirm update.”

microsoft defender for iot confirm update
microsoft defender for iot confirm update

6.2.6. The “Sensor version” will change to “Installing.”

microsoft defender for iot installing

Learn more about DMC's IoT expertise and contact us for your next project. 

Electrifying Desktop Application Development with Electron https://www.dmcinfo.com/blog/15849/electrifying-desktop-application-development-with-electron/ Mon, 18 Nov 2024 10:58:38 +0000 https://www.dmcinfo.com/blog/15849/electrifying-desktop-application-development-with-electron/ You've already used an Electron application. So, what is Electron? You’ve already used an Electron application; you just might not have known it. Electron applications provide the basis for some of the apps you use daily—Microsoft Teams, Slack, and Visual Studio Code. Electron is a modern, open-source framework for building desktop applications. Since it runs on […]

The post Electrifying Desktop Application Development with Electron appeared first on DMC, Inc..

You've already used an Electron application. So, what is Electron?

You’ve already used an Electron application; you just might not have known it. Electron applications provide the basis for some of the apps you use daily—Microsoft Teams, Slack, and Visual Studio Code.

Electron is a modern, open-source framework for building desktop applications. Since it runs on the Node.js runtime, Electron makes it possible to harness JavaScript—typically a web technology—for the development of desktop applications. That’s a powerful way to take our expertise in web technology and apply it to desktop development.

Let’s talk about why DMC loves building with Electron!

Why DMC develops with Electron

Electron is cross-platform. Have a use case that calls for compatibility with Windows, macOS, and Linux? Your Electron application will run on all three, so you need just one code base. You’ll benefit from a smaller, tighter engineering team focused on a single application for minimized time to market and maximized market reach across multiple platforms. A singular code base means that bug fixes, updates, and new features need only be implemented once.

Built on the Node.js runtime. Equally important is the runtime itself. Electron runs on Node.js, a JavaScript runtime environment, taking advantage of JavaScript’s position as the world’s preeminent web technology. Not only does this align with DMC’s sweeping base of expertise in developing JavaScript-based applications, but it also means that you’ll be able to involve your developers in the technical process during an application’s development and support, including after handoff of the application. That common technical understanding means all parties speak the same language and are set up for success for the lifetime of the application.

Prototyping and proofs of concept. An often-understated utility of Electron development is its handiness for prototyping. Since Electron applications can be spun up quickly — and because you'll have access to the vast community of established libraries and frameworks built for JavaScript — an Electron application is ideally suited to building a viable proof of concept, perfect for getting projects off to a promising start.

Trying out new ideas with Electron Fiddle. Sometimes, all you need is a playground to test out that idea that’s been bouncing around in your head. Electron Fiddle is Electron’s take on JSFiddle. It’s a lightweight desktop studio for quickly running a simple Electron project. Maybe you want to experiment with a new feature for your application. Or maybe you want to download someone else’s demo and click around on your own. Fiddle allows you to run an Electron application without the overhead of initializing a completely new project. I love this capability and use it both for exploring new features and testing out small tweaks to colleagues’ code. You can read up on Electron Fiddle here.

Packaging and Distributing with Electron Forge. Electron Forge is the all-in-one tool for initiating, configuring, pipelining, and distributing an Electron application. At the start of your project, you’ll use the Electron Forge CLI to spin up a template application. Later, when it’s time to distribute, Electron Forge facilitates three core steps in the distribution sequence:

  1. The packaging of the application to a bundled executable
  2. The making of the bundled executable into a distributable (such as a .zip or an .exe)
  3. The publishing of the distributable so that your application is available for users to download

Users love it

Developing for the web means recognizing the dominant design for the everyday user’s experience. Developers and non-developers alike have come to expect a certain standard: a modern browser serving webpages on a modern JavaScript framework—think React with Material UI or Angular with Angular UI. Why shouldn’t this expectation carry over to the desktop application experience? For consistency and seamlessness, it makes sense to craft desktop applications that look and feel like their web counterparts.

An Electron application fluently brings the desktop application must-haves—offline capabilities, local system access, process launching, hardware discovery—and merges them with the familiar user interface elements of the web.

When it comes to picking JavaScript frameworks and libraries, Electron lets you choose your own tools. You can keep things simple and lean on basic HTML and CSS. Or you can scaffold up a full-fledged React application via Next.js.

Case Study: A desktop application for high-throughput data analysis and visualization

The problem: Our client, a global aerospace and defense technology company, sought to modernize their in-house data analysis and visualization workflow. Their existing workflow relied on several decoupled applications for processing and presenting their data, and there was no official means of warehousing the data. Their team needed a successor platform to provide stronger scalability and faster workflow throughput while still retaining the data reporting functionality of the legacy system. In modernizing the legacy platform, they laid out a set of key requirements:

  • The solution must provide for the storage of captured data.
  • The solution must ingest large datasets and visualize them clearly, and it must do so performantly.
    • Presentation of data must support advanced visualization controls such as time-shifting, zooming, and time-trending.
    • Plotting must be capable of processing up to 1 million data points.
  • The solution must support the validation of data.
  • The solution’s data processing must be offline-capable. If data is available locally, no network connection shall be required for the core functionality of the application.

The solution: DMC identified a path forward to replace the legacy workflow platform with a single cohesive solution. The bedrock of the solution is a data analysis desktop application built on Electron. Here’s why we landed on Electron for building out this application:

  • Electron gives our development team access to a range of powerful JavaScript packages for plotting, UI, and navigation. These are the packages that supply the familiar, expected experience of the web, and Electron brings them to the desktop environment.
  • Electron’s access to the native OS empowers us to launch a local Python service upon Electron startup. This allows us to easily take advantage of Python’s proven libraries for data crunching, validation, and report generation.

To round out the solution, we then spun up cloud-based data storage hosted on Amazon’s S3 service. As a whole, DMC delivered an application to meet the client's need for a comprehensive data intelligence and reporting platform.

Read more about DMC’s desktop application and web application development offerings or contact us today for your next project.

How to Integrate Azure IoT Edge, .NET, OpenTelemetry Collector, and Application Insights https://www.dmcinfo.com/blog/15913/how-to-integrate-azure-iot-edge-net-opentelemetry-collector-and-application-insights/ Fri, 25 Oct 2024 13:26:46 +0000 https://www.dmcinfo.com/blog/15913/how-to-integrate-azure-iot-edge-net-opentelemetry-collector-and-application-insights/ All industries continue using the Internet-of-Things (IoT) to collect, monitor, and analyze data. One popular IoT option is using Azure IoTHub, which this tutorial will focus on.  Connecting IoT Devices to an IoT Edge device allows for data processing to be even faster since an IoT Edge device allows you to analyze data closer to […]

The post How to Integrate Azure IoT Edge, .NET, OpenTelemetry Collector, and Application Insights appeared first on DMC, Inc..

All industries continue using the Internet of Things (IoT) to collect, monitor, and analyze data. One popular option is Azure IoT Hub, which this tutorial will focus on. 

Connecting IoT Devices to an IoT Edge device allows for data processing to be even faster since an IoT Edge device allows you to analyze data closer to your IoT devices. This gives you the advantage of preprocessing data prior to sending it to the cloud.  

In this article, we will review how to get data from an ASP.NET application running on a child device, through an OpenTelemetry Collector module running on an IoT Edge device, and up into Application Insights in the Azure cloud. Since the application runs on a separate device from the one running the OpenTelemetry Collector module, this example can be used to send trace and metrics data to Application Insights when an IoT device does not have direct access to the internet but the IoT Edge device does. 

OpenTelemetry is a framework and toolkit for managing telemetry data. It provides a protocol that specifies how telemetry should be formatted and sent. OpenTelemetry is appealing because it is vendor- and tool-agnostic and can be used with different observability backends such as Jaeger and Prometheus. Additionally, OpenTelemetry provides SDKs for implementing clients in different languages, including .NET. 

OpenTelemetry framework

Prerequisites 

  • Visual Studio 2022 Preview 
  • Docker Desktop 
  • Application Insights resource in Azure 
    • Application Insights is not a totally free service, but Azure offers a free account that comes with $200 in Azure credits.
  • IoT Edge device to deploy modules 
    • The IoT Edge device will be used to deploy the OpenTelemetry Collector module. You can follow this tutorial to set up your IoT Edge device and example module. 

Implement the OpenTelemetry Exporter in the ASP.NET Application 

To get started, I created an ASP.NET Core Web App (Razor Pages) project with .NET 8.0 from the Visual Studio project templates. 

Next, we will add an OpenTelemetry exporter to the service registration of our web application. There are a variety of exporters to choose from, but for this tutorial, we will use Prometheus to export metrics and OpenTelemetry Protocol (OTLP) to export application logs. 

I chose Prometheus for metrics since it is highly reliable for recording numeric time-series data, and it has an easy plug-in for C# and ASP.NET. I chose OTLP for logs since it also has an easy plug-in for ASP.NET, and Prometheus does not support exporting logs or traces. 

The Prometheus Exporter relies on the following packages: 

The OTLP Exporter relies on the following packages: 

Your Program.cs file should look like the following: 

  1. Configure logging to log to the console and to the OpenTelemetry OTLP Exporter. The IP address used for the OTLP Exporter endpoint should be the IP address of the IoT Edge device. The default HTTP port is 4318 with the relative path of “v1/logs” as described here. 
OpenTelemetry exporter
  2. Configure OpenTelemetry to use the Prometheus Exporter to export metrics. This example configures built-in metrics for “Microsoft.AspNetCore.Hosting” and “Microsoft.AspNet.Diagnostics.” It also configures the example custom metric “HatCo.HatStore.”
Configure OpenTelemetry
  3. Configure the web application. 
Configure web app
  4. Configure the web application to use the Prometheus scraping endpoint. Run background test logic to generate logs and custom metric data for “HatCo.HatStore,” then run the application. 
Prometheus scraping endpoint

Add the OpenTelemetry Collector Module to your IoT Edge Device 

We are going to test running the IoT Edge modules in the simulator. This allows the modules to run locally using images from your local Docker container registry. 

Within your IoT Edge VS project, we will edit the “deployment.template.json” file to add the OpenTelemetry Collector module. The “modules” section should look like the following example. 

Add OpenTelemetry Collector module
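For reference, here is a hedged sketch of what such a module entry in deployment.template.json might look like; the exact fields depend on your solution, and only the image name and port mirror this tutorial:

```json
"opentelemetrycollector": {
  "version": "1.0",
  "type": "docker",
  "status": "running",
  "restartPolicy": "always",
  "settings": {
    "image": "localhost:5000/opentelemetrycollector:0.0.1-windows-amd64",
    "createOptions": {
      "HostConfig": {
        "PortBindings": {
          "4318/tcp": [{ "HostPort": "4318" }]
        }
      }
    }
  },
  "env": {
    "APPI_CONNECTION_STRING": { "value": "$APPI_CONNECTION_STRING" }
  }
}
```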

The image name should match the desired image name when we create the OpenTelemetry Collector image in Build the Module. The $APPI_CONNECTION_STRING is an environment variable on your local computer that can be set in a .env file within the VS solution. APPI_CONNECTION_STRING sets up an environment variable within the OpenTelemetry Collector module, and this variable is used in the config.yaml file for configuring the module which I’ll detail in the next section.  

Once the deployment file is edited, right-click the Edge project and select “Generate Deployment for IoT Edge.”

Create the Configuration File for OpenTelemetry Collector

Before running the OpenTelemetry Collector module on our IoT Edge device, we will want to configure how the OpenTelemetry Collector module will receive data from exporters in our ASP.NET application and configure how the OpenTelemetry Collector should export the data it receives. The configuration for the module goes into a .yaml file. Your configuration file should look similar to the example below. You can add this config.yaml file into your IoT Edge solution that was created in the IoT Edge tutorial. 

In this example, we have configured the OpenTelemetry Collector module to use a Prometheus receiver to scrape metrics. The Prometheus receiver on the OpenTelemetry Collector module has to actively scrape the endpoint where the child device is running the ASP.NET application. We then use Azure Monitor to report those metrics to Application Insights. In the “scrape_configs” section, the “targets” are the endpoints you would like to scrape metrics from, and you can have multiple “scrape_configs”. Note that as configured, Prometheus will scrape only from HTTP endpoints. If your ASP.NET project uses different ports for HTTP and HTTPS, be sure to provide the HTTP port. 

We have also configured an OTLP receiver to retrieve logs. The default endpoint for the OTLP receiver using the HTTP protocol is http://localhost:4318/v1/logs, which is what is configured for the OTLP exporter on the ASP.NET project. “Localhost” here references the Edge device. The logs are also exported to Application Insights via Azure Monitor. Note: the “localhost” link is part of the tutorial and can only be accessed locally as part of running the tutorial itself.  

Create the Configuration File for OpenTelemetry Collector
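In case the screenshot is hard to read, here is a hedged sketch of what such a config.yaml could look like. The scrape target address is a placeholder for your child device, and the azuremonitor exporter assumes the OpenTelemetry Collector contrib distribution:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: "aspnet-app"              # example job name
          scrape_interval: 15s
          static_configs:
            - targets: ["192.168.1.50:5000"]  # child device HTTP endpoint (placeholder)
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318                # matches the OTLP exporter in the ASP.NET app

exporters:
  azuremonitor:
    connection_string: ${APPI_CONNECTION_STRING}

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [azuremonitor]
    logs:
      receivers: [otlp]
      exporters: [azuremonitor]
```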

Build the Module 

To build the module in Docker Desktop, we will use the public OpenTelemetry Collector image and copy our configuration file into it to modify the agent’s behavior. Create a Dockerfile that pulls the OpenTelemetry Collector image and copies over your configuration file. In the Dockerfile, we expose port 4318 so that the ASP.NET application can push logs to our OTLP endpoint. The example Dockerfile below assumes that the Dockerfile and the config.yaml file are in the same directory. 

example Dockerfile
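If the image is unclear, a Dockerfile along these lines would cover the steps described above; the base image tag and config path are assumptions (the contrib distribution is the one that includes the azuremonitor exporter):

```dockerfile
# Hypothetical sketch of the Dockerfile described above
FROM otel/opentelemetry-collector-contrib:latest
COPY config.yaml /etc/otelcol-contrib/config.yaml
EXPOSE 4318
```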

Within a command prompt, navigate to the directory where your Dockerfile.amd64 file lives. Run the following command to build your image and tag it for your local Docker container registry. 

PowerShell
docker build -t localhost:5000/opentelemetrycollector:0.0.1-windows-amd64 -f Dockerfile.amd64 . 

Deploy the Edge Device and Collect Logs and Metrics 

After generating your deployment file for the Edge device, right-click the Edge project and select “Build and Run IoT Edge Modules in Simulator.” This will run the OpenTelemetry Collector module from your Edge device on your local instance of Docker Desktop. You should see the logs for the OpenTelemetry Collector module trying to scrape your application metrics. 

deploy edge device

View Metrics and Logs in Application Insights 

Once the Edge device is running, run your ASP.NET project. Navigate to your Application Insights resource in Azure Portal. In the left-hand navigation bar within the Application Insights resource under Monitor, select Logs. Query the logs for “traces” and “customMetrics.”  

Under traces, you will see your application logs appear. 

application logs appear

Under “customMetrics,” you will see your application metrics, which include the example metric “hat_sold_Hats_total.”

application metrics with example

Conclusion 

With this tutorial as an example, you now have the ability to collect metrics and logs via an IoT Edge device from multiple applications! 

Learn more about DMC’s Application Development services and contact us for your next project.
