Web Application Development Archives | DMC, Inc.
https://www.dmcinfo.com/blog/category/application-development/web-application-development/

A Complete Guide to Planning Your IIoT Solution
https://www.dmcinfo.com/blog/20635/a-complete-guide-to-planning-your-iiot-solution/
Fri, 26 Sep 2025 16:00:00 +0000

IoT, or the Internet of Things, is a “system of interrelated computing devices, mechanical and digital machines, objects, animals, or people provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.” As technology advances, the Internet of Things continues to develop, along with the need to interact with devices in new ways.

Out of IoT, the Industrial Internet of Things (IIoT) is emerging as a common and necessary term. This guide provides insight into IIoT and into DMC’s process for completing successful IIoT projects.



IIoT Overview

Much like IoT, IIoT uses sensors and other connected technology, but in an industrial setting: it leverages real-time data to monitor and control devices in the field, and it communicates and displays that data in a way that allows for better decision making in industrial processes.


Our Five-Step Process

DMC has completed hundreds of projects incorporating a wide range of solutions. Through that experience, our engineers have developed a process for implementing IIoT solutions. We begin at the lowest level of the stack and then fill in the gaps as we work upward.

DMC IIoT Five Step Process

Step One: Field Device Platform Selection

Getting reliable, accurate data from the physical system is the primary challenge for any system design and the most important decision you can make. Once you secure the data, you can do anything you want with it. Answer the following questions to select hardware used in the field:

  • What are you trying to measure or control?
  • What does the device in the field need to be able to do on its own? How complex?
  • How many devices do you anticipate deploying?
  • Are you the end-user and maintainer of this equipment, or are you selling a solution (operated and maintained by others)?
  • What does the ideal new-device provisioning process look like?
  • How will this device be powered?

Keep in mind that the software platform is determined by the hardware selection. For example, when you pick a particular PLC, you have to use that manufacturer’s software to program it; for an embedded device, this will typically be C++ or similar.

Device Hardware: Selecting the Right Parts

Hardware choices carry real cost: if the average PLC costs $1,000, then every unit you put in the field adds another $1,000 in hard costs. PLCs are a great fit for some applications, while vendors such as NI have developed specialized technology for other uses. DMC engineers can leverage experience from hundreds of completed projects to advise on the best choice for each project.

0–10 devices – Small deployments using off-the-shelf products.

10–100 devices – Larger deployments, still using off-the-shelf products, but cost-optimized decisions start to come into play.

100–1000+ devices – At this scale it is worth discussing an embedded solution, because off-the-shelf hardware starts to get expensive.

Step Two: Determine Communications

After determining your device platform, deciding how your networking devices are configured is key. Consistent communication between devices is essential. DMC’s engineers help scope what needs to be done. Consider the following:

  • How are you going to communicate with devices?
  • Where is the internet coming from?
    • Cellular, Wi-Fi, from the plant?
  • What happens when the internet is not available?
    • Local caching, buffer and retry, operational impacts
  • What are the protocol security requirements?
    • Encryptions, certificates, secure comm management
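The “local caching, buffer and retry” item above is worth sketching concretely. Below is a minimal JavaScript sketch of the buffer-and-retry pattern; the `sendToCloud` transport is a hypothetical stand-in for your real MQTT or HTTPS client, not part of any specific DMC implementation.

```javascript
// Minimal buffer-and-retry sketch: readings are queued locally while the
// uplink is down and flushed once connectivity returns.
class BufferedUplink {
  constructor(sendToCloud, maxBuffered = 10000) {
    this.send = sendToCloud;       // hypothetical transport: async (reading) => void
    this.maxBuffered = maxBuffered;
    this.queue = [];
  }

  async publish(reading) {
    this.queue.push(reading);
    if (this.queue.length > this.maxBuffered) this.queue.shift(); // drop oldest reading
    await this.flush();
  }

  async flush() {
    while (this.queue.length > 0) {
      try {
        await this.send(this.queue[0]);
        this.queue.shift(); // dequeue only after a confirmed send
      } catch {
        return; // offline: keep readings buffered and retry on the next publish
      }
    }
  }
}
```

A reading is dequeued only after a confirmed send, so data survives an outage up to the buffer limit, and bounding the buffer keeps a long outage from exhausting device memory.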

Step Three: Determine Cloud Platform

There are many cloud platforms to choose from when you reach this phase of the process. Ask yourself: which out-of-the-box services provided by these hosting platforms will your application need or take advantage of? Popular options include Azure, AWS, and Google Cloud.

This phase is when we assess where we can save on custom development and where it is possible to use solid foundational pieces already developed for these types of applications. Ask yourself:

  • Do you need a website?
  • Do you need database(s)?
  • Do you need user management?
  • Do you need integrations to other cloud services?
  • Do you need SMS, e-Mail, or other mass notification capabilities?
  • Do you need an AI engine or advanced analytics support?
  • Do you need a flexible reporting framework (e.g., Power BI)?
  • What type of data store is needed?
    • How much data?
    • How often will it be sampled?
    • How will the data be used?
  • Where and how will security be enforced for cloud resources?
  • How many monthly active users do you anticipate for this cloud application?
  • What in-house cloud/web development resources do you have?
    • What are they comfortable with and willing to maintain?
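Several of the data-store questions above (how much data, how often it is sampled) reduce to simple arithmetic that is worth doing early. Here is a back-of-envelope sketch; every number in it is a hypothetical placeholder you would replace with your own project's figures.

```javascript
// Back-of-envelope storage estimate for an IIoT data store.
// All inputs are hypothetical placeholders, not recommendations.
function estimateMonthlyStorageMB({ devices, signalsPerDevice, sampleIntervalSec, bytesPerSample }) {
  const samplesPerMonth = (30 * 24 * 3600) / sampleIntervalSec; // ~30-day month
  const bytes = devices * signalsPerDevice * samplesPerMonth * bytesPerSample;
  return bytes / (1024 * 1024);
}

// Example: 100 devices, 20 signals each, sampled every 10 s at ~16 bytes/sample
const mb = estimateMonthlyStorageMB({
  devices: 100,
  signalsPerDevice: 20,
  sampleIntervalSec: 10,
  bytesPerSample: 16,
});
console.log(mb.toFixed(0), "MB/month"); // prints: 7910 MB/month
```

Even a rough number like this (about 7.9 GB/month here) quickly separates a workload that fits a small relational database from one that needs a purpose-built time-series store.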

Step Four: Web Application Development

DMC’s full-stack development team builds custom web applications with intuitive interfaces designed for usability and stable back ends designed for scalability. 

Consider the following during this step:

  • Define the UI/UX experience
  • How are you going to onboard new users?
  • How are you going to onboard new devices?
  • How are you going to manage devices?
  • How are users going to view data?
  • What access restrictions should apply? (user levels)
  • What types of notification and alerts are required?
  • When should devices be alerted to changes?
  • What visualizations for data or information are required?
  • What type of reporting is required? How are users notified of reports?
  • Is a native Mobile App also required?
  • Is a generic API (accessible by third parties) required?
  • Define support plan for end-users of the application

Step Five: Go Live and Maintenance

  • Are you using continuous integration tools across your Development, Staging, and Production environments?
  • Do you have planned downtime for production-level updates?
  • Are database migrations required? Data integrity checks?
  • Are service health monitors deployed and active?
  • Are support and service avenues (email/phone) active and being monitored?

Industry Credentials

DMC holds several key industry credentials with leading technology providers.

Our Work

Ready to take your Automation project to the next level? Contact us today to learn more about our solutions and how we can help you achieve your goals. 

The post A Complete Guide to Planning Your IIoT Solution appeared first on DMC, Inc..

Electrifying Desktop Application Development with Electron
https://www.dmcinfo.com/blog/15849/electrifying-desktop-application-development-with-electron/
Mon, 18 Nov 2024 10:58:38 +0000

You've already used an Electron application. So, what is Electron?

You’ve already used an Electron application; you just might not have known it. Electron applications provide the basis for some of the apps you use daily—Microsoft Teams, Slack, and Visual Studio Code.

Electron is a modern, open-source framework for building desktop applications. Since it runs on the Node.js runtime, Electron makes it possible to harness JavaScript—typically a web technology—for the development of desktop applications. That’s a powerful way to take our expertise in web technology and apply it to desktop development.

Let’s talk about why DMC loves building with Electron!

Why DMC develops with Electron

Electron is cross-platform. Have a use case that calls for compatibility with Windows, macOS, and Linux? Your Electron application will run on all three, so you need just one code base. You’ll benefit from a smaller, tighter engineering team focused on a single application for minimized time to market and maximized market reach across multiple platforms. A singular code base means that bug fixes, updates, and new features need only be implemented once.

Built on the Node.js runtime. Equally important is the runtime itself. Electron runs on Node.js, a JavaScript runtime environment, taking advantage of JavaScript’s position as the world’s preeminent web technology. Not only does this align with DMC’s broad expertise in developing JavaScript-based applications, but it also means you can involve your own developers in the technical process during an application’s development and support, including after handoff of the application. That common technical understanding keeps all parties speaking the same language and geared up for success for the lifetime of the application.

Prototyping and proof-of-concepting. An often-understated utility of Electron development is its handiness for prototyping. Since Electron applications can be spun up quickly — and because you'll have access to the vast community of established libraries and frameworks built for JavaScript — an Electron application is ideally suited to building out a viable proof of concept perfect for getting projects off to a promising start.

Trying out new ideas with Electron Fiddle. Sometimes, all you need is a playground to test out that idea that’s been bouncing around in your head. Electron Fiddle is Electron’s take on JSFiddle. It’s a lightweight desktop studio for quickly running a simple Electron project. Maybe you want to experiment with a new feature for your application. Or maybe you want to download someone else’s demo and click around on your own. Fiddle allows you to run an Electron application without the overhead of initializing a completely new project. I love this capability and use it both for exploring new features and testing out small tweaks to colleagues’ code. You can read up on Electron Fiddle here.

Packaging and Distributing with Electron Forge. Electron Forge is the all-in-one tool for initiating, configuring, pipelining, and distributing an Electron application. At the start of your project, you’ll use the Electron Forge CLI to spin up a template application. Later, when it’s time to distribute, Electron Forge facilitates three core steps in the distribution sequence:

  1. The packaging of the application to a bundled executable
  2. The making of the bundled executable into a distributable (such as a .zip or an .exe)
  3. The publishing of the distributable so that your application is available for users to download

Users love it

Developing for the web means recognizing the dominant design for the everyday user’s experience. Developers and non-developers alike have come to expect a certain standard: a modern browser serving webpages on a modern JavaScript framework—think React with Material UI or Angular with Angular UI. Why shouldn’t this expectation carry over to the desktop application experience? For consistency and seamlessness, it makes sense to craft desktop applications that look and feel like their web counterparts.

An Electron application fluently brings the desktop application must-haves—offline capabilities, local system access, process launching, hardware discovery—and merges them with the familiar user interface elements of the web.

When it comes to picking JavaScript frameworks and libraries, Electron lets you choose your own tools. You can keep things simple and lean on basic HTML and CSS. Or you can scaffold up a full-fledged React application via Next.js.

Case Study: A desktop application for high-throughput data analysis and visualization

The problem: Our client, a global aerospace and defense technology company, sought to modernize their in-house data analysis and visualization workflow. Their existing workflow relied on several decoupled applications for processing and presenting their data, and there was no official means of warehousing the data. Their team needed a successor platform that provided stronger scalability and faster workflow throughput while retaining the data reporting functionality of the legacy system. In modernizing the legacy platform, they laid out a set of key requirements:

  • The solution must provide for the storage of captured data.
  • The solution must ingest large datasets and visualize them clearly, and it must do so performantly.
    • Presentation of data must support advanced visualization controls such as time-shifting, zooming, and time-trending.
    • Plotting must be capable of processing up to 1 million data points.
  • The solution must support the validation of data.
  • The solution’s data processing must be offline-capable. If data is available locally, no network connection shall be required for the core functionality of the application.

The solution: DMC identified a path forward to replace the legacy workflow platform with a single cohesive solution. The bedrock of the solution is a data analysis desktop application built on Electron. Here’s why we landed on Electron for building out this application:

  • Electron gives our development team access to a range of powerful JavaScript packages for plotting, UI, and navigation. These are the packages that supply the familiar, expected experience of the web, and Electron brings them to the desktop environment.
  • Electron’s access to the native OS empowers us to launch a local Python service upon Electron startup. This allows us to easily take advantage of Python’s proven libraries for data crunching, validation, and report generation.

To round out the solution, we then spun up cloud-based data storage hosted on Amazon’s S3 service. As a whole, DMC delivered an application to meet the client's need for a comprehensive data intelligence and reporting platform.

Read more about DMC’s desktop application and web application development offerings or contact us today for your next project.

The post Electrifying Desktop Application Development with Electron appeared first on DMC, Inc..

How to Integrate Azure IoT Edge, .NET, OpenTelemetry Collector, and Application Insights
https://www.dmcinfo.com/blog/15913/how-to-integrate-azure-iot-edge-net-opentelemetry-collector-and-application-insights/
Fri, 25 Oct 2024 13:26:46 +0000

Industries across the board continue to use the Internet of Things (IoT) to collect, monitor, and analyze data. One popular option is Azure IoT Hub, which this tutorial will focus on.

Connecting IoT devices to an IoT Edge device makes data processing even faster, since an IoT Edge device allows you to analyze data closer to your IoT devices. This gives you the advantage of preprocessing data before sending it to the cloud.

In this article, we will review how to get data from an ASP.NET application running on a child device, through an OpenTelemetry Collector module running on an IoT Edge device, and up into Application Insights in the Azure cloud. Because the application runs on a separate device from the one running the OpenTelemetry Collector module, this example can be used to send trace and metrics data to Application Insights when an IoT device does not have direct access to the internet but the IoT Edge device does.

OpenTelemetry is a framework and toolkit for managing telemetry data. It provides a protocol that specifies how telemetry should be formatted and sent. OpenTelemetry is attractive because it is vendor- and tool-agnostic and can be used with different observability backends such as Jaeger and Prometheus. Additionally, OpenTelemetry provides SDKs for implementing clients in different languages, including .NET.

OpenTelemetry framework

Prerequisites 

  • Visual Studio 2022 Preview 
  • Docker Desktop 
  • Application Insights resource in Azure 
    • Application Insights is not a totally free service, but Azure offers a free account that comes with $200 in Azure credits.
  • IoT Edge device to deploy modules 
    • The IoT Edge device will be used to deploy the OpenTelemetry Collector module. You can follow this tutorial to set up your IoT Edge device and example module. 

Implement the OpenTelemetry Exporter in an ASP.NET Application

To get started, I created an ASP.NET Core Web App (Razor Pages) project with .NET 8.0 from the Visual Studio project templates. 

Next, we will add an OpenTelemetry exporter to the service registration of our web application. There are a variety of exporters to choose from, but for this tutorial, we will use Prometheus to export metrics and OpenTelemetry Protocol (OTLP) to export application logs. 

I chose Prometheus for metrics since it is highly reliable when recording numeric time series data, and it has an easy plug-in to use with C# and ASP.NET. I chose OTLP for logs since it also has an easy plug-in for ASP.NET and Prometheus does not support exporting traces. 

The Prometheus Exporter relies on the following packages: 

The OTLP Exporter relies on the following packages: 

Your Program.cs file should look like the following: 

  1. Configure logging to log to the console and to the OpenTelemetry OTLP Exporter. The IP address used for the OTLP Exporter endpoint should be the IP address of the IoT Edge device. The default HTTP port is 4318 with the relative path of “v1/logs” as described here.
     [Screenshot: OpenTelemetry OTLP exporter configuration]
  2. Configure OpenTelemetry to use the Prometheus Exporter to export metrics. This example configures built-in metrics for “Microsoft.AspNetCore.Hosting” and “Microsoft.AspNet.Diagnostics,” as well as the custom example metric “HatCo.HatStore.”
     [Screenshot: Prometheus exporter metrics configuration]
  3. Configure the web application.
     [Screenshot: web application configuration]
  4. Configure the web application to use the Prometheus scraping endpoint, run background test logic to generate logs and custom metric data for “HatCo.HatStore”, and run the application.
     [Screenshot: Prometheus scraping endpoint setup]

Add the OpenTelemetry Collector Module to your IoT Edge Device 

We are going to test running the IoT Edge Modules in the simulator. This will allow the modules to run on your local Docker container registry. 

Within your IoT Edge VS project, we will edit the “deployment.template.json” file to add the OpenTelemetry Collector module. The “modules” section should look like the following example. 

Add OpenTelemetry Collector module

The image name should match the desired image name when we create the OpenTelemetry Collector image in Build the Module. The $APPI_CONNECTION_STRING is an environment variable on your local computer that can be set in a .env file within the VS solution. APPI_CONNECTION_STRING sets up an environment variable within the OpenTelemetry Collector module, and this variable is used in the config.yaml file for configuring the module which I’ll detail in the next section.  

Once the deployment file is edited, right-click the Edge project and select “Generate Deployment for IoT Edge.”

Create the Configuration File for OpenTelemetry Collector

Before running the OpenTelemetry Collector module on our IoT Edge device, we will want to configure how the OpenTelemetry Collector module will receive data from exporters in our ASP.NET application and configure how the OpenTelemetry Collector should export the data it receives. The configuration for the module goes into a .yaml file. Your configuration file should look similar to the example below. You can add this config.yaml file into your IoT Edge solution that was created in the IoT Edge tutorial. 

In this example, we have configured the OpenTelemetry Collector module to use a Prometheus receiver to scrape metrics. The Prometheus receiver on the OpenTelemetry Collector module has to actively scrape the endpoint where the child device is running the ASP.NET application. We then use Azure Monitor to report those metrics to Application Insights. In the “scrape_configs” section, the “targets” are the endpoints you would like to scrape metrics from; you can have multiple “scrape_configs”. Note that, as configured, Prometheus will scrape only from HTTP endpoints. If your ASP.NET project uses different ports for HTTP and HTTPS, be sure to provide the HTTP port.

We have also configured an OTLP receiver to retrieve logs. The default endpoint for the OTLP receiver using the HTTP protocol is http://localhost:4318/v1/logs, which is what is configured for the OTLP exporter on the ASP.NET project. “Localhost” here references the Edge device. The logs are also exported to Application Insights via Azure Monitor. Note: the “localhost” link is part of the tutorial and can only be accessed locally as part of running the tutorial itself.  
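As a sketch of what that config.yaml might contain given the description above: the scrape target is a placeholder for the child device's address, and the exact receiver/exporter keys should be checked against the OpenTelemetry Collector and Azure Monitor exporter documentation.

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: "aspnet-app"
          scrape_interval: 10s
          static_configs:
            # Placeholder: the child device's IP and the app's HTTP port
            - targets: ["192.168.1.50:5000"]
  otlp:
    protocols:
      http:
        endpoint: "0.0.0.0:4318"

exporters:
  azuremonitor:
    # Populated from the module's environment variable (see the .env setup above)
    connection_string: "${APPI_CONNECTION_STRING}"

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [azuremonitor]
    logs:
      receivers: [otlp]
      exporters: [azuremonitor]
```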


Build the Module 

To build the module in Docker Desktop, we will utilize the public OpenTelemetry Collector module image, and then we will copy our configuration file into it to modify the agent behavior. Create a Dockerfile to pull the OpenTelemetry Collector module image and copy over your configuration file. In the Dockerfile, we expose 4318 so that the ASP.NET application can push logs to our OTLP endpoint. The example Dockerfile below assumes that the Dockerfile and the config.yaml file are in the same directory. 

example Dockerfile
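Based on that description, the Dockerfile might look like the sketch below. The base image tag and the in-container config path are assumptions; the contrib distribution of the Collector is shown because it includes the Azure Monitor exporter.

```dockerfile
# Start from the public OpenTelemetry Collector (contrib) image, which bundles
# the Azure Monitor exporter, then overlay our configuration file.
FROM otel/opentelemetry-collector-contrib:latest
COPY config.yaml /etc/otelcol-contrib/config.yaml
# Expose the OTLP/HTTP port so the ASP.NET application can push logs to it
EXPOSE 4318
```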

Within a command prompt, navigate to the directory where your Dockerfile.amd64 file lives. Run the following command to build and push your image to your local Docker container registry. 

PowerShell
docker build -t localhost:5000/opentelemetrycollector:0.0.1-windows-amd64 -f Dockerfile.amd64 . 

Deploy the Edge Device and Collect Logs and Metrics 

After generating your deployment file for the Edge device, right-click the Edge project and select “Build and Run IoT Edge Modules in Simulator.” This will run the OpenTelemetry Collector module from your Edge device on your local instance of Docker Desktop. You should see the logs for the OpenTelemetry Collector module trying to scrape your application metrics. 

deploy edge device

View Metrics and Logs in Application Insights 

Once the Edge device is running, run your ASP.NET project. Navigate to your Application Insights resource in Azure Portal. In the left-hand navigation bar within the Application Insights resource under Monitor, select Logs. Query the logs for “traces” and “customMetrics.”  

Under traces, you will see your application logs appear. 

application logs appear

Under “customMetrics”, you will see your application metrics which includes the example metric “hat_sold_Hats_total.”

application metrics with example

Conclusion 

With this tutorial as an example, you now have the ability to collect metrics and logs via an IoT Edge device from multiple applications! 

Learn more about DMC’s Application Development services and contact us for your next project.

The post How to Integrate Azure IoT Edge, .NET, OpenTelemetry Collector, and Application Insights appeared first on DMC, Inc..

Basic JavaScript Localization Setup Using JSON with Visual Studio Code
https://www.dmcinfo.com/blog/15997/basic-javascript-localization-setup-using-json-with-visual-studio-code/
Mon, 16 Sep 2024 12:55:30 +0000

Frontend localization differs from .NET backend localization in that there is no established “correct way” to translate text in JS and related frameworks. This is a basic example using plain JavaScript, but more customization can be done with methods and helpers for accessing text, especially if you are using a JS framework like React or a component library like Material UI.

Prerequisites

  • Visual Studio Code installed on your machine
  • Node.js version 20.17.0 installed on your machine (other versions of Node will likely work as well)
  • General understanding of navigation in Visual Studio Code IDE

Steps

  1. Open Visual Studio Code and create an empty folder “LocalizationExampleJS”.

  2. Add the file “LocalizationExample.js” at the top level of your new folder.

  3. Add a new subfolder "Resources" with empty files “en-US.json”, “es-MX.json”, and “fr-FR.json”.
  4. Add the following code to each JSON translation file. It’s important for the text keys in all 3 files to be identical – in this case, the sample text we are using has the key “HelloWorld”.

en-US.json

JSON
<code>
{
  "HelloWorld": "Hello World!"
}
</code>

es-MX.json

JSON
<code>
{
  "HelloWorld": "¡Hola Mundo!"
}
</code>

fr-FR.json

JSON
<code>
{
  "HelloWorld": "Bonjour le monde!"
}
</code>
  5. Add the following code to “LocalizationExample.js”. This will enable you to supply a command line argument containing your desired locale string during testing. (Note that in an actual web frontend solution, the frontend code would need to access the current user’s culture from the server, NOT via command line argument.)
JavaScript
<code>
const args = process.argv.slice(2);
const currentCulture = args[0];
console.log("Your supplied culture is: " + currentCulture);
</code>
  6. Below the code to extract current culture from command line arguments, add the following code to access the “HelloWorld” string for the supplied culture. If a culture is supplied that is not present in the application, then an error message will log accordingly, and the application will default to the “en-US” locale.
JavaScript
<code>
var pathToJson = "./Resources/" + args[0] + ".json";
try {
  const jsonObj = require(pathToJson);
  console.log(jsonObj.HelloWorld);
}
catch (err) {
  console.log("Culture \"" + currentCulture + "\" does not exist in this application - using default locale en-US.")
  pathToJson = "./Resources/en-US.json";
  const jsonObj = require(pathToJson);
  console.log(jsonObj.HelloWorld);
}
</code>
  7. Your code is ready for testing! It should look something like this:
JavaScript
<code>
const args = process.argv.slice(2);
const currentCulture = args[0];
console.log("Your supplied culture is: " + currentCulture);

var pathToJson = "./Resources/" + args[0] + ".json";
try {
  const jsonObj = require(pathToJson);
  console.log(jsonObj.HelloWorld);
}
catch (err) {
  console.log("Culture \"" + currentCulture + "\" does not exist in this application - using default locale en-US.")
  pathToJson = "./Resources/en-US.json";
  const jsonObj = require(pathToJson);
  console.log(jsonObj.HelloWorld);
}
</code>
  8. Open a terminal and test out the LocalizationExample solution using the command node .\LocalizationExample.js xx-XX, where “xx-XX” is a locale string representing the combination of a language and a region. Try out “es-MX”, “fr-FR”, “en-US”, and “pt-BR” (Brazilian Portuguese) as command line arguments.

    Note that the HelloWorld resource for the cultures “en-US”, “es-MX”, and “fr-FR” all appear translated as expected. However, the “pt-BR” locale yields the HelloWorld string in English, because this culture is not present in the collection of translations, and our code explicitly defaults to the “en-US” U.S. English locale in this case.

Localization Tips and Tricks

When it comes to localization, there is a lot more to consider than simply translating text and accessing it appropriately from code. Read on to learn some tips and tricks for elevating your localization solution!

Text Key Organization

Generally, DMC has found that it is beneficial to organize text keys into separate files based not only on language, but also based on what page of your application the text belongs on. For backend localization, this is likely best done by splitting text by controller. Here is an example of a backend localization file structure that includes two controllers, “Login.cs” and “Home.cs”, and two cultures, a default culture and “es-MX”:

  • Resources folder
    • Login folder
      • Login.resx
      • Login.es-MX.resx
    • Home folder
      • Home.resx
      • Home.es-MX.resx

This separation organizes your translated text in a way that is easily modifiable for each individual page of your application.

In the event that identical text appears in multiple places in your application in English, it is still beneficial to duplicate that text key/value entry across pages, rather than maintaining a single key/value item. For example, if the word “Email” exists on both the “Login” and “Home” pages in the example above, this text key/value should exist in both the “Login” resource files and “Home” resource files. This enables you to update the text in one place without modifying the same text elsewhere, if desired. Also, translations of the same English word or phrase could be different across different languages, depending on the context of how the word or phrase is used – in this case, separate text/key values are necessary for translators to appropriately translate your application text.

Variables and Pluralization

Variables and pluralization can be quite tricky to handle in any localization solution. For a backend solution with .resx files, if you have a sentence to translate with a variable in the middle, you need to separate any given string key/value into separate parts that precede and follow the variable.

For example, say you want to include the phrase “Hello, {name} – welcome to my site!” The following key/value structure would need to be implemented in each .resx file to handle this phrase:

Key: Value (en-US)

  • Hello_BeforeName: “Hello, ”
  • Hello_AfterName: “ – welcome to my site!”

Additionally, numeric variables add an extra layer of complexity to a localization solution, because pluralization must be implemented for these. The way words are pluralized can actually differ significantly across languages. For example, Slavic languages (e.g. Russian, Czech) have a different style of pluralization for 5 or more of an item than they do for zero, one, or 2-4 items. So a solution that translates to Slavic languages must implement separate logic for pluralizing 5+ items than for other items. For example, the sentence “You have {number} items in your cart!” may have a key/value structure like this in English:

Key                          | Value (en-US)
-----------------------------|----------------------------
ItemsInCart_BeforeValue_0    | "You have "
ItemsInCart_AfterValue_0     | " items in your cart!"
ItemsInCart_BeforeValue_1    | "You have "
ItemsInCart_AfterValue_1     | " item in your cart!"
ItemsInCart_BeforeValue_2to4 | "You have "
ItemsInCart_AfterValue_2to4  | " items in your cart!"
ItemsInCart_BeforeValue_5up  | "You have "
ItemsInCart_AfterValue_5up   | " items in your cart!"

As you can see, this is a complex case to translate for – there are 8 total key/value pairs to accommodate just one sentence that integrates a numeric variable. To avoid the need for this, one solution is to reduce or eliminate the need for accommodating plurals – for example, you could rephrase “You have {number} items in your cart!” to instead become “Number of Items in Cart: {number}”. This would only require one key/value pair to be translated instead of eight.
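The key-selection logic described above can be sketched in a few lines. This is a hypothetical helper, not part of the original post: the key suffixes follow the table above, and the digit-based rules are an approximation of Russian/Czech plural grammar (where, for example, 22 pluralizes like 2, not like 5+).

```typescript
// Hypothetical helper: choose the resource-key suffix for a Slavic-style count.
// Note the category depends on the LAST digits, not the absolute value.
function pluralSuffix(count: number): "0" | "1" | "2to4" | "5up" {
  if (count === 0) return "0";
  const mod10 = count % 10;
  const mod100 = count % 100;
  if (mod10 === 1 && mod100 !== 11) return "1";
  if (mod10 >= 2 && mod10 <= 4 && (mod100 < 12 || mod100 > 14)) return "2to4";
  return "5up";
}

// Assemble the sentence from the Before/After resource values.
function itemsInCart(count: number, res: Record<string, string>): string {
  const s = pluralSuffix(count);
  return res[`ItemsInCart_BeforeValue_${s}`] + count + res[`ItemsInCart_AfterValue_${s}`];
}

// The en-US resources from the table above.
const en: Record<string, string> = {
  ItemsInCart_BeforeValue_0: "You have ",
  ItemsInCart_AfterValue_0: " items in your cart!",
  ItemsInCart_BeforeValue_1: "You have ",
  ItemsInCart_AfterValue_1: " item in your cart!",
  ItemsInCart_BeforeValue_2to4: "You have ",
  ItemsInCart_AfterValue_2to4: " items in your cart!",
  ItemsInCart_BeforeValue_5up: "You have ",
  ItemsInCart_AfterValue_5up: " items in your cart!",
};
```

In JavaScript environments, the built-in Intl.PluralRules API encodes these per-language rules for you (it reports categories such as "one", "few", and "many"), so hand-written logic like this is mainly useful for mapping those categories onto your own key-naming scheme.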

Fallback Cultures

When a user’s specific locale does not have a translation present in your application, it can be beneficial to implement fallback cultures so this user is given a more suitable approximation of their desired language. For example, say a Spanish speaker lives in Spain and therefore has their browser locale set to “es-ES”. However, your application only has resources for Mexican Spanish “es-MX”. Ideally this person in Spain would see the application in Mexican Spanish rather than the application default of English, because Mexican Spanish more closely matches their desired language than English does.

Using .NET backend localization, fallback cultures can be implemented by simply renaming any files ending in “es-MX.resx” to instead end in “es.resx”. Per .NET’s culture fallback scenarios, upon receiving a request from the Spain Spanish user, .NET will still search for “es-ES” first. But upon not finding this specific culture, .NET will “fall back” to just the “es” portion of the locale string and will access the text as it exists in this new general Spanish file. Users using Mexican Spanish “es-MX” will see the same behavior.
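The fallback search just described can be expressed generically. This is an illustrative sketch of the lookup order (exact locale match, then language-only match, then the application default), not .NET's actual implementation:

```typescript
// Sketch of culture fallback: exact locale, then bare language, then default.
function resolveCulture(
  requested: string,
  available: Set<string>,
  fallback = "default"
): string {
  if (available.has(requested)) return requested; // e.g. "es-ES" file exists
  const language = requested.split("-")[0];       // fall back to just "es"
  if (available.has(language)) return language;
  return fallback;                                // base resource file
}
```

With resource sets for "es" and "fr-FR", a request for "es-ES" or "es-MX" resolves to the general Spanish resources, while "de-DE" falls through to the default.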

Common Pitfalls

Localization can be complicated, and there are some common pitfalls you may encounter if you are implementing your own localization solution. Here are some examples:

  • Logging / error messaging: if you are displaying raw exception messages to your users, then these are not able to be localized because they are generally hardcoded in English by the entities providing the error messages. In this case, your application should be updated to display custom error messages to your users instead of raw messages.
  • Identifying EVERY source of user-facing text: There are many possible sources of text in your application that can go unnoticed, which can then become difficult to handle late in the localization process. Some examples are images, email templates, and messages from external libraries/packages/APIs. If you are doing frontend-only localization, then it becomes necessary to make 100% certain that there is NO user-facing text coming from the backend – if you discover late in the process that some user-facing text comes from the server, then some “hacky” solutions may be necessary to preserve the frontend-only localization scheme.
  • Not enforcing a single source of truth for text: If there is not a very solid system in place for establishing and updating the “source of truth” for translated text, then the localization process can quickly become messy. Things are always changing during development, and if there is not an established system to deliver text updates to translators and other stakeholders, then any one party can modify text without other stakeholders’ knowledge and introduce confusion. Enforce a single source of truth for text that all stakeholders can agree on. (Resource files are likely NOT the appropriate single source of truth, since non-developer stakeholders should also have control over text in the application.)

 DMC is a full-service application development company focused on custom web, mobile, cloud, and desktop applications.  Contact us today to discuss your application development needs.

The post Basic JavaScript Localization Setup Using JSON with Visual Studio Code appeared first on DMC, Inc..

Basic Localization Setup in .NET 8 with Visual Studio https://www.dmcinfo.com/blog/16011/basic-localization-setup-in-net-8-with-visual-studio/ Mon, 16 Sep 2024 10:25:01 +0000 https://www.dmcinfo.com/blog/16011/basic-localization-setup-in-net-8-with-visual-studio/ Localization as it pertains to software is a very broad topic, but generally, to “localize” an application means to write or update the user-facing components of an application to simultaneously target users from multiple regions, languages, and/or cultures. Often, software engineers refer to localization to mean simply translating an application into multiple languages. But in […]

The post Basic Localization Setup in .NET 8 with Visual Studio appeared first on DMC, Inc..

Localization as it pertains to software is a very broad topic, but generally, to “localize” an application means to write or update the user-facing components of an application to simultaneously target users from multiple regions, languages, and/or cultures.

Often, software engineers refer to localization to mean simply translating an application into multiple languages. But in reality, localization can be much more involved than basic text translation. When localizing an application, these can all be items a software engineer needs to consider:

  • Translation of user-facing text
  • Date formatting
  • Currency formatting
  • Number formatting
  • Text ordering and direction
  • Images and other content
  • Regional language differences
  • And others

Every localization system is different, but in general, localization solutions should NOT solely consider target languages. Rather, to target a specific group of users, locales should be used. A locale combines language (what language the target users speak) with region (the area of the world in which the target users live). Region and language can be combined to form a locale string, generally in the format “xx-XX”, where “xx” is a two-letter code indicating the language and “XX” is a two-letter code indicating the region. Here are some examples of locale strings:

  • en-US: English as spoken in the United States
  • en-GB: English as spoken in the United Kingdom of Great Britain and Northern Ireland
  • pt-BR: Portuguese as spoken in Brazil
  • pt-PT: Portuguese as spoken in Portugal
  • es-MX: Spanish as spoken in Mexico
  • es-US: Spanish as spoken in the United States

Language abbreviations are standardized using the ISO 639-1 language codes, and country abbreviations are standardized using the ISO 3166-1 alpha-2 country codes. A combination of codes from these two lists forms a complete locale, used to uniquely identify resources used for a language and region.
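In JavaScript environments, the standard Intl.Locale API parses these locale strings into their language and region parts, so you rarely need to split them by hand:

```typescript
// Parse a locale string into its ISO 639-1 language code and
// ISO 3166-1 alpha-2 region code.
const locale = new Intl.Locale("pt-BR");
const language = locale.language; // "pt"
const region = locale.region;     // "BR"
```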

Successfully implementing an efficient and maintainable localization system involves significant upfront planning, for both software solution structure and communication with text stakeholders and translators.

Basic Localization Setup in .NET 8 with Visual Studio

This section will demonstrate how to set up basic localization in .NET 8 and Visual Studio 2022. We will create a .NET 8 solution, and then we will modify the solution to include localization for two additional cultures (Spanish as spoken in Mexico “es-MX” and French as spoken in France “fr-FR”).

Note that programmatically, Microsoft uses the terminology “culture” rather than “locale” to indicate a combination of region and language that an application must be localized to.

Prerequisites

  • Visual Studio 2022 installed on your machine
  • .NET 8.0 SDK installed on your machine
  • General understanding of navigation for Visual Studio IDE

Localization Steps

  1. In Visual Studio 2022, create a new .NET 8 console application named LocalizationExample.

Once the project is created and opened in Visual Studio, it will look like this:

[Screenshot: the newly created project in Visual Studio]
  2. In the Visual Studio Package Manager Console, run the following commands to install the necessary packages:
    1. Install-Package Microsoft.Extensions.Hosting
    2. Install-Package Microsoft.Extensions.DependencyInjection
    3. Install-Package Microsoft.Extensions.Localization
  3. To set up localization with dependency injection, replace the contents of Program.cs with the following lines of code, which add the services required for localizing your application. Note that the specified ResourcesPath is “Resources” – you will add this folder to your project in the next step.
C#
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.DependencyInjection;

using IHost host = Host.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        services.AddLocalization(options =>
        {
            options.ResourcesPath = "Resources";
        });
    })
    .Build();
Console.WriteLine("Hello, World!");

 

  4. In the LocalizationExample project, add a new folder called “Resources”. This directory is where the localization services are set up to search for the localization resource files that will contain your localized text.
  5. In the “Resources” folder, add a new item using the “Resources” type template. Name it “SampleResource.resx”.

When “SampleResource.resx” opens, you should see a table open with the columns “Name”, “Value”, and “Comment”. You will also see a dropdown at the top of the file with “Strings” selected – for this exercise you will preserve the “Strings” option, but other resource types can be localized by changing this dropdown. The resource file columns serve the following purposes:

  1. Name: a string key that uniquely identifies the piece of text the entry contains, shared across the family of .resx files prefixed with “SampleResource”. Note that it cannot begin with a number and cannot contain spaces or most special characters.
  2. Value: The string value that corresponds to the Name key for the culture associated with this resource file.
  3. Comment: An optional text entry to clarify information about the resource entry.
[Screenshot: the “SampleResource.resx” resource table]
  6. Repeat step 5 to create the files “SampleResource.es-MX.resx” and “SampleResource.fr-FR.resx”.
  7. Add the following entries to the .resx files created in steps 5-6. These values are “Hello World!” in different languages. Note that the file SampleResource.resx is the default resource file – if a culture is selected that does NOT match “fr-FR” or “es-MX” when the resource is accessed, then the application will default to the English value stored in SampleResource.resx.

File                      | Name       | Value             | Comment
--------------------------|------------|-------------------|-----------------------------
SampleResource.resx       | HelloWorld | Hello World!      | Hello World in English
SampleResource.es-MX.resx | HelloWorld | ¡Hola Mundo!      | Hello World in Spanish (MX)
SampleResource.fr-FR.resx | HelloWorld | Bonjour le monde! | Hello World in French (FR)

  8. Add the following code to Program.cs under the existing usings. This will enable the desired culture to be supplied as a command line argument (the System.Globalization using provides the CultureInfo type):
C#
using System.Globalization; 
if (args is { Length: 1 })
{
    CultureInfo.CurrentCulture =
        CultureInfo.CurrentUICulture =
            CultureInfo.GetCultureInfo(args[0]);
}

 

  9. In Program.cs, replace the line Console.WriteLine("Hello, World!"); with Console.WriteLine(SampleResource.HelloWorld);. Your Program.cs should now look like this:
[Screenshot: Program.cs after the code replacement]
  10. Build the solution by right-clicking the solution in the Solution Explorer and selecting “Build Solution”.
  11. Open the .NET CLI / Terminal through the View menu.
  12. Finally, test the LocalizationExample solution in the newly opened terminal using the command dotnet run --project LocalizationExample.csproj xx-XX, where “xx-XX” is a combination of a language and a region. Try “es-MX”, “fr-FR”, “en-US” (U.S. English), and “pt-BR” (Brazilian Portuguese) as command line arguments.

Note that the HelloWorld resource for the cultures “es-MX” and “fr-FR” appears translated as expected in both cases. When .NET reaches into the Resources folder to obtain the “HelloWorld” resource, it uses the culture string associated with the stored CultureInfo.CurrentCulture to access the appropriate .resx file. However, the “en-US” and “pt-BR” cultures both yield the HelloWorld string in English, because neither of these cultures is explicitly provided for the SampleResource resource, so .NET defaults to the base SampleResource.resx.


DMC is a full-service application development company focused on custom web, mobile, cloud, and desktop applications.  Contact us today to discuss your application development needs.

The post Basic Localization Setup in .NET 8 with Visual Studio appeared first on DMC, Inc..

Why DMC is Adopting Next.js for Web Application Development https://www.dmcinfo.com/blog/16026/why-dmc-is-adopting-next-js-for-web-application-development/ Thu, 22 Aug 2024 15:40:27 +0000 https://www.dmcinfo.com/blog/16026/why-dmc-is-adopting-next-js-for-web-application-development/ Introduction For the last seven years, DMC’s go-to stack for web application development has been a React SPA (single-page application) front-end with an ASP.NET Core REST API. Although that will still be a good fit for some projects, DMC has chosen to expand our toolset and adopt Next.js for developing new web applications. Why choosing the right […]

The post Why DMC is Adopting Next.js for Web Application Development appeared first on DMC, Inc..

  • Introduction
  • Overview of Next.js
  • Performance Improvements
  • Developer Experience
  • Case Studies and Success Stories
  • Future Outlook
  • Conclusion
    Introduction

    For the last seven years, DMC’s go-to stack for web application development has been a React SPA (single-page application) front-end with an ASP.NET Core REST API. Although that will still be a good fit for some projects, DMC has chosen to expand our toolset and adopt Next.js for developing new web applications.

    Why choosing the right framework is important

    When developing a new web application, your choice of framework or tech stack can have enormous and wide-ranging implications, and the benefits and consequences of this decision can impact your business years later. These implications include, but are not limited to:

    • Performance. Page load times and smoothness of transitions in the UI can both be affected by your framework choice.
    • Maintenance. Choosing an outdated framework, or even choosing a modern framework that’s a poor fit for your application, can result in future feature requests taking much longer to implement – e.g., a one-hour development task could become an eight-hour task in extreme cases.
    • Scalability. Some frameworks can limit your options for scaling as your userbase grows. For example, a Blazor Server application may make horizontal scaling more difficult. Also, some ORMs will have issues once your data reaches a certain scale.
    • Talent. If you choose an unpopular framework, or continue holding onto an outdated one, it can be difficult to hire, train, and retain good developer talent. After all, not many developers want to work on a no-framework PHP app or an ASP.NET Web Forms app in 2024.
    • Support. This is where popularity comes into play, because it tends to be easy to find information about how to solve a specific problem with a popular framework, whereas trying to find information on how to solve a specific problem with an unpopular framework can be nearly impossible.

    So when developing a new web application, you should look for a framework that is:

    • Performant, either in general or at least for your use case.
    • Offers a good developer experience. This helps make maintenance easier and helps attract and retain talent.
    • Scalable.
    • Popular, or at the very least not unpopular.

    With those points in mind, DMC has chosen to add Next.js to our web development toolbelt and officially adopt it for use on new projects. We have done several pilot projects with Next.js and we feel it’s a good, balanced approach that will serve a majority of projects well.

    Overview of Next.js

    What is Next.js?

    Next.js is a full-stack React framework that has actually been around for a long time (its 1.0 release was in 2016), but its popularity and profile in the React community have exploded since the big App Router update shipped in May 2023. That version of Next.js became the first React framework to implement React Server Components and Server Actions, and as a result the official React documentation even started recommending Next.js for new projects.

    Key Features of Next.js

    Next.js offers features that make it stand out from other web development frameworks.

    Routing

    Most web development frameworks offer some sort of routing system, which enables you to connect a URL route (e.g. “/employees/123/edit”) to the piece of code responsible for responding to requests for that URL. Next.js offers a particularly effective routing system. It’s file-based, which means that requests to your web application will be routed to a given React component based on the names of its folders and files.

    For example, in the file structure depicted in the below screenshot, a request to “/admin/addCompetency” will be handled by the file “app/admin/addCompetency/page.tsx”. This is intuitive and it enforces a disciplined approach to organizing your code files. If a new developer on the team is assigned a task to troubleshoot that “/admin/addCompetency” page, they know exactly where to start.

    [Screenshot: example of folder-based routing in Next.js]

    But that’s only a small part of what Next.js’ routing system offers us. In the above screenshot, you can see an “error.tsx” file. By including this file in this folder, we automatically wrap our page in an error boundary. Any uncaught exceptions/errors on the “/admin/addCompetency” page will be handled by the component defined in “error.tsx”, and a fallback UI will be displayed.

    Additionally, we can put a “loading.tsx” file in that folder, and it will cause a fallback UI (loading indicator) to be displayed while we wait for that portion of the page to finish loading. This feature allows us to implement loading UI and streaming with almost no effort.

    Next.js’ routing system also offers dynamic routes, parallel routes, intercepting routes, and other features that make it easy to compose a complex UI while keeping our codebase simple and well-organized. Read more about routing in Next.js.
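    The core file-based convention can be approximated in a few lines. This is an illustrative sketch of the static-segment case only (real Next.js routing also handles dynamic segments, route groups, and the other features mentioned above):

```typescript
// Map a URL path to the page file a Next.js-style router would resolve it to
// (static segments only; "page.tsx" is the App Router's page-file convention).
function routeToPageFile(urlPath: string): string {
  const segments = urlPath.split("/").filter(Boolean);
  return ["app", ...segments, "page.tsx"].join("/");
}
```

    So a request to “/admin/addCompetency” maps to “app/admin/addCompetency/page.tsx”, and the site root “/” maps to “app/page.tsx”.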

    Server Components

    I’ve been using React for web application development since 2016. And when I first started, I distinctly remember thinking, “Okay, but how do we get data from the server into our React components?” The official answer from React back then was 🤷‍♂️

    In 2016, the common way to load data into a React component (without third-party libraries for data fetching) was to do a fetch call from within your component’s “componentDidMount” lifecycle method. Honestly, it was kind of gross. With React 16.8 things got a little better with the useEffect hook. And of course, today, in a React SPA the best way to load data from the server into a component is with a third-party library like Tanstack Query (react-query).

    But React has never offered a first-class method of getting data from the server and giving it to our React components. Until now. React has introduced the concept of React Server Components (RSC), and Next.js is the first React framework to make this feature available. With RSCs, we can write components that run on the server. The components can be defined as async functions, and therefore we can await asynchronous operations in them.

    For example, here’s a server component for a select list of “categories”. I’ve defined this as an async function, so it can only run on the server, but this allows us to get data directly from our database (which is what db_getCategories() does in this example). Because we await the result, we can guarantee that by the time we reach the next line of our code, we have the data.

    TypeScript
    import db_getCategories from "@/database/queries/db_getCategories";
    import linq from "linq";
    
    export async function CategorySelect() {
      const categories = await db_getCategories();
      const orderedCategories = linq
        .from(categories)
        .orderBy((c) => c.Name)
        .toArray();
      return (
        <select name="category_id">
          {orderedCategories.map((category) => (
            <option key={category.Id} value={category.Id}>
              {category.Name}
            </option>
          ))}
        </select>
      );
    }

    This results in a very simple component: first we get the data, then we render the data as JSX. No state, no effects, no memoization – just simple, straightforward logic that’s easy for a developer to understand.

    But what sets React Server Components apart from other server-side rendering solutions is that they can be used to stream parts of the UI to the web browser. So as opposed to the good old days when we used to render the HTML for an entire web page on the server before sending it to the web browser, we can leverage server components along with React Suspense to stream our UI in chunks to the user complete with loading indicators:

    [Screenshot: code sample wrapping a server component with React Suspense]

    Additionally, because the output of these server components is a minimal data format called the RSC Payload, rather than rendered HTML, server components do not have a significant impact on a web application’s I/O throughput and therefore should not have a negative impact on scalability in most cases.

    This is incredibly powerful. And to someone who’s been developing React applications for years, this feels like someone’s finally given me the keys to a car that I’ve just been hotwiring until now.

    Server Actions

    Server Actions are similar to Server Components in that they’re a React feature that Next.js has been the first framework to incorporate. Whereas React Server Components are designed to make loading/fetching data easier, Server Actions are designed for mutating data.

    Server Actions are defined similarly to any other JavaScript function, except with a “use server” directive added to signal that they’re Server Actions. Next.js then takes these functions and essentially turns them into HTTP endpoints for us, and by calling these functions from a client component we automatically get an HTTP request sent to that endpoint.

    Here’s an example Server Action that takes some simple parameters and makes a corresponding update in the database (db_updateCompetencyForEmployee()) based on those parameters and the authentication information from the request (await getUserSession()).

    TypeScript
    "use server";
    
    export async function updateCompetencyForEmployee(
      competencyId: number,
      proficiencyId: number,
      learningPreference: boolean,
      notes: string | null
    ) {
      const userSession = await getUserSession();
      if (!userSession) {
        return { success: false, error: "User not logged in" };
      }
      const onprem_sid = userSession.user.onprem_sid;
      if (!onprem_sid) {
        return {
          success: false,
          error: "User session does not contain onprem_sid",
        };
      }
      const employee = await db_getEmployeeByOnpremSid(onprem_sid);
      if (isNaN(competencyId) || isNaN(proficiencyId) || !employee) {
        return {
          success: false,
          error: "Invalid competencyId, proficiencyId, or employeeId",
        };
      }
      await db_updateCompetencyForEmployee(
        employee.Id,
        competencyId,
        proficiencyId,
        learningPreference,
        notes
      );
      return { success: true };
    }

    Then from a client component, we can call this Server Action function directly, and under the hood React/Next.js will make an HTTP request to the server with those parameter values in the body of the request.

    TypeScript
    <label htmlFor={`${fieldPrefix}-proficiency-${competencyId}`}>
      <input
        type="radio"
        onClick={async () => {
          const response = await updateCompetencyForEmployee(
            competencyId,
            proficiencyId,
            props.learningPreference,
            props.notes
          );
          if (!response.success) {
            alert(`ERROR: ${response.error}`);
          }
        }}
      />
      {proficiencyName}
    </label>

    One way to think of Server Actions is that they’ve made it far easier to cross the barrier between the client and the server.

    A really nifty way to use Server Actions is to provide them to the “action” attribute of a <form> element like so:

    TypeScript
    export function AddCategoryForm() {
      return (
        <form action={addCompetencyCategory}>
          <FormLabel htmlFor="add_category">
            Add Category
            <input
              type="text"
              name="add_category"
              placeholder="New Category Name"
            />
          </FormLabel>
          <FormSubmitButton pendingText="Saving..." type="submit">
            Add Category
          </FormSubmitButton>
          <p>
            <FormStatusMessageFromUrl />
          </p>
        </form>
      );
    }
    

    With this simple form, we don’t need to manage state at all! This is very similar to plain HTML form handling, where pressing the submit button triggers the values of the form’s fields to be bundled up and sent to the provided URL as an HTTP request.

    Here’s the server action being used in the above example:

    TypeScript
    "use server";
    // ...imports...
    export async function addCompetencyCategory(formData: FormData) {
      // authorization
      if (!(await getUserIsAdmin())) {
        redirect(
          "/admin?error=" +
            encodeURIComponent("You must be an admin to add a category")
        );
      }
      // form data validation with ZOD
      const { data, success, error } = await schema.safeParseAsync(formData);
      if (!success) {
        const errorMessages = error.issues.map((i) => i.message).join("\n");
        // redirect with error message if failed validation
        redirect("/admin?error=" + encodeURIComponent(errorMessages));
      }
      const { add_category: name } = data;
      // save to database
      await db_addCompetencyCategory(name);
      revalidatePath("/admin");
      // redirect to "success" page
      redirect(
        "/admin?message=" + encodeURIComponent("Category added successfully")
      );
    }

    An interesting benefit of this approach is that the form submission is fully functional even if the user has JavaScript disabled. We don’t need that for most projects, but for those few projects where we need the application to remain functional with JavaScript disabled, this is an enormous benefit. If the user does have JavaScript enabled, though, Next.js automatically transforms the behavior of this simple form to act like a form in a modern SPA, because once the React component tree hydrates, the client-side behavior gets enabled. This is called progressive enhancement.

    API routes

    In addition to its full-stack capabilities, Next.js allows us to define more traditional endpoints via “route handlers”. These use the same kind of file-based routing as the rest of the framework, but route handlers allow us to return any kind of HTTP response we want in response to a request. For example, this GET route handler at “/api/projects/{projectId}/expenseReceipts/projectFiles” returns a file that can be downloaded by the browser:

    TypeScript
    export async function GET(
      request: Request,
      { params }: { params: { projectId: string } }
    ) {
      if (!getUserIsLoggedIn()) {
        return new Response("Unauthorized", { status: 401 });
      }
      const acumaticaApiEndpoint = `${process.env.ACUMATICA_API_URL}/ExpenseReceipt/ProjectFiles?projectId=${params.projectId}`;
      const acumaticaResponse = await fetch(acumaticaApiEndpoint);
      if (acumaticaResponse.ok) {
        const bodyAsBuffer = await acumaticaResponse.arrayBuffer();
        return new Response(bodyAsBuffer, {
          headers: {
            "Content-Type": "application/octet-stream",
            "Content-Disposition": `attachment; filename="projectFiles.zip"`,
          },
        });
      }
      // Surface upstream failures instead of falling through with no response
      return new Response("Failed to fetch project files", {
        status: acumaticaResponse.status,
      });
    }

    An advantage of this is that if we have a separate mobile or desktop application that needs to consume endpoints like these, we don’t need a separate REST API to serve those applications. It can make sense to have a separate REST API anyway, but it’s not strictly necessary.

    Performance Improvements

    Faster Load Times

    Thanks to server-side rendering (SSR) with React Server Components, initial load times in a Next.js application can be much faster than the initial page load time of a React SPA. Whereas with an SPA we have to wait for the JavaScript bundle (which is usually hundreds of KB and can sometimes be multiple MB) to download before the user sees any UI, in a Next.js app the initial request for the web page gets fully rendered HTML as a response. Taking a current project as an example, loading the home page is only a 7.4 KB download versus 200 KB if I were developing it as an SPA. This tends to result in a much better score for First Contentful Paint (FCP), an important Core Web Vitals performance metric.

    And depending on how much content you’re able to render on the server for your application, you can even end up seeing the remarkable result of FCP and LCP (Largest Contentful Paint) being the same number! When this happens, it means the very instant that the user first sees content/UI for the web page, they see the entire UI.

    [Screenshot: Core Web Vitals sample for a real Next.js application]

    Caching

    One of the reasons it’s easy to develop a highly-performant application with Next.js is because it offers robust caching options. Next.js with app router has four different caching mechanisms, all of which are enabled by default (Next.js 15 will make some of these opt-in).

    • Request Memoization – If you make multiple fetch requests to the same URL in multiple places in a React component tree, Next.js will only make the request once
    • Data Cache – Same as Request Memoization, but is persistent across multiple requests to our Next.js application, e.g. multiple requests for the same web page
    • Full Route Cache – If a given server component or route handler does not use headers or cookies to determine its output, then the HTML output and/or React Server Component Payload (or HTTP response in the case of route handlers) will be cached on the server as static content until something happens to “revalidate” it.
    • Router Cache – This is the client-side caching mechanism in Next.js. It’s time-based, and by default, if a user has visited a given route in the last 30 seconds, navigating back to that route again within 30 seconds will serve up the same React component tree from the last visit and won’t make a request to the server at all.
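    The first of these, Request Memoization, is conceptually just a cache keyed by URL. Here is a deliberately simplified, synchronous sketch of the idea (real Next.js memoizes actual fetch promises and scopes the cache to a single render pass):

```typescript
// Simplified request memoization: repeated lookups for the same URL hit the
// underlying "network call" only once. fetchCount is exposed for illustration.
const cache = new Map<string, string>();
let fetchCount = 0;

function memoizedFetch(url: string): string {
  if (!cache.has(url)) {
    fetchCount++; // only incremented on a cache miss
    cache.set(url, `response for ${url}`); // stand-in for a real network call
  }
  return cache.get(url)!;
}
```

    With this in place, three components asking for “/api/categories” during one render produce a single underlying request, which is exactly what makes it safe to fetch data wherever it is needed in the component tree.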

    Automatic Code Splitting

    Code splitting involves splitting up your JavaScript bundle so that the user’s browser only downloads enough JavaScript to render the currently viewed web page. In other words, if there’s only one page in our app that utilizes the react-query package, we can use code splitting to ensure that the web browser doesn’t need to download react-query’s bundled JavaScript until they visit that specific page – visiting other pages will not trigger that chunk of JavaScript to be downloaded.

    This is something we’ve been able to do in React for a very long time now, but it’s fairly involved and easy to get wrong. So the fact that Next.js does this automatically for us, and does a good job of it, makes it that much easier to develop an app that delivers as fast and performant an experience as possible.

    Improved SEO

    Although many of the web application projects we work on are not concerned with SEO, some are. For those web applications where good SEO (Search Engine Optimization) is required, or at least desired, Next.js helps us achieve better SEO through the following mechanisms:

    • Search engine crawlers are better able to fully process the contents of a web page that was rendered on the server than those that are rendered on the client with JavaScript. Next.js enables us to render all our content on the server, and this in turn boosts SEO.
    • Performance is a factor in SEO. The faster a web page loads, the higher search engine ranking it will get. Next.js’ many mechanisms for improving performance also result in better SEO.
    • Because most of your links will be rendered on the server with server components, search engine crawlers will have an easier time indexing all the pages on your web site or web application.

    A takeaway here is that, if SEO is important for your web application or website, Next.js is one of your best framework options.

    Developer Experience

    Developer experience is a broad term that covers workplace culture, management process, and workflow, but for the sake of this article we’re going to focus on one aspect of developer experience: tools.

    If you’re a contractor who builds houses but you restrict your team from using power tools, your employees are not going to enjoy working for you, and, equally important, it will take you more hours and money to complete a project. Similarly, the tools a software development team uses can have an enormous impact on their productivity. A toolset that’s easy to work with and fits the project’s specific needs can result in us shipping more features for our client’s web application for less money, which means we deliver more value to our clients.

    So let’s look at some of the ways that Next.js improves the developer experience.

    Simpler code

    Two of Next.js’ big features – Server Actions and React Server Components – streamline the codebase of a React app in a very big way. I did a demo a while back where I developed the same form two ways: with a plain React component using hooks for state management, and with server components and server actions. The difference was bigger than I’d expected – the traditional React component was 125 lines of code whereas the Next.js version was only 29 lines of code. In the screenshot below, you can see the “plain React” version on the left and the Next.js version on the right.

    Comparison between a plain React component and the same component written in Next.js


    It should be noted that neither version of this component utilizes any third-party libraries – only built-in React features. The version on the right simply uses a handful of custom, reusable components, each only a few lines of very simple code. Both versions of the form component have the same behavior: a loading indicator for the select list of categories, a submit button that is temporarily disabled while the form submission is pending, a display for error messages, and a “form saved successfully” status message.

    Not only does the Next.js version of this form component take much less effort to develop, but the code is also much more straightforward – it’s purely declarative and contains no imperative logic whatsoever. That almost completely eliminates the potential for buggy behavior (at least on the client-side), but it also makes it very easy to modify and maintain this component in the future. A developer can jump into this code file and understand everything about it with a quick glance.

    This has the potential to greatly reduce the amount of effort, and therefore money, required to build a web application. The key word there is potential – an app that requires a large amount of very intricate client-side interactivity may not see as much benefit in terms of the simplicity of the code base and the effort required to develop the app.

    Enhanced Developer Tools

    Hot reloading has become a staple of front end development. When I save a change to a file in my application, I want to see that change reflected in my web browser without needing to restart the application or even reload my browser. Next.js provides a particularly smooth hot reloading experience. If I save a change to a server component, hot reload will cause that server component to automatically be requested and re-rendered again, updating the UI instantaneously for all intents and purposes. The same is true with client-side components. This full-stack hot reload makes testing changes during development a breeze.

    Also, I’ve found that generative AI tools like GitHub Copilot and ChatGPT are particularly effective at generating code for server components and server actions. Similarly, the server side of a Next.js app will use Node.js APIs and libraries, which generative AI also has an easy time with. So tools like GitHub Copilot can enhance a Next.js developer’s productivity even further, which again, helps us deliver more value to our clients when working on a web application.

    Rich Ecosystem and Community Support

    The old adage about popularity contests being bad does not apply to software development. The more popular a programming language or framework is:

    • The easier it is to recruit new developers who are already familiar with it.
    • The easier it is to find libraries or packages for that framework/library.
    • The easier it is to find answers to problems or questions via a Google search or a StackOverflow question.

    Next.js with the app router is picking up in terms of popularity, but it also benefits from the fact that Node.js and React are both incredibly popular. In the 2023 StackOverflow Developer Survey, the two most popular web technologies by a mile were Node.js and React:

    StackOverflow developer survey results showing both Node.js and React as the most popular technologies, with more than 40% of respondents using them

    The takeaway here is that it is very easy to find support, recruits, and libraries when working with Next.js, more so than with any other full-stack web framework. One caveat is that there are a lot of React component libraries/packages out there that have not been updated to work in React Server Components, and may never receive such an update. However, any existing third-party React component will work just fine if used from a client component. Some components will need to be imported in a specific way, but as an extreme example, I was able to use a very client-heavy React component from an NPM package that hasn’t been updated since 2019, and it worked just fine. There is some misinformation out there around this point. The most you’ll have to do to make this work is something like the following when importing a component:

    TypeScript
    import dynamic from "next/dynamic";

    const ColorThemePicker = dynamic(() => import("./colorThemePicker"), {
      ssr: false,
    });

    Case Studies and Success Stories

    Industry Examples

    Working with Next.js, we’re in good company. The following is a small sample of businesses and organizations that have found success with Next.js.

    Internal Benefits

    DMC has been piloting Next.js for a year now, and it’s helped ensure success on several projects:

    • DMC has already leveraged Next.js (with App Router) to help a client with a rewrite of their order management web portal. Next.js’ streamlined developer experience and robust features allowed us to complete this project well under budget and deliver a web application that our client was happy with.
    • We wrote a new internal application (budget tracking app) using Next.js, again with the App Router.
    • We rewrote several internal web applications using Next.js, and the result for each application was a more performant application with a smoother UX and a simpler codebase.

    Future Outlook

    Upcoming Features and Roadmap

    Next.js represents the cutting edge of web application development technology. Moving forward, the Next.js team plans to improve integration with React 19 features and to ship support for the React Compiler, further improving the productivity of our web development team and allowing us to achieve more functionality with less code. Next.js is also continually improving its caching features and implementing partial prerendering, so the framework aims to become ever more performant as time goes on.

    DMC looks forward to leveraging Next.js and the App Router to deliver more value to our clients by efficiently building better web applications.

    Ongoing Learning and Adaptation

    As with any other technology, Next.js has a learning curve. Next.js is an opinionated framework, and our senior application developers have invested time in learning how to work with those opinions rather than against them. Our next steps are to disseminate that knowledge throughout our team so that we can apply best practices to the web application projects we take on for our clients.

    Aside from internally developed training content, the best learning resources for the Next.js App Router are:

    Conclusion

    Summary of Key Points

    To recap, Next.js is an excellent tool for us to utilize on web application projects. Aside from objective technical advantages like performance and SEO, Next.js’ core features allow us to develop a given web application with simpler code and less code, and will often produce a codebase that is easier to maintain over time. The end result of all of this is that our clients end up getting more bang for their buck on those projects that are a good fit for Next.js.

    To learn more about the benefits of using Next.js for web development, check out the Next.js website, and to learn how to use Next.js check out the official documentation.

    If you’re looking to develop a new application or rewrite an existing web application, DMC’s expert web developers can help you get up and running fast with a high-quality web application that serves your business needs. Read more about our web application development services, and contact us today to explore how DMC can help you.

    The post Why DMC is Adopting Next.js for Web Application Development appeared first on DMC, Inc..

    Embedding Private Media Files Securely in a React Frontend with Amazon S3 and AWS Lambda https://www.dmcinfo.com/blog/16076/embedding-private-media-files-securely-in-a-react-frontend-with-amazon-s3-and-aws-lambda/ Wed, 31 Jul 2024 18:30:36 +0000 https://www.dmcinfo.com/blog/16076/embedding-private-media-files-securely-in-a-react-frontend-with-amazon-s3-and-aws-lambda/ Amazon Simple Storage Service (S3) is a low-cost service for storing and serving unstructured data, which makes it perfect for hosting any media that will be displayed or referenced in your React frontend (PDFs, images, etc.). However, if the media fetched by your frontend is read-restricted to only authenticated users of your application, then embedding this […]

    The post Embedding Private Media Files Securely in a React Frontend with Amazon S3 and AWS Lambda appeared first on DMC, Inc..

    Amazon Simple Storage Service (S3) is a low-cost service for storing and serving unstructured data, which makes it perfect for hosting any media that will be displayed or referenced in your React frontend (PDFs, images, etc.). However, if the media fetched by your frontend is read-restricted to only authenticated users of your application, then embedding this content gets tricky. In this post, I’ll expand on my previous post about externally authenticated access to private S3 objects with an outline for a generalized solution to embedding private S3 content into your React components.

    Object Retrieval Pattern

    We’ll be using presigned S3 object URLs to fetch our private Amazon S3 data. See my previous post linked above to learn more about this design. Below is a diagram visualizing the dataflow of authorizing the React frontend to fetch an image file, using presigned URLs.

    Initial Implementation

    Let’s set up a simple component with a single <img> element that we’d like to link to our desired resource, with the resource URL passed as a prop to the component.

    JavaScript
    const S3Image = ({ objectUrl }) => {
        return <img src={objectUrl} alt="Image" />;
    }

    export default S3Image;

    If we want to fetch the presigned URL for the corresponding S3 object, we need to ensure that the URL embedded in the <img> element stays up to date with the object specified in the props. We’ll pass the object’s path (its S3 key) as an objectPath prop, set up some local state to store the signed URL, and add a useEffect that fetches the URL whenever the objectPath prop changes.

    JavaScript
    import React, { useState, useEffect } from 'react';

    const S3Image = ({ objectPath }) => {
        const [signedUrl, setSignedUrl] = useState('');

        useEffect(() => {
            // useEffect callbacks can't be async, so define and invoke an async helper
            const fetchSignedUrl = async () => {
                try {
                    const response = await fetch('/get-signed-url', {
                        method: 'POST',
                        headers: { 'Content-Type': 'application/json' },
                        body: JSON.stringify({ objectPath }),
                    });
                    const data = await response.json();
                    setSignedUrl(data.url);
                } catch (error) {
                    console.error('Error fetching signed URL:', error);
                }
            };
            fetchSignedUrl();
        }, [objectPath]);

        return <img src={signedUrl} alt="Image" />;
    };

    export default S3Image;

    For simplicity, we’ll handle the signed URL creation in a Lambda function running Node.js. This assumes that an API Gateway is set up to proxy the HTTP request from the React app to this Lambda function, but this pattern could be implemented with other AWS compute resources/VPC connection methods if more applicable. To see how to implement this API Gateway + Lambda stack, see my blog on Cloud Deployments in Minutes with Serverless Framework and AWS Lambda.

    We’ll use the AWS SDK to generate a presigned URL for this image file.

    JavaScript
    const AWS = require('aws-sdk');

    AWS.config.update({
        accessKeyId: '...',     // IAM user credentials that have access to the S3 bucket
        secretAccessKey: '...', // IAM user secret key
        signatureVersion: 'v4',
        region: '...',          // enter your region here
    });

    const s3 = new AWS.S3();

    module.exports = async (event) => {
        try {
            // get request parameters
            const body = JSON.parse(event.body);

            if (!(body.objectPath?.length)) {
                return { statusCode: 400, body: JSON.stringify({ error: "Invalid object path" }) };
            }

            const url = s3.getSignedUrl('getObject', {
                Bucket: '...', // specify the bucket we're pulling from
                Key: body.objectPath,
                Expires: 60,   // lifetime of the URL, in seconds (example value)
            });

            return { statusCode: 200, body: JSON.stringify({ url }) };
        } catch (error) {
            return { statusCode: 500, body: JSON.stringify(error) };
        }
    };

    And there it is! With this implementation alone, you’ll be able to display that image in your app, even though it’s not publicly accessible in S3.

    Caveat – Signed URL Expiration

    There’s one large issue with this implementation: the presigned URLs expire! Each presigned URL is generated with a configurable lifespan, and once that time has elapsed, requests using the URL will fail. This is to prevent persisting public access to the object. Remember, anyone with the presigned URL can use it, as long as it has not expired!

    This will not be an issue for any media fetched and displayed in the browser on page load (like our image component above), but what if you want to provide a download link to the image file? The user may linger on the page after it’s initially loaded and only request the file from S3 by clicking the link after the signature has expired. This leads to a nasty error screen and is probably not what your users want to see.
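Incidentally, a client can detect this state locally: a SigV4 presigned URL carries its signing time and lifetime in its query string as the standard X-Amz-Date and X-Amz-Expires parameters. Here’s a small sketch (my own helper, not part of the AWS SDK) that checks whether such a URL is already stale:

```javascript
// Check whether a SigV4 presigned URL has expired, using the standard
// X-Amz-Date (signing time, e.g. "20240731T181530Z") and X-Amz-Expires
// (lifetime in seconds) query parameters.
function isPresignedUrlExpired(url, nowMs = Date.now()) {
  const params = new URL(url).searchParams;
  const signedAt = params.get("X-Amz-Date");
  const expiresParam = params.get("X-Amz-Expires");
  const m = /^(\d{4})(\d{2})(\d{2})T(\d{2})(\d{2})(\d{2})Z$/.exec(signedAt || "");
  if (!m || expiresParam === null) return true; // treat unknown as expired
  const signedMs = Date.UTC(+m[1], +m[2] - 1, +m[3], +m[4], +m[5], +m[6]);
  return nowMs >= signedMs + Number(expiresParam) * 1000;
}
```

A download link could use a check like this to request a fresh URL on click instead of handing the user an expired one.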

    So, we don’t want to let presigned URLs embedded into the page expire, but we also don’t want to disable the presigned URL’s timeout. To get around this, we’ll add logic to periodically re-generate the presigned URL for the given asset. Below is our component, updated with the logic to re-fetch the URL periodically (using the useInterval custom hook from the use-interval package).

    JavaScript
    import React, { useState, useCallback } from 'react';
    import useInterval from 'use-interval';

    const S3Image = ({ objectPath }) => {
        const [signedUrl, setSignedUrl] = useState('');

        // define the function used to fetch the signed URL
        const getSignedUrl = useCallback(async () => {
            try {
                const response = await fetch('/get-signed-url', {
                    method: 'POST',
                    headers: { 'Content-Type': 'application/json' },
                    body: JSON.stringify({ objectPath }),
                });
                const data = await response.json();
                setSignedUrl(data.url);
            } catch (error) {
                console.error('Error fetching signed URL:', error);
            }
        }, [objectPath]);

        // call the function defined above every second
        useInterval(getSignedUrl, 1000, true);

        return <img src={signedUrl} alt="Image" />;
    };

    export default S3Image;

    This implementation can be further hardened by modifying our Lambda function to return the URL’s lifetime along with the URL itself, so that the client knows exactly when it will need to regenerate it. That said, regenerating a URL is neither time nor compute intensive, so simply regenerating more often than any timeout you would realistically configure is most likely a reliable solution.
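As a sketch of that refinement, suppose the Lambda responds with { url, expiresAt } (an assumed response shape for illustration, not something AWS returns directly). The client can then compute how long to wait before the next regeneration instead of polling every second:

```javascript
// Compute how long to wait before regenerating a presigned URL, given the
// expiry time reported by the server. expiresAtMs is an assumed response
// field, not part of the AWS API.
function refreshDelayMs(expiresAtMs, nowMs = Date.now(), safetyMarginMs = 5000) {
  // Refresh a little before expiry; clamp to zero if already (nearly) expired.
  return Math.max(0, expiresAtMs - nowMs - safetyMarginMs);
}
```

The component could then replace the fixed one-second interval with setTimeout(getSignedUrl, refreshDelayMs(expiresAt)).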

    Abstracting as a Custom Hook

    As a one-off use of this procedure, this implementation is perfectly suitable now. However, this logic would muddy your state management design if added in multiple locations or more complex components. To avoid this, let’s abstract this logic into a custom hook that provides the presigned URL for the S3 object path passed to it.

    JavaScript
    import { useState, useCallback } from 'react';
    import useInterval from 'use-interval';

    export const useSignedUrl = (objectPath) => {
        const [signedUrl, setSignedUrl] = useState('');

        const getSignedUrl = useCallback(async () => {
            try {
                const response = await fetch('/get-signed-url', {
                    method: 'POST',
                    headers: { 'Content-Type': 'application/json' },
                    body: JSON.stringify({ objectPath }),
                });
                const data = await response.json();
                setSignedUrl(data.url);
            } catch (error) {
                console.error('Error fetching signed URL:', error);
            }
        }, [objectPath]);

        useInterval(getSignedUrl, 1000, true);

        return signedUrl;
    };

    This simplifies our component definition significantly.

    JavaScript
    import { useSignedUrl } from './useSignedUrl';

    const S3Image = ({ objectPath }) => {
        const signedUrl = useSignedUrl(objectPath);
        return <img src={signedUrl} alt="Image" />;
    };

    export default S3Image;

    With this, we have a clean, generalized solution for embedding restricted S3 content in our components.

    Learn more about DMC’s Application Development services and contact us for your next project.

    The post Embedding Private Media Files Securely in a React Frontend with Amazon S3 and AWS Lambda appeared first on DMC, Inc..

    Externally Authenticated Access to S3 Objects Over the Internet https://www.dmcinfo.com/blog/16081/externally-authenticated-access-to-s3-objects-over-the-internet/ Wed, 31 Jul 2024 17:11:11 +0000 https://www.dmcinfo.com/blog/16081/externally-authenticated-access-to-s3-objects-over-the-internet/ Amazon S3 is great for storing any type of binary data or file you may need in a centralized location in the cloud. There is a dedicated URL for each object, which can be easily shared with anyone who needs to access it. However, say that your bucket is storing private/proprietary information. You wouldn’t want […]

    The post Externally Authenticated Access to S3 Objects Over the Internet appeared first on DMC, Inc..

    Amazon S3 is great for storing any type of binary data or file you may need in a centralized location in the cloud. There is a dedicated URL for each object, which can be easily shared with anyone who needs to access it.

    However, say that your bucket is storing private/proprietary information. You wouldn’t want just anybody to be able to retrieve that data with an HTTP request, would you? In this blog, we’ll explore how we can securely and efficiently access Amazon S3 objects with either direct AWS or third-party authentication/authorization.

    Bucket Policies

    Bucket policies are the first step to restricting public access to objects. They apply to entire buckets in S3, and can be set up to only allow certain AWS IAM users, user roles, or methods of access to retrieve objects within the bucket to which it’s applied.

    Here’s a straightforward bucket policy to restrict any access to an S3 bucket except for those that provide IAM credentials that match to a specific user:

    JSON
    
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::ACCOUNT_ID:user/USER_NAME"
                },
                "Action": [
                    "s3:GetObject"
                ],
                "Resource": [
                    "arn:aws:s3:::BUCKET_NAME/*"
                ]
            }
        ]
    }
    

    Now this solution works for any situation where the client fetching the S3 object is able to provide permanent IAM credentials in their request. However, this is very rarely a valid solution. What if we have an existing authentication solution that we want to use to determine access to an S3 object? Or what if we don’t want/need to edit this policy whenever a new user needs access?

    Accommodating Third-party Authentication

    To abstract the IAM authentication layer entirely, we could proxy the S3 files through some intermediate compute resource within our VPC that:

    1. can authenticate a request with a 3rd party auth solution.
    2. can access the buckets directly, with its own static set of IAM credentials.

    Below is a diagram of this dataflow. Only the solid lines are data transfers that contain the data of the S3 object being requested.

    This is, strictly speaking, an effective solution. However, S3 objects can be very large, so passing the whole object through some intermediary compute resource may incur unacceptable memory/data transfer costs. What we’d want is a way to securely access the S3 object while still being able to pull it directly from the bucket to the end client making the request, in order to take advantage of S3’s ultra-cost-efficient retrieval pricing.

    Enter – the presigned URL!

    Presigned URLs are used to temporarily authorize operations across AWS to anybody who has the URL. To generate a presigned URL for a specific action, your IAM policy must be authorized to perform that action. To mitigate the chance that the power of a presigned URL falls into the wrong hands, they are configured to only serve their purpose for a defined timeout.

    Using presigned URLs generated to provide access to individual S3 objects at request time, we can extend the secure access approach above, such that:

    1. The client sends an authenticated request using the 3rd-party auth to a compute resource (lambda works perfectly) with full read-access to our S3 bucket.
    2. The compute resource authenticates this request against 3rd party auth.
    3. The compute resource generates a pre-signed URL for the S3 object requested and returns it to the client.
    4. The client fetches the S3 object using the presigned URL before it times out.
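The four steps above can be sketched as a single client-side function, with both HTTP calls injected as parameters so the flow is explicit (the endpoint name and helper shapes here are illustrative assumptions, not part of any AWS API):

```javascript
// Sketch of the presigned-URL flow. authFetch is the 3rd-party-authenticated
// HTTP client (steps 1-3); download fetches directly from S3 (step 4).
async function fetchPrivateS3Object(objectPath, { authFetch, download }) {
  // Steps 1-3: our authenticated backend validates the caller and returns a
  // presigned URL for the requested object
  const response = await authFetch("/get-signed-url", {
    method: "POST",
    body: JSON.stringify({ objectPath }),
  });
  const { url } = await response.json();
  // Step 4: pull the object straight from S3 before the URL times out
  return download(url);
}
```

Only the final download carries the object's bytes, and it goes straight to S3, so the intermediate compute resource never proxies the payload.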

     

    Notice how the file is requested directly from S3 in this setup! With that, we have a secure and performant solution.

    To see an example implementation of this design, see my follow-up post, Embedding private media files securely in a React frontend with Amazon S3 and AWS Lambda.

    Ready to take your Application Development project to the next level? Contact us today to learn more about our solutions!

    The post Externally Authenticated Access to S3 Objects Over the Internet appeared first on DMC, Inc..

    Cloud Deployments in Minutes with Serverless Framework and AWS Lambda https://www.dmcinfo.com/blog/16084/cloud-deployments-in-minutes-with-serverless-framework-and-aws-lambda/ Wed, 31 Jul 2024 16:07:35 +0000 https://www.dmcinfo.com/blog/16084/cloud-deployments-in-minutes-with-serverless-framework-and-aws-lambda/ Let’s say that you have a RESTful API deployed to some reserved compute resources in AWS. The API’s surface is well-designed and is perfectly capable of meeting all of the needs of any clients that consume it. But then, you’re told that there is some small internal feature or action that must be implemented in your cloud infrastructure. […]

    The post Cloud Deployments in Minutes with Serverless Framework and AWS Lambda appeared first on DMC, Inc..

    Let’s say that you have a RESTful API deployed to some reserved compute resources in AWS. The API’s surface is well-designed and is perfectly capable of meeting all of the needs of any clients that consume it. But then, you’re told that there is some small internal feature or action that must be implemented in your cloud infrastructure.

    For example, say you need to set up a service that sends an email containing data from the same data sources accessed by your API when triggered by an HTTP request. The API’s codebase could very easily be extended to provide this functionality, but doing so can lead to several types of tech debt.

    • Design – The API surface should abstract the complexities of data retrieval/syncing, so adding endpoints to manage this process goes against the design pattern.
    • Security – If this is an external-facing API, you might run into serious security issues with allowing processes that consume your API to trigger such an action!
    • Cost optimization – This is probably not a concern if your code just needs to send an email, but it can be if you need to perform a more compute or memory intensive action, like a sync between two data sources. If this data sync is a very bursty workload, then running it on the same resources provisioned for nominal API usage may lead to surge pricing.

    So, it looks like you need a separate process within your cloud infrastructure that runs this operation independently of your API. This now seems like a lot more of an undertaking. The source code for this tiny task should be very quick to implement, but the overhead of provisioning, testing, deploying, and maintaining new resources dedicated to that task seems too high.

    Serverless Functions to the Rescue!

    Serverless functions like AWS Lambda (which is what we’ll focus on) are perfect for these kinds of actions. They’re quick and easy to develop and can cost almost nothing compared to reserved compute like EC2 for small tasks like sending an email or two.

    The code below is an example of an entire Lambda function implementation, minus any package dependencies. See how quickly these can be developed?

    JavaScript
    var AWS = require('aws-sdk');
    AWS.config.update({region: 'us-west-2'});
    var ses = new AWS.SES({apiVersion: '2010-12-01'});

    exports.handler = async (event) => {
        var params = { /* email params: Source, Destination, Message, etc. */ };
        try {
            var data = await ses.sendEmail(params).promise();
            return {
                statusCode: 200,
                body: JSON.stringify("Email sent! Message ID: " + data.MessageId),
            };
        } catch (error) {
            console.error(error);
            return {
                statusCode: 500,
                body: JSON.stringify("Error sending email: " + error.message),
            };
        }
    };

    But wait! What if a second data sync action needs to be added? A third? Serverless functions are great for implementing one-off features like this in a vacuum, but managing a whole flock of serverless functions with their own source code, their own VPC configurations, their own deployment processes, etc., can become an unmanageable mess very quickly.

    Serverless Framework

    The Serverless Framework package provides a suite of Infrastructure-as-Code capabilities built to allow you to deploy and configure one or more serverless functions defined within a single directory. This means that you can easily keep your functions’ implementations and configurations all tracked within one repository!

    Let’s walk through the setup of a project using the serverless framework to show how simple it is.

    Create a Node Project and Install the Serverless Framework Package

    In your project directory, with your Unix shell of choice, run the following.

    ShellScript
    npm init
    npm install serverless

    Use the Serverless Framework CLI to Pick the Project Template

    The CLI will provide a list of templates to choose from. For this post, we’re going to create a Lambda application targeting Node.js with an Amazon API Gateway integration for triggering our functions.

    Bash
    
     
    ? What do you want to make?
      AWS - Node.js - Starter
    > AWS - Node.js - HTTP API
      AWS - Node.js - Scheduled Task
      AWS - Node.js - SQS Worker
      AWS - Node.js - Express API
      AWS - Node.js - Express API with DynamoDB
      AWS - Python - Starter
      AWS - Python - HTTP API
      AWS - Python - Scheduled Task
      AWS - Python - SQS Worker
      AWS - Python - Flask API
      AWS - Python - Flask API with DynamoDB
      Other
    

    Name the project once you’ve selected the template. 

    Bash
    
     
    ? What do you want to make? AWS - Node.js - HTTP API
    ? What do you want to call this project? example-project
    

    This should generate the following files in your Node project.

    The configuration of each Lambda function and of the API gateway is specified in the generated serverless.yaml file.

    YAML
    
     
    service: example-project
    frameworkVersion: '3'
    provider:
      name: aws
      runtime: nodejs18.x
    functions:
      api:
        handler: index.handler
        events:
          - httpApi:
              path: /
              method: get
    

    This config tells the API gateway to invoke our Lambda (the handler reference index.handler matches the function code’s file name and export field) in a Node 18 runtime whenever a GET request is sent to the API gateway. The auto-generated function (in ./index.js) looks like this.

    JavaScript
    
     
    module.exports.handler = async (event) => {
      return {
        statusCode: 200,
        body: JSON.stringify(
          {
            message: "Go Serverless v3.0! Your function executed successfully!",
            input: event,
          },
          null,
          2
        ),
      };
    };
    

    Creating the Functions

    Let’s update index.js to include both of our emailing functions.

    JavaScript
    
     
    // Note: the Node 18 Lambda runtime ships with AWS SDK v3 only, so the
    // aws-sdk (v2) package used here must be bundled with your deployment.
    const AWS = require('aws-sdk');
    AWS.config.update({ region: 'us-west-2' });
    const ses = new AWS.SES({ apiVersion: '2010-12-01' });

    // Shared helper: send the email and translate the result into an HTTP response.
    const sendEmailAndReturnResponse = async (params) => {
      try {
        const data = await ses.sendEmail(params).promise();
        return {
          statusCode: 200,
          body: JSON.stringify("Email sent! Message ID: " + data.MessageId),
        };
      } catch (error) {
        console.error(error);
        return {
          statusCode: 500,
          body: JSON.stringify("Error sending email: " + error.message),
        };
      }
    };

    exports.mail1 = async (event) => {
      const params = getEmailParams1(event); // getEmailParams1 is a function that you'll define for your own system
      return sendEmailAndReturnResponse(params);
    };

    exports.mail2 = async (event) => {
      const params = getEmailParams2(event); // getEmailParams2 is a function that you'll define for your own system
      return sendEmailAndReturnResponse(params);
    };
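The getEmailParams helpers are left for you to define. As a sketch only, a hypothetical getEmailParams1 might build the SES sendEmail parameter object like this (the addresses below are placeholders and would need to be SES-verified identities in your account):

```javascript
// Hypothetical helper: builds an SES sendEmail params object from the
// incoming API Gateway event. The addresses below are placeholders.
const getEmailParams1 = (event) => {
  const body = JSON.parse(event.body || "{}");
  return {
    Source: "sender@example.com", // must be an SES-verified identity
    Destination: { ToAddresses: ["recipient@example.com"] },
    Message: {
      Subject: { Data: body.subject || "Hello from Lambda" },
      Body: { Text: { Data: body.message || "" } },
    },
  };
};
```
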
    

    Configuring the Functions

    Update the serverless.yaml file to map each function to POST requests at its own endpoint, so the API gateway routes requests to the correct function.

    YAML
    
     
    service: example-project
    frameworkVersion: '3'
    provider:
      name: aws
      runtime: nodejs18.x
    functions:
      mail1:
        handler: index.mail1
        events:
          - httpApi:
              path: /mail1
              method: post
      mail2:
        handler: index.mail2
        events:
          - httpApi:
              path: /mail2
              method: post
    

    Enter your AWS account credentials, and both of your functions will be deployed to a new Lambda application for you!
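The deployment itself is a single command, assuming the Serverless Framework CLI is installed and configured:

```shell
# Deploy (or re-deploy) every function and the API gateway defined in serverless.yaml
serverless deploy
```

The deploy output lists the generated endpoint URL for each function.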

    Update Your Functions

    Updates to serverless.yaml or the function source code can be applied by simply re-running the deploy command. Serverless Framework tracks the configuration of your functions with AWS CloudFormation, so it knows exactly what to update when changes are pushed.

    Learn more about DMC’s application development expertise and contact us for your next project. 

    The post Cloud Deployments in Minutes with Serverless Framework and AWS Lambda appeared first on DMC, Inc..

    ]]>
    C# New vs. Override Keyword https://www.dmcinfo.com/blog/17225/c-new-vs-override-keyword/ Tue, 12 Sep 2023 11:21:12 +0000 https://www.dmcinfo.com/blog/17225/c-new-vs-override-keyword/ One common challenge you may come across when working with derived classes is deciding which of the two ways you should override a function in C#: using either override or new keywords. One thing to remember is that it is necessary to declare a function `virtual` if you want to use the `override` keyword, otherwise, the override […]

    The post C# New vs. Override Keyword appeared first on DMC, Inc..

    ]]>
    One common challenge you may come across when working with derived classes is deciding which of the two ways you should override a function in C#: using either override or new keywords.

    One thing to remember is that a method must be declared `virtual` in the base class if you want to use the `override` keyword; otherwise, the override will not work, and you will get an error. The `new` keyword, however, does not require any changes to the base class. For these reasons, knowing how to use both the `override` and `new` keywords is critical when working with derived classes.

    General Setup

    We will start with some basics and create two classes: a base class and a derived class that inherits from it. Each class will also define its own method.

    Base class:

    General Setup Base Class

    Derived class:

    General Setup Derived Class

    To fully demonstrate the capabilities of inheritance, we will instantiate each class as a separate variable and also create a variable of type BaseClass that is assigned a DerivedClass instance (an implicit cast to the base type).

    Demo:

    General Setup Demo

    As you can see, the derived class contains both Method1 and Method2 since it has inherited all the methods from the base class.
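Since the code here appears only in screenshots, a minimal sketch of the setup follows (class and method names are from the post; the printed messages are assumptions):

```csharp
using System;

public class BaseClass
{
    public void Method1() => Console.WriteLine("Method1 from BaseClass");
}

public class DerivedClass : BaseClass
{
    public void Method2() => Console.WriteLine("Method2 from DerivedClass");
}

class Program
{
    static void Main()
    {
        var baseClass = new BaseClass();
        var derivedClass = new DerivedClass();
        BaseClass castedClass = new DerivedClass(); // implicit cast to the base type

        baseClass.Method1();
        derivedClass.Method1(); // inherited from BaseClass
        derivedClass.Method2(); // defined on DerivedClass
    }
}
```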

    Now, we will go a step further and have both classes implement a conflicting method: a method with the same signature in each class, but with a different implementation.

    Base class:

    General Setup Base Class Method 1 Method 2

    Derived class:

    General Setup Derived Class Method 2

    Demo:

    General Setup Demo Method 1 Method 2

    As you can see in the demonstration, the code compiles and executes without error. However, this is not good practice, because you are letting the compiler choose the kind of override for you, and it tells you so through a warning message. By default, the derived method hides the method in the base class.

    Warning message:

    Warning Message
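In code form, the conflicting method looks roughly like this (the printed messages are assumptions); the compiler reports warning CS0108 on the derived Method2:

```csharp
using System;

public class BaseClass
{
    public void Method1() => Console.WriteLine("Method1 from BaseClass");
    public void Method2() => Console.WriteLine("Method2 from BaseClass");
}

public class DerivedClass : BaseClass
{
    // Warning CS0108: 'DerivedClass.Method2()' hides inherited member
    // 'BaseClass.Method2()'. Use the new keyword if hiding was intended.
    public void Method2() => Console.WriteLine("Method2 from DerivedClass");
}
```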

    New

    The `new` keyword allows you to hide a base class method, which suppresses the warning message seen in the previous example. Here is a simple way to use the `new` keyword in your code (keep in mind that the keyword can appear before or after the access modifier, in this case public):

    New before public:

    New Before Public

    New after public:

    New After Public

    Demo:

    New Demo

    As you can see, no warning message is present, and when you call Method2 through a variable that was implicitly cast to BaseClass from DerivedClass, you get the method implementation from BaseClass. In other words, a method declared with `new` hides the base implementation only while the object is accessed as a DerivedClass; casting it back to the base class makes the original implementation resurface.
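A sketch of the hiding behavior (the printed messages are assumptions):

```csharp
using System;

public class BaseClass
{
    public void Method2() => Console.WriteLine("Method2 from BaseClass");
}

public class DerivedClass : BaseClass
{
    // 'new' may also appear before the access modifier: new public void Method2()
    public new void Method2() => Console.WriteLine("Method2 from DerivedClass");
}

class Program
{
    static void Main()
    {
        DerivedClass derived = new DerivedClass();
        BaseClass castedClass = derived;

        derived.Method2();     // prints "Method2 from DerivedClass"
        castedClass.Method2(); // prints "Method2 from BaseClass" -- the hidden base method resurfaces
    }
}
```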

    Override

    The `override` keyword, on the other hand, allows you to extend a virtual method in the base class. Here is an example of how to use the `override` keyword in your code (virtual and override can similarly be placed before or after the access modifier):

    Virtual before public:

    Virtual Before Public

    Virtual after public:

    Virtual After Public

    Override before public:

    Override Before Public

    Override after public:

    Override After Public

    Demo:

    Override Demo

    Now, when you call Method1 through the variable that was implicitly cast to BaseClass from DerivedClass, the updated function implementation remains.
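A sketch of the extending behavior with virtual/override (the printed messages are assumptions):

```csharp
using System;

public class BaseClass
{
    public virtual void Method1() => Console.WriteLine("Method1 from BaseClass");
}

public class DerivedClass : BaseClass
{
    public override void Method1() => Console.WriteLine("Method1 from DerivedClass");
}

class Program
{
    static void Main()
    {
        BaseClass castedClass = new DerivedClass();
        castedClass.Method1(); // prints "Method1 from DerivedClass" even through a BaseClass reference
    }
}
```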

    Note that you must declare the original method as virtual; otherwise, you will see an error message preventing the code from compiling:

    Error Message

    Example With Context

    The difference between hiding and extending a method is easier to see in context. The following example completes the demonstration.

    Setup:

    Hiding vs Extending Setup

    Examples:

    Hiding vs Extending Examples

    Note that in the first example, building2 displays the base class message because ShowDefinition only has access to the base class's ShowDescription method.
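A hedged reconstruction of that context example (the Building, House, and Skyscraper names and the messages are assumptions; ShowDescription, ShowDefinition, and building2 follow the post):

```csharp
using System;

public class Building
{
    public virtual void ShowDescription() => Console.WriteLine("This is a building.");
}

public class House : Building
{
    public new void ShowDescription() => Console.WriteLine("This is a house."); // hiding
}

public class Skyscraper : Building
{
    public override void ShowDescription() => Console.WriteLine("This is a skyscraper."); // extending
}

class Program
{
    // ShowDefinition only knows about the Building type
    static void ShowDefinition(Building building) => building.ShowDescription();

    static void Main()
    {
        Building building2 = new House();
        Building building3 = new Skyscraper();

        ShowDefinition(building2); // prints "This is a building."   (hidden method: base version runs)
        ShowDefinition(building3); // prints "This is a skyscraper." (override: derived version runs)
    }
}
```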

    Learn more about DMC's C# .NET Application Development and contact us today for your next project.

    The post C# New vs. Override Keyword appeared first on DMC, Inc..

    ]]>