PC Application Development Archives | DMC, Inc.
https://www.dmcinfo.com/blog/category/application-development/pc-application-development/

Electrifying Desktop Application Development with Electron
Mon, 18 Nov 2024

The post Electrifying Desktop Application Development with Electron appeared first on DMC, Inc..

So, what is Electron?

You’ve already used an Electron application; you just might not have known it. Electron applications provide the basis for some of the apps you use daily—Microsoft Teams, Slack, and Visual Studio Code.

Electron is a modern, open-source framework for building desktop applications. Since it runs on the Node.js runtime, Electron makes it possible to harness JavaScript—typically a web technology—for the development of desktop applications. That’s a powerful way to take our expertise in web technology and apply it to desktop development.

Let’s talk about why DMC loves building with Electron!

Why DMC develops with Electron

Electron is cross-platform. Have a use case that calls for compatibility with Windows, macOS, and Linux? Your Electron application will run on all three from a single code base. One focused engineering team can ship one application, minimizing time to market while maximizing reach across platforms, and bug fixes, updates, and new features need only be implemented once.

Built on the Node.js runtime. Equally important is the runtime itself. Electron runs on Node.js, a JavaScript runtime environment, taking advantage of JavaScript's position as the web's preeminent technology. Not only does this align with DMC's deep expertise in building JavaScript-based applications, but it also means your own developers can stay involved in the technical process throughout an application's development and support, including after handoff. That common technical understanding keeps all parties speaking the same language for the lifetime of the application.

Prototyping and proofs of concept. An often-understated strength of Electron is its handiness for prototyping. Because Electron applications can be spun up quickly, and because you have access to the vast ecosystem of established JavaScript libraries and frameworks, Electron is ideally suited to building a viable proof of concept that gets a project off to a promising start.

Trying out new ideas with Electron Fiddle. Sometimes, all you need is a playground to test out that idea that’s been bouncing around in your head. Electron Fiddle is Electron’s take on JSFiddle. It’s a lightweight desktop studio for quickly running a simple Electron project. Maybe you want to experiment with a new feature for your application. Or maybe you want to download someone else’s demo and click around on your own. Fiddle allows you to run an Electron application without the overhead of initializing a completely new project. I love this capability and use it both for exploring new features and testing out small tweaks to colleagues’ code. You can read up on Electron Fiddle here.

Packaging and Distributing with Electron Forge. Electron Forge is the all-in-one tool for initiating, configuring, pipelining, and distributing an Electron application. At the start of your project, you’ll use the Electron Forge CLI to spin up a template application. Later, when it’s time to distribute, Electron Forge facilitates three core steps in the distribution sequence:

  1. The packaging of the application to a bundled executable
  2. The making of the bundled executable into a distributable (such as a .zip or an .exe)
  3. The publishing of the distributable so that your application is available for users to download

Users love it

Developing for the web means recognizing the dominant design for the everyday user’s experience. Developers and non-developers alike have come to expect a certain standard: a modern browser serving webpages on a modern JavaScript framework—think React with Material UI or Angular with Angular UI. Why shouldn’t this expectation carry over to the desktop application experience? For consistency and seamlessness, it makes sense to craft desktop applications that look and feel like their web counterparts.

An Electron application fluently brings the desktop application must-haves—offline capabilities, local system access, process launching, hardware discovery—and merges them with the familiar user interface elements of the web.

When it comes to picking JavaScript frameworks and libraries, Electron lets you choose your own tools. You can keep things simple and lean on basic HTML and CSS. Or you can scaffold up a full-fledged React application via Next.js.

Case Study: A desktop application for high-throughput data analysis and visualization

The problem: Our client, a global aerospace and defense technology company, sought to modernize their in-house data analysis and visualization workflow. The existing workflow relied on several decoupled applications for processing and presenting data, and there was no official means of warehousing it. Their team needed a successor platform that provided stronger scalability and faster workflow throughput while retaining the data reporting functionality of the legacy system. In modernizing the legacy platform, they laid out a set of key requirements:

  • The solution must provide for the storage of captured data.
  • The solution must ingest large datasets and visualize them clearly, and it must do so performantly.
    • Presentation of data must support advanced visualization controls such as time-shifting, zooming, and time-trending.
    • Plotting must be capable of processing up to 1 million data points.
  • The solution must support the validation of data.
  • The solution’s data processing must be offline-capable. If data is available locally, no network connection shall be required for the core functionality of the application.

The solution: DMC identified a path forward to replace the legacy workflow platform with a single cohesive solution. The bedrock of the solution is a data analysis desktop application built on Electron. Here’s why we landed on Electron for building out this application:

  • Electron gives our development team access to a range of powerful JavaScript packages for plotting, UI, and navigation. These are the packages that supply the familiar, expected experience of the web, and Electron brings them to the desktop environment.
  • Electron’s access to the native OS empowers us to launch a local Python service upon Electron startup. This allows us to easily take advantage of Python’s proven libraries for data crunching, validation, and report generation.

To round out the solution, we then spun up cloud-based data storage hosted on Amazon’s S3 service. As a whole, DMC delivered an application to meet the client's need for a comprehensive data intelligence and reporting platform.

Read more about DMC’s desktop application and web application development offerings or contact us today for your next project.

Custom Image Provider Implementation in PySide
https://www.dmcinfo.com/blog/17084/custom-image-provider-implementation-in-pyside/
Thu, 02 Nov 2023

The post Custom Image Provider Implementation in PySide appeared first on DMC, Inc..

Introduction

In application development, projects require various depths of involvement. Some projects may need you to interconnect a bunch of trendy frameworks and open-source libraries, while other projects will require full-scale development in a lesser-known framework that was not even designed for the task at hand. One framework that is particularly interesting to work with, yet occasionally lacks proper documentation, is PySide.

I’ve had an opportunity to explore some image rendering capabilities of the framework and would like to share some tips and best practice standards for your custom application.

Image Provider

Before we dive deeper into the topic, I want to note that I am using PySide6, and the image provider lives under PySide6.QtQuick.

Qt provides a decent, thorough documentation page on QQuickImageProvider that is worth examining to gain a better understanding of the concept.

The primary purpose of an image provider is to let the application render images from sources other than standard image files, such as in-memory data and dynamically generated images.

If you simply need to render a pre-existing image in a common format, then I highly recommend looking into the native QML Image component capabilities.

Setup

Disclosure: code casing might be inconsistent with what you are used to in Python, but I tried incorporating both QML and Python standards where applicable.

  • camelCase – function names (QML/C++ – inherited function override + consistency)
  • PascalCase – objects (QML/C++ – inherited base class + consistency)
  • snake_case – variables (Python – standard)

To begin a custom image provider implementation, you have to define an image provider class that inherits PySide6.QtQuick.QQuickImageProvider.

Custom Image Provider Definition

import numpy as np
from PySide6.QtQuick import QQuickImageProvider


class CustomImageProvider(QQuickImageProvider):
    def __init__(self, image_provider_id: str):
        super().__init__(QQuickImageProvider.ImageType.Image)

        """Image provider metadata."""
        self.provider_id = image_provider_id

        """Image provider data."""
        self._images: dict[str, np.ndarray] = dict()

        """Suggested utility objects."""
        # self.SharedConstants = SharedConstants()
        # self._imageConstructor = ImageConstructor()
        # self._idConstructor = IdConstructor()

Notice that I created a public property provider_id. You do not technically need it, but I highly recommend introducing one, especially if you are anticipating multiple image provider instances. For example, I worked on an app that required multiple tabs to be open in parallel, each with access to an image provider. Since each tab needed its own library of custom images, with image IDs not necessarily globally unique, I had to instantiate a unique image provider per tab to avoid data conflicts. The provider_id really helps identify which image provider to use and, more importantly, which image provider can be cleared out for garbage collection purposes as the tab closes.

You also need to create a data structure to hold your images. A dictionary offers convenient id-to-data mapping: QML always requests an image by a string id, and mapping hashable string ids to image data is a perfect fit for a dictionary. To avoid any unexpected behavior, I recommend instantiating the data structure as an internal property, which is why I named it simply _images.

In the comments I am also suggesting the usage of the following objects:

  • SharedConstants – implement this to store constants that are shared across the application. If a hardcoded value is used in both QML and Python, instantiate it as a shared constant that both sides reference. Otherwise, any change to that value becomes a living nightmare of chasing down every instance of it in the code. Remember, you cannot easily debug QML, if at all.
  • ImageConstructor – use this class to define functions that could be used to construct pixel data that would be stored in the _images data structure.
  • IdConstructor – use this class to define functions that could be used to construct ids that would be used to store data in the _images data structure.
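
As a rough illustration, the utility objects above might be sketched like this. All names and values here are hypothetical, not part of the original implementation; the sketch only shows the division of labor between the three helpers:

```python
import numpy as np


class SharedConstants:
    """Constants shared between QML and Python (hypothetical values)."""

    IMAGE_PROVIDER_SCHEME = "image://"
    FULL_OPACITY = 255


class ImageConstructor:
    """Builds RGBA8888 pixel arrays suitable for the _images store."""

    @staticmethod
    def solidColor(width: int, height: int, rgba: tuple) -> np.ndarray:
        # numpy arrays are indexed (row, column), i.e. (height, width).
        pixels = np.empty((height, width, 4), dtype=np.uint8)
        pixels[:, :] = rgba
        return pixels


class IdConstructor:
    """Builds image ids that are unique within one provider instance."""

    @staticmethod
    def layerId(tab_id: str, layer_name: str) -> str:
        return f"{tab_id}/{layer_name}"
```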

Method Override

QQuickImageProvider has requestPixmap, requestImage and requestTexture that you can override to implement your custom functionality. Each method’s signature is similar to the others, so I will focus on the requestImage method as I have worked with it the most.

requestImage Override

    def requestImage(self, image_id: str, size: QSize, requested_size: QSize) -> QImage:
        if image_id in self._images and self._images[image_id] is not None:
            """Retrieve the image data from the image library."""
            _pixels = self._images[image_id]

            """According to the documentation:
            In all cases, size must be set to the original size of the image.
            This is used to set the width and height of the relevant Image if
            these values have not been set explicitly. Note that size is passed
            by reference, so it must be mutated in place; rebinding the local
            name would have no effect."""
            image_size = QSize(_pixels.shape[1], _pixels.shape[0])
            if size is not None:
                size.setWidth(image_size.width())
                size.setHeight(image_size.height())

            """Construct the size of the returned image."""
            width = (
                requested_size.width()
                if requested_size.width() > 0
                else image_size.width()
            )
            height = (
                requested_size.height()
                if requested_size.height() > 0
                else image_size.height()
            )

            """Construct the image at its original size (the buffer holds
            image_size pixels), then scale it if a different size was
            requested."""
            img = QImage(
                _pixels.data,
                image_size.width(),
                image_size.height(),
                QImage.Format_RGBA8888,
            )
            if QSize(width, height) != image_size:
                img = img.scaled(width, height)

            return img
        else:
            raise ValueError(
                self.provider_id
                + " image provider was unable to find image "
                + image_id
            )

Each of the three methods requires a signature that includes image_id, size and requested_size. It is okay to rename those input variables, but the typing must remain the same. Nevertheless, I do not recommend changing the names.

  • image_id – the string ID of the image by which you will be looking up the pixel data in the defined _images data structure.
  • size – is not technically used for anything else in the Python implementation of the image provider. According to the official documentation: “In all cases, size must be set to the original size of the image. This is used to set the width and height of the relevant Image if these values have not been set explicitly.” This is a remnant of the C++ framework. The variable is passed into this method by reference and must be updated in place (for example, via setWidth and setHeight); every other input is passed by value.
  • requested_size – the size of the Image component in QML that requested the image from the custom image provider.

The return value of the overridden method must be a properly constructed QImage that QML will be able to display in the application. Notice that the first input for the QImage constructor is a buffer object pointing to the start of the array’s data. This is another C++ memory management quirk that Python has to deal with. The format also plays a big role in image rendering. QImage.Format_RGBA8888 decodes the given data array as though each pixel consists of four 8-bit unsigned integers for the red, green, blue, and alpha channels. This is crucial: if you provide the wrong array type, the displayed image will either be incorrect or won’t show up at all. QML will not throw an error, so you might spend a lot of time trying to figure out why your image is not rendering. A friend of mine told me this, I am certainly not speaking from experience…
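
To make the format requirement concrete, here is a small numpy-only sketch of the byte layout Format_RGBA8888 expects to find in the buffer (the variable names are illustrative):

```python
import numpy as np

# A 2x2 image: each pixel is (R, G, B, A), one unsigned byte per channel,
# which is exactly the layout QImage.Format_RGBA8888 decodes.
pixels = np.zeros((2, 2, 4), dtype=np.uint8)
pixels[0, 0] = (255, 0, 0, 255)  # opaque red in the top-left corner

# The buffer handed to QImage is the flat byte sequence of the array:
assert pixels.tobytes()[:4] == bytes([255, 0, 0, 255])

# The same values in a float64 array occupy 8 bytes per channel; handing
# that buffer to QImage with Format_RGBA8888 silently renders garbage.
wrong_dtype = pixels.astype(np.float64)
assert wrong_dtype.nbytes == 8 * pixels.nbytes
```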

The sizing of the image is another topic for discussion. Depending on your implementation, you might want to keep the original size of the image in any rendering case, or, on the contrary, you might always want to resize the image to the window size. Sometimes, you might need to implement the sizing so that it is dynamic and dependent on the window state/size. Basically, what I am saying is that the sizing implementation in the code above is subject to change depending on your application needs; however, you must assign the current image original size to the passed by reference size variable.
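
The fallback logic in the override above can be distilled into a small pure-Python helper (the function name is hypothetical, shown only to make the rule explicit): any requested dimension that is unset, which QML signals with a negative value, falls back to the original dimension.

```python
def resolveDimensions(original, requested):
    """Return (width, height): each requested dimension wins when it is
    positive; otherwise fall back to the original dimension."""
    orig_w, orig_h = original
    req_w, req_h = requested
    width = req_w if req_w > 0 else orig_w
    height = req_h if req_h > 0 else orig_h
    return width, height


# No size requested: keep the original 3x3.
assert resolveDimensions((3, 3), (-1, -1)) == (3, 3)
# An explicit request overrides both dimensions.
assert resolveDimensions((3, 3), (450, 300)) == (450, 300)
```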

Data Management

Now that we’ve implemented the basic image provider functionality, we have to develop data management capabilities. Image data must be somehow stored in the image provider, and, since the data structure _images was defined internally, we have to define functions that will allow insertion of the new data into the dictionary as well as its removal.

It is also important to implement data validation code that could be broken down into helper functions. This is where you make sure that the data provided for addition could be displayed using the defined QImage format.
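
A minimal sketch of such a validation helper might look like the following (the function name is hypothetical; it simply enforces the shape and dtype that Format_RGBA8888 can decode):

```python
import numpy as np


def validateRGBA8888(pixel_data: np.ndarray) -> None:
    """Raise if the array cannot be decoded as RGBA8888 pixel data."""
    if not isinstance(pixel_data, np.ndarray):
        raise TypeError("pixel data must be a numpy array")
    if pixel_data.dtype != np.uint8:
        raise TypeError(f"expected uint8 channel data, got {pixel_data.dtype}")
    if pixel_data.ndim != 3 or pixel_data.shape[2] != 4:
        raise ValueError("expected an array of shape (height, width, 4)")
```

A helper like this would be called at the top of the data-insertion function, so a malformed array fails loudly at insertion time instead of rendering as a blank image later.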

You can also include some of your data manipulation and ID construction code. Basically, this is the perfect opportunity to get the best use out of your predefined ImageConstructor and IdConstructor objects. You still have to make sure that the data on the output is suitable for the defined QImage format.

Main Image Provider Management Function

    def addOrUpdateLayer(self, layer_id: str, pixel_data: np.ndarray) -> dict:
        """Data validation and manipulations to prep for the render-ready format."""
        # Insert your data validation code
        # Insert your data manipulation code

        """Map the new data to the desired id and save to the library."""
        layer = {layer_id: pixel_data}
        self._images.update(layer)

        return layer

The following implementation is an example of how you can develop the data removal functionality.

Supplemental Management Function

    def removeLayer(self, layer_id: str) -> None:
        """Apply key-value pair deletion logic."""
        del self._images[layer_id]

Helper Functionality

I highly recommend keeping your custom image provider as lightweight as possible for scaling and maintainability purposes; however, you may still want to implement some basic helper functions like:

  • isIDTaken – to check whether a given ID already exists in the image provider, to avoid overwriting stored data.
  • isImageLoaded – to check whether the data is present in the data structure before querying it.
  • clearImageProviderInstance – to manage the memory that the image provider occupies.

Example implementation for each is presented below:

Supplemental Image Provider Functionality

    def isIDTaken(self, image_id: str) -> bool:
        return image_id in self._images

    def isImageLoaded(self, image_id: str) -> bool:
        return self._images.get(image_id) is not None

    def clearImageProviderInstance(self):
        """Assumes the provider holds a reference to the QML engine, as in
        the alternative setup shown later in this post."""
        self._qml_application_engine.removeImageProvider(self.provider_id)

In Action

The following code snippets do not require much explanation but are a good starting point for learning how to use the custom image provider in the scope of any application. Note that we must add the image provider to the application engine.

Since we defined the image provider class with a unique ID property, you must provide one for each image provider you insert into the application. Keep in mind that the QML engine requires a unique ID when registering a provider anyway, so you should simply store that ID in the image provider itself.

Image Provider Usage in an Example

import numpy as np
from PySide6.QtQml import QQmlApplicationEngine
from PySide6.QtWidgets import QApplication

# CustomImageProvider is the class defined earlier in this post.

if __name__ == "__main__":
    """Set up the application."""
    app = QApplication([])
    engine = QQmlApplicationEngine()

    """Instantiate an image provider."""
    unique_id = "unique_image_provider_id"
    image_provider = CustomImageProvider(unique_id)
    engine.addImageProvider(unique_id, image_provider)

    """Add an image to the image provider."""
    image_provider.addOrUpdateLayer(
        "unique_image_id",
        np.array(
            [
                [
                    [255, 0, 0, 255],
                    [255, 0, 0, 255],
                    [255, 0, 0, 255],
                ],
                [
                    [0, 255, 0, 255],
                    [0, 255, 0, 255],
                    [0, 255, 0, 255],
                ],
                [
                    [0, 0, 255, 255],
                    [0, 0, 255, 255],
                    [0, 0, 255, 255],
                ],
            ],
            dtype=np.uint8,
        ),
    )

    """Load and run the app."""
    engine.load("main.qml")
    app.exec()

According to the code above, the example image is 3 pixels tall and 3 pixels wide: a square of three horizontal stripes (red, green, and blue), each at full opacity via the fourth (alpha) channel.
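
As a side note, the same striped test image can be built more compactly with numpy row assignment instead of spelling out every pixel (a stylistic alternative, not what the post’s code does):

```python
import numpy as np

# Three rows (red, green, blue), three columns each, fully opaque.
stripes = np.zeros((3, 3, 4), dtype=np.uint8)
stripes[0] = (255, 0, 0, 255)  # top row: red
stripes[1] = (0, 255, 0, 255)  # middle row: green
stripes[2] = (0, 0, 255, 255)  # bottom row: blue

assert stripes.shape == (3, 3, 4)
assert tuple(stripes[2, 0]) == (0, 0, 255, 255)
```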

Example QML Code

import QtQuick 2.15
import QtQuick.Controls 2.15

ApplicationWindow {
    visible: true
    width: 600
    height: 600
    title: "Custom Image App"

    Image {
        anchors.fill: parent
        // Set the source to the custom image provider.
        // Include the image id if you would like to show a particular image.
        source: "image://unique_image_provider_id/unique_image_id"
    }
}

I am choosing to keep the QML code fairly simple and straightforward. This code snippet does not necessarily follow any coding standards, but rather serves as a quick and dirty playground to show off some custom image provider capabilities.

Produced Result

Image 8. produced result

The produced result is just as we expected: a stretched-out 3×3 pixel image! The Image component anchors to its parent, so it is painted at the ApplicationWindow’s size; since no sourceSize is set, the provider receives an invalid requested_size, returns the image at its original 3×3 size, and the Image then scales it up to fill the window.

We could also set the Image’s dimensions manually in QML. This way, the size of the painted image will not change dynamically with the ApplicationWindow. (To change the size actually requested from the provider, set the Image’s sourceSize property, which is what arrives as requested_size in the override.)

Example of Setting the Image Size Manually

import QtQuick 2.15
import QtQuick.Controls 2.15

ApplicationWindow {
    visible: true
    width: 600
    height: 600
    title: "Custom Image App"

    Image {
        width: 450
        height: 300
        // Set the source to the custom image provider.
        // Include the image id if you would like to show a particular image.
        source: "image://unique_image_provider_id/unique_image_id"
    }
}

Notice the difference in the newly rendered result:

Result of Setting the Image Size Manually

Image 10. result of setting the image size manually

Alternative Setup

A less recommended, but valid nonetheless, implementation is to let the custom image provider insert itself into the application upon instantiation.

Alternative Custom Image Provider Definition

class CustomImageProvider(QQuickImageProvider):
    def __init__(
        self, image_provider_id: str, qml_application_engine: QQmlApplicationEngine
    ):
        super().__init__(QQuickImageProvider.ImageType.Image)
        """Image provider metadata."""
        self.provider_id = image_provider_id

        """Handle image provider self insertion into the application."""
        self._qml_application_engine = qml_application_engine
        self._qml_application_engine.addImageProvider(self.provider_id, self)

        """Image provider data."""
        self._images: dict[str, np.ndarray] = dict()

        """Suggested utility objects."""
        # self.SharedConstants = SharedConstants()
        # self._imageConstructor = ImageConstructor()
        # self._idConstructor = IdConstructor()

The following is an example of such an image provider in action.

Example Usage of the Alternative Custom Image Provider

if __name__ == "__main__":
    """Set up the application."""
    app = QApplication([])
    engine = QQmlApplicationEngine()

    """Instantiate an image provider."""
    unique_id = "unique_image_provider_id"
    image_provider = CustomImageProvider(unique_id, engine)

    """Add an image to the image provider."""
    image_provider.addOrUpdateLayer(
        "unique_image_id",
        np.array(
            [
                [
                    [255, 0, 0, 255],
                    [255, 0, 0, 255],
                    [255, 0, 0, 255],
                ],
                [
                    [0, 255, 0, 255],
                    [0, 255, 0, 255],
                    [0, 255, 0, 255],
                ],
                [
                    [0, 0, 255, 255],
                    [0, 0, 255, 255],
                    [0, 0, 255, 255],
                ],
            ],
            dtype=np.uint8,
        ),
    )

    """Load and run the app."""
    engine.load("main.qml")
    app.exec()

Notes and Tips

  1. Keep your image provider lightweight. If you think some piece of functionality belongs in the image provider, chances are it does not; put it somewhere else. The image provider should really be treated as a data structure whose only job is to store and remove images.
  2. Make your image provider usable everywhere in your application. Avoid putting component- or window-specific functionality in it.
  3. This implementation will also likely work in PySide2, since that is where I originally developed it.

Learn more about Resizing UIs with QML Layouts and contact us today for your next project.

Using a QAbstractListModel in QML
https://www.dmcinfo.com/blog/17671/using-a-qabstractlistmodel-in-qml/
Mon, 27 Mar 2023

The post Using a QAbstractListModel in QML appeared first on DMC, Inc..

The QAbstractListModel class provided by Qt can be used to organize data that will be presented visually as a list or table. Standardizing the interface with an abstract class like QAbstractListModel makes it easy to keep your model data completely isolated from your view (a software design principle known as “separation of concerns”). That abstraction makes it a powerful and flexible tool, but it also makes the learning curve steep.

The goal of this post is to provide concrete examples, explanations, and definitions of terms so you can more easily make use of the QAbstractListModel class. For your reference, you can see the complete example code on GitHub.

Example GUI

Let’s say we’ve got a list of devices with which our software interacts. The data we’ve got for each device is:

  • A human-readable name (a string)
  • A serial number (an integer)
  • Whether or not the device is currently connected (a Boolean)

Our example GUI will look like this:

Example GUI

Part of the appeal of Qt is that you can make extremely slick UIs. We will not be doing that here in order to keep the focus on listmodel concepts. I’ve resisted the urge to add eye candy for the sake of clarity, and I have crafted the example to make it clear how you could stylize the list if you wanted to.

Additionally, the example uses Qt’s Python bindings (PySide6). Everything here is equally applicable to C++, but again, for the sake of simplicity, it is presented as a Python application. The QML is identical in both cases.

The QML Description

First, we’ll describe the visualization of our list of devices in QML. The QML ListView class is a great start. We’ll set three properties:

  • model: this is what we’ll use to bind the QML ListView to a QAbstractListModel class defined in C++ or Python

  • delegate: this is used to define how each item in the list is rendered as a QML object
  • highlight: this is not necessary to use a ListView, but it is generally useful to visualize a selected item in the list

Basic Example

A rough first draft of the QML might look like this (for the final version, see here):

ListView {
    id: deviceList

    model: controller.listmodel
    delegate: Item {
        width: deviceList.width

        Text {
            text: "Placeholder"
        }

        MouseArea {
            anchors.fill: parent
        }
    }  // Item delegate

    highlight: Rectangle { color: "lightBlue" }
}  // ListView

Using the Model and the Delegate

In my main() function, I set a context property called controller that refers to an instance of my Controller class:

Python
qml_app_engine = QQmlApplicationEngine()
qml_context = qml_app_engine.rootContext()
controller = Controller(parent=app)
qml_context.setContextProperty("controller", controller)

The Controller class exposes a Qt property called listmodel. Note that that property is declared as a QObject in my Python code, and that it does not need a property change signal (i.e., I use constant=True). In the QML above, I bind the ListView‘s model property to the listmodel property of my controller object:

model: controller.listmodel

The delegate property of ListView is like a template that defines how each item in the list is rendered as a QML object. For the sake of demonstration, we’ll keep it simple here. I made it a QML Item that is as wide as the ListView itself and contains a Text object and a MouseArea, but you can make it anything you like (you’ll generally make it much fancier)! For example, you might instead have something like a RowLayout containing a Checkbox, an Image, and a Text. (Haven’t used layouts yet? Start here!) However you want each item in your list to be visualized, you can define it in your delegate. For simplicity, I often start by just rendering it all in a Text item. We’ll look at how to access each item of data (name, serial number, and connection status) in the next section.

Note also that the MouseArea in my delegate is used to select an item in the list. Each item in the list is instantiated as a delegate object, so each item in the list has a MouseArea that can handle click events. We’ll look at this in more detail later also.

The QAbstractListModel Class

If you are managing a large quantity of data and you want to visualize it on your QML GUI, you have a few options. For simple cases, a Repeater can usually get the job done just fine and is conceptually very easy to grasp. However, for very large lists, a Repeater is not recommended because it instantiates all visual items at once. In cases where you have a lot of data, you often only want to view or update a small section of it. For these cases, QML provides the ListView object, which expects to be bound to a QAbstractListModel object in your C++ or Python application.

QAbstractListModel is an abstract class that cannot be instantiated itself, so you need to create a new class that inherits from it and is specialized for your needs. I would first suggest reading the section of its documentation titled “Subclassing,” which states that:

When subclassing QAbstractListModel, you must provide implementations of the rowCount() and data() functions. Well behaved models also provide a headerData() implementation.

If your model is used within QML and requires roles other than the default ones provided by the roleNames() function, you must override it.

For editable list models, you must also provide an implementation of setData() and implement the flags() function so that it returns a value containing Qt::ItemIsEditable.

It’s unlikely that those few sentences made it immediately obvious what you need to do. Let’s start with what confused me most when I got started: the concept of a “role.”

Roles

Think about the data we’re presenting. We have a list of devices, and each device in the list has three pieces of data (name, serial number, and connection status). You might think of each device’s data as a row in a table:

Serial Number | Human-readable name | Connected?
--------------|---------------------|-----------
123           | name1               | No
456           | name2               | Yes
789           | name3               | No

The simplest analogy is that the “role” is the piece of data that goes in each column. You might also think of it as identifying each piece of data in each object in the list. So, we will define our “roles” as name, serial, and connected.

Notice also that Qt provides a built-in ItemDataRole enum. I initially found this very confusing, because it provides roles with names like Qt::DisplayRole and Qt::EditRole, which don’t really sound like individual data items to me. The built-in roles are intended for use with built-in classes like QString and QIcon, and they don’t necessarily make sense for this particular custom class, so don’t let it throw you off. Consider, though, that you might have roles (items of data in your class) that aren’t pieces of data that you’d want to render as text but are instead pieces of data that determine how the display of that data behaves (like a background color or an icon).

Roles that you want to define yourself for your own custom class can use enum values starting with Qt::UserRole, which has value 0x0100 = 256.

An Aside on Tables

It’s worth mentioning that there is indeed a QAbstractTableModel class as well. As shown above, we can use the role as the “second dimension” of our one-dimensional list, making it look like a table. So, when would you use QAbstractTableModel? You might use it when you have a 2D array of objects, where each object has a set of properties that you identify as “roles.”

What makes the most sense as a data model will depend on your specific data, and it may be confusing to think about a list in terms of “rows” if your data doesn’t really seem like a table (you may not even arrange items vertically on your UI, which makes the terminology much worse!). We’re stuck with the “row” and “column” terminology used by Qt here, but the models can be used in whatever way makes the most sense for the data you need to represent.

In most real-life applications (as well as in the example code here), I use a 1D array with multiple roles, because I find that to be the simplest and most natural data structure. However, both QAbstractTableModel and QAbstractItemModel are available to you if you need a more complex visualization of more complex data. Once you get a handle on QAbstractListModel, the more general classes will make more sense.

How are Roles Used in QML?

As shown above, you will use the QML ListView’s model property to specify an object in your C++ or Python code that inherits from QAbstractListModel. The delegate property is then used to define the QML object that will visualize that data. In this case, each item in the list has three roles (name, serial, and connected), and we’ll want to access each of those data items in QML independently.

Looking at the GUI again:

GUI example

Each item in the list is visualized as a Text item where the content follows this pattern:

[index of item]: [name role] ([serial role]) - [connection role]

In our delegate, we can access each item of data using the name of the role:

ListView {
    model: controller.listmodel
    delegate: Text {
        text: `${index}: ${name} (${serial}) - ${connected ? "OK" : "NOT FOUND"}`
    }
}

Inside the delegate object we can simply use index, name, serial, and connected as if they were bound to the individual data items inside that element of the list. index is provided out-of-the-box by ListView, but name, serial, and connected are the names of roles we define ourselves. Within the delegate object, we can refer to those names, and our child class of QAbstractListModel will provide methods that QML can use to link those names to specific pieces of data.

The index value is also useful in our MouseArea. We added the MouseArea so that the user could click an item in the list and manipulate it. Since the MouseArea is inside the delegate, we have access to the index value. In the MouseArea’s signal handler onClicked, we will want to set the currentIndex property of the ListView to the index of the item in the list that was clicked:

ListView {
    id: deviceList

    delegate: Item {
        MouseArea {
            onClicked: deviceList.currentIndex = index
        }   // MouseArea
    }   // Item delegate

    highlight: Rectangle { color: "lightBlue" }
}   // ListView

Setting the currentIndex of the ListView enables the ListView to automatically animate the highlight object that we defined. When the user clicks an item in the list, it will move the Rectangle to highlight the selected list item.

As an exercise for the reader, try making the delegate more interesting. Instead of indicating the state of the connected role with just a Text, try using the connected role to set the text color of the delegate, or add an icon to each row that indicates whether or not the device is connected.

Setting Up the Roles

Let’s circle back to this statement in the documentation:

If your model is used within QML and requires roles other than the default ones provided by the roleNames() function, you must override it.

We want to use our own role names for this, so we can use names that make sense in our delegate (like name, serial, and connected). The mapping from integer role enum values (like Qt::UserRole) to strings of characters is established with the QAbstractItemModel::roleNames() method, which your custom listmodel will inherit. All classes that inherit QAbstractListModel need to implement this method, which returns the map from integers to byte arrays. In C++, this map is a QHash<int, QByteArray>, and in Python it is a basic dict. The integer is the role enum value, and the byte array is the string name used in QML to access that role in each item of the listmodel.

I like to set up my roles by doing two things: creating an enum (starting with the value Qt::UserRole and incrementing from there) that enumerates my custom roles, and then creating a dictionary that maps the role enum values to byte arrays (the names used by QML to access elements of the model). In our example, I might do:

Python
from enum import IntEnum, auto
from PySide6.QtCore import Qt

class DeviceItemRoles(IntEnum):
    NAME = Qt.UserRole
    SERIAL = auto()
    CONNECTED = auto()

_role_names = {
    DeviceItemRoles.NAME: b'name',
    DeviceItemRoles.SERIAL: b'serial',
    DeviceItemRoles.CONNECTED: b'connected'
}

Note again that in Python, the values are byte arrays (b''), not strings.

With this setup, the delegate of our ListView can access each piece of data in each list item using the strings name, serial, and connected. QML knows how the role integers (from the enum) map to the names because it knows that a QAbstractListModel must have a roleNames() method, so now we just need to give it a way to access each piece of data given the list index and the role. That is the job of the QAbstractItemModel::data() method, which we will get to shortly.

Subclassing a QAbstractListModel

Recall from the documentation that subclasses of QAbstractListModel need to implement the rowCount() and data() methods, plus roleNames() if the listmodel is used in QML. We’ll cover these one-by-one, but first let’s define how we’ll store our data.

Data Storage

I find that the easiest way to store the data (for a Python application) is with a list of dictionaries, where each dictionary uses the role enum as the key for each data value. This is by no means the only way, but it is very simple and often sufficient. So, you might start developing your custom listmodel class like this:

Python
class DeviceListModel(QAbstractListModel):
    def __init__(self):
        super().__init__()
        self._data = []

    def add_device(self, name, serial, connected):
        new_row = {
            DeviceItemRoles.NAME: name,
            DeviceItemRoles.SERIAL: serial,
            DeviceItemRoles.CONNECTED: connected
        }
        self._data.append(new_row)

When we create a new listmodel, the list of data, self._data, is just an empty list. We can then add device data to the list with the add_device() method, which takes the name, serial number, and connection status, puts them in a dictionary with the appropriate role enum values as keys, and then appends that dictionary to the data list.

Now that we’ve established how the data is stored, we can fill out the required methods.

The roleNames() Method

roleNames() is the easiest to implement, because it’s already done! The _role_names dictionary from above is exactly what roleNames() should return, so this one’s a no-brainer:

Python
def roleNames(self):
    return _role_names

That’s it!

The rowCount() Method

rowCount() is similarly straightforward. The number of rows is just the number of elements in our self._data list. We don’t need to do much here either:

Python
def rowCount(self, parent=QModelIndex()):
    return len(self._data)

The only thing to address is that weird parent argument. What’s that about?

It comes from the base class, QAbstractItemModel. The base class is more general. Whereas QAbstractListModel represents a one-dimensional list of items that all have the same type of elements, QAbstractItemModel can describe trees and other complex hierarchical structures. In those cases, you need to provide the index of a parent object in the tree so the rowCount() method can return the number of children of that parent. Once you get your bearings with the QAbstractListModel, you can dig into the QAbstractItemModel, but for now, let’s just ignore parent, because it doesn’t apply to a one-dimensional list. Just give it a default QModelIndex.

The data() Method

Finally, we need to implement a method that will return data values when QML asks for them. The C++ signature of this method is:

C++
QVariant QAbstractItemModel::data(const QModelIndex &index, int role = Qt::DisplayRole) const

So, our implementation of the method needs to take the index of the row we want (as a QModelIndex object) and the role of the individual data item we want (as an integer, like our convenient DeviceItemRoles enum), and it will return the data as a QVariant. With the PySide6 bindings, there is no QVariant. We can return whatever Python object we want, and if there’s no data at that index or with that role, we can just return None. A simple implementation in Python looks like:

Python
def data(self, index, role):
    if role not in list(DeviceItemRoles):
        return None

    try:
        device = self._data[index.row()]
    except IndexError:
        return None

    if role in device:
        return device[role]
    return None

There’s a little more meat here than in our roleNames() and rowCount() methods. First, we check that the role integer that was passed in is an item in our DeviceItemRoles enum. If it isn’t, then something is looking for a role we aren’t providing, so we’ll just return None.

Next, we’ll try to get the item of the list at the given index. Note that you can’t index the self._data list using the index argument directly. You need to call index.row(), which is a consequence of the fact that QAbstractListModel is a child of the more general QAbstractItemModel class, which is not necessarily a 1D list. If you look at the QModelIndex class, you’ll see that, in addition to row(), it also provides column(), as well as various other methods that only apply to more complex structures.

Anyway, if the given index is out of bounds, then something is looking for an invalid row, and we return None. Beyond that, both the index and the role are ok, so we return the appropriate value by indexing the list to get a dictionary, and then looking up the value of the role key in that dictionary. Whatever data was stored there gets returned.
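If you want to sanity-check this storage-and-lookup logic, it can be exercised without a Qt installation at all. Below is a plain-Python sketch in which FakeIndex and DeviceStore are hypothetical stand-ins I’m introducing here: FakeIndex mimics just the row() method of QModelIndex, and the Qt base class is omitted entirely:

```python
from enum import IntEnum, auto

USER_ROLE = 0x0100  # the integer value of Qt.UserRole, so Qt isn't required here

class DeviceItemRoles(IntEnum):
    NAME = USER_ROLE
    SERIAL = auto()
    CONNECTED = auto()

class FakeIndex:
    """Hypothetical stand-in for QModelIndex; exposes only row()."""
    def __init__(self, row):
        self._row = row

    def row(self):
        return self._row

class DeviceStore:
    """Plain-Python mirror of the listmodel's storage and data() logic."""
    def __init__(self):
        self._data = []

    def add_device(self, name, serial, connected):
        self._data.append({
            DeviceItemRoles.NAME: name,
            DeviceItemRoles.SERIAL: serial,
            DeviceItemRoles.CONNECTED: connected,
        })

    def rowCount(self):
        return len(self._data)

    def data(self, index, role):
        if role not in list(DeviceItemRoles):
            return None          # unknown role
        try:
            device = self._data[index.row()]
        except IndexError:
            return None          # invalid row
        if role in device:
            return device[role]
        return None

store = DeviceStore()
store.add_device("name1", 123, False)
store.add_device("name2", 456, True)
```

The role and bounds checks behave exactly as described above: a role outside the enum or a row past the end of the list yields None instead of raising.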

Updating, Inserting, and Removing Data

What was implemented above is sufficient for a listmodel that will never change, but that’s probably in the minority of use cases. If you only have a small-ish amount of static data, it would probably be easier to use a Repeater. More likely, you’ll want to add data to your list, remove data, or change data at runtime, and QAbstractListModel is a much better fit in these situations. In order to manipulate our list contents, we need to understand a few additional concepts.

Signaling Changes to Existing Row Data from the Application

In general, when we bind properties to QML, we provide a signal that we emit when the property changes. QML listens for that signal, and when it gets emitted, it calls the property getter to refresh the value. A QAbstractListModel does this using the dataChanged signal provided by its parent, QAbstractItemModel, which specifies which elements of the model changed (in terms of rows, columns, and roles).

In some cases, you need to emit this signal yourself. For example, we might want our listmodel class to have a method that sets all devices to “disconnected.” That would look like:

Python
def set_all_disconnected(self):
    for d in self._data:
        d[DeviceItemRoles.CONNECTED] = False
    self.dataChanged.emit(self.index(0), self.index(self.rowCount() - 1), [])

In this method, we first loop over all items in the data list and set the connected value to False. Then, we only need to emit a single signal that says that all items in the list have changed (i.e., every index from 0 to rowCount() - 1). The empty list in the last parameter of the signal is a list of roles that changed; leaving it empty indicates that all roles have changed. In this case, you could instead pass [DeviceItemRoles.CONNECTED], which only makes a practical difference if you have many roles.

Signaling Changes to the Collection of Rows

Even if you don’t change any existing data, you might add or remove entire rows, and QML will need to know what to update when that happens. In this case, we use a pair of methods, beginInsertRows() and endInsertRows(), to specify that we’re adding data (and how many rows we’re adding).

Let’s say we want to add a new element to the list after the selected index. We can do that with a method like:

Python
def add_device_after_index(self, idx, name, serial, connected):
    index_of_new_device = idx + 1
    new_device = {
        DeviceItemRoles.NAME: name,
        DeviceItemRoles.SERIAL: serial,
        DeviceItemRoles.CONNECTED: connected
    }

    self.beginInsertRows(QModelIndex(), index_of_new_device, index_of_new_device)
    self._data.insert(index_of_new_device, new_device)
    self.endInsertRows()

beginInsertRows() needs three things: the QModelIndex of the parent into which rows are inserted (for 1D listmodels that we’re talking about here, just give it a default one), the row number that the first new row will have after insertion, and the row number that the last new row will have after insertion. I’m never able to remember this, so I almost always consult the helpful diagrams in the documentation for this method.

After that, we insert the new data into our list, and then we call endInsertRows(). The beginInsertRows() method emits a signal for you (rowsAboutToBeInserted), and endInsertRows() emits a different signal for you (rowsInserted), so you don’t have to emit any signals yourself! These signals notify QML that it’s time to refresh the ListView with new rows, and which rows need to be updated (if the model contains large quantities of data, we obviously want to update as few as possible).
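The row-number bookkeeping is the part I find easiest to get wrong, so here is the same arithmetic in plain Python (no Qt required): for a single row inserted after selected index idx, both the first-row and last-row arguments to beginInsertRows() are idx + 1.

```python
# Plain-Python mirror of the bookkeeping in add_device_after_index():
# a single new row inserted after selected index idx will occupy row idx + 1,
# so idx + 1 is both the "first" and "last" argument given to beginInsertRows().
data = ["dev0", "dev1", "dev2"]   # pretend these are the device dictionaries

idx = 1                    # currently selected row
first = last = idx + 1     # row number the new row will have after insertion

# ...beginInsertRows(QModelIndex(), first, last) would be called here...
data.insert(first, "new_device")
# ...endInsertRows() would be called here...
```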

Signaling Changes to Existing Row Data from the GUI

The final scenario we’ll discuss addresses the last part of the QAbstractListModel documentation on subclassing:

For editable list models, you must also provide an implementation of setData() and implement the flags() function so that it returns a value containing Qt::ItemIsEditable.

As you can see above, if the application manipulates data in the QAbstractListModel, it simply needs to emit a signal (dataChanged) to notify QML that there’s something new. The setData() method is used when information goes the opposite direction, from the UI to the application. For example, say the delegate contains a checkbox. If the user clicks the checkbox in a particular row, QML needs to tell the QAbstractListModel that there is a new value for the checkbox’s role at a particular list index. It does this by calling the setData() method, which takes three arguments: the index, the new value, and the role. It will look very similar to the data() method above, perhaps like:

Python
def setData(self, index, value, role):
    if role != MyRoleEnum.SOME_EDITABLE_ROLE:
        return False

    try:
        data_row = self._data_list[index.row()]
    except IndexError:
        return False

    data_row[MyRoleEnum.SOME_EDITABLE_ROLE] = value
    self.dataChanged.emit(index, index, [MyRoleEnum.SOME_EDITABLE_ROLE])
    return True

In short, you use the index and role arguments to find the data you’re looking for in the model, you set that data to the new value, and then you emit dataChanged.

Summary

The QAbstractListModel (and its base class, QAbstractItemModel) is a powerful way to present a list of data to a user interface, but the extensive abstraction can make the documentation hard to parse. A simple example, along with a small number of important concepts, should help clarify:

  • Many of QAbstractListModel’s methods are inherited from its base classes, and consequently involve a parent QModelIndex that doesn’t apply to a simple list and can be very confusing.
  • A “role” is simply a way to specify individual pieces of data in a list item.
  • Roles can be used to make a list seem like a table, and that’s fine… you can still use QAbstractListModel!
  • When adding items to the list or removing items from it, call the beginInsertRows() and endInsertRows() methods before making your changes, and the correct signals will be emitted for you at the right times.
  • If your application updates the model by changing data in an existing item in the list (or multiple existing items in the list), make sure you emit the dataChanged signal after the changes are made to notify QML that it needs to update its views.
  • If the user interacts with the QML UI and modifies data in the model, you will need to implement setData() to store the new information in the model object, and then you will need to emit dataChanged.

Building a Qt app?

I’d love to help! Give us a call or send us an email to discuss! 

Learn more about our Application Development expertise and contact us for your next project.

The post Using a QAbstractListModel in QML appeared first on DMC, Inc..

Resizing UIs with QML Layouts https://www.dmcinfo.com/blog/18019/resizing-uis-with-qml-layouts/ Fri, 09 Dec 2022 09:40:04 +0000
Overview

When I was first getting exposed to QML as a language for describing user interfaces, almost everything was easy to grasp except the concept of layouts. Their behavior never seemed natural, and I spent a lot of time fighting with them before I was finally able to identify the things that didn’t do what I wanted them to do.

This blog aims to give you the head start I didn’t have by walking you through a series of simple examples that illustrate most of the principles.

A Brief Introduction to Layouts

In short, layouts are QML elements that control how their children (the elements that they contain) are positioned and resized. A layout has no visible characteristics itself. There are a few types of layouts:

  • RowLayout: positions its children in a single row, either left to right (default) or right to left
  • ColumnLayout: positions its children in a single column, either top to bottom (default) or bottom to top
  • GridLayout: positions children in successive cells of a grid
    • Cells in the grid are rearranged when the GridLayout is resized.
    • RowLayout and ColumnLayout are special cases of a GridLayout with only one row or column.

The purpose of this walkthrough is to familiarize you with the behaviors of layouts (particularly behaviors that are unintuitive), not to describe all of their features. As such, we will largely focus on the ColumnLayout.

A Side Note on the StackLayout

There is one more QML Layout type called StackLayout. This is most closely related to the concept of “tabs” or “pages,” where different sets of controls can be grouped together and displayed in the same area, and only one group is visible at a time. The StackLayout behaves much like the others, but isn’t primarily for positioning and resizing its content. Since the positioning and resizing behaviors are the interesting ones, we’ll focus on those here, and you will be more than capable of figuring out the StackLayout on your own.

Walkthrough

Let’s step through a series of tests to understand the behavior of layouts.

Step 0: Start a New QtQuick Project in QtCreator

I am using Qt Creator 8.0.2 on Windows, but any recent version on any platform will do. Simply create a new project from the QtQuick application template. I will be using Qt 6.2.1, but Qt5 should be nearly the same.

When your template application is generated, you’ll have a main.qml file that looks like the code on the left. When you build and run the project, you’ll see an empty window like the one on the right. The only change I made to the template QML file is the width of the Window.

import QtQuick

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")
}

As a side note, you do not need to build the application to see how the window will behave. You can use the “QML utility” to visualize and interact with the currently-active QML file. This utility can be launched from the Tools menu:

For convenience, I have that utility mapped to Ctrl-Q, which is not the default; out of the box, Ctrl-Q exits Qt Creator, so in a default setup, don’t just hit Ctrl-Q and expect to see your QML object.

Step 1: Add a Rectangle

Let’s put something in that window, maybe just a colored rectangle to start with. In my example code, I'll highlight changes from the previous example in blue:

import QtQuick

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")

    Rectangle {
        color: "lightBlue"
    }

}

We’ve already hit a snag. Where’s my rectangle?!

It’s there… but Qt has no way of knowing how big a rectangle you want, so, naturally, it chose 0x0 pixels. The rectangle is there, but it has no height or width. It is conceptual… the essence of a rectangle, wafting in the breeze, elusive.

Lesson 1: The implicit height and width of a QML Rectangle object is zero.

Step 2: Make it a Much Better Rectangle

We’ll specify the height and width of the rectangle so we can actually see it. Now we have:

import QtQuick

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")

    Rectangle {
        color: "lightBlue"
        height: 64
        width: 64

    }
}

Step 3: More Rectangles. MORE. 

Next, add another two rectangles and vary the sizes and colors a little.

import QtQuick

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")

    Rectangle {
        color: "pink"
        height: 256
        width: 256
    }

    Rectangle {
        color: "lightGreen"
        height: 128
        width: 128
    }


    Rectangle {
        color: "lightBlue"
        height: 64
        width: 64
    }
}

This isn’t necessarily what we wanted. In the absence of any specific information about where to put those rectangles, the only reasonable thing to do is just layer them on top of each other in the upper left corner, pink on the bottom, then light green, then light blue on top. If we hadn’t made them different sizes in exactly that order, we wouldn’t have even known, and the smaller rectangles would’ve been hidden under the larger one! We might have gotten incredibly frustrated and hurled our laptop into a fire! Boy, would IT have been mad in this extremely hypothetical example!

Lesson 2: Unless you tell Qt where to put stuff, it won’t know, and it has no problem letting things overlap.

Step 4: Add a Column Layout

As stated above, a layout object’s job is to handle the positioning and sizing of its children. So, let’s put our three rectangles into a column layout and change nothing else. Note that, in order to bring in the layout objects, we need to import the QtQuick.Layouts module at the top:

import QtQuick
import QtQuick.Layouts

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")

    ColumnLayout {
        Rectangle {
            color: "pink"
            height: 256
            width: 256
        }

        Rectangle {
            color: "lightGreen"
            height: 128
            width: 128
        }

        Rectangle {
            color: "lightBlue"
            height: 64
            width: 64
        }
    }
}

Now we’re cookin’! The ColumnLayout object takes its children and arranges them in a column, in the order that they were declared. Try resizing the window, though. You’ll notice that the window resizes, but the rectangles don’t. They stay the same size, and more blank space fills the areas below and to the right of the rectangles.

Step 5: Filling All Available Space

If we want the rectangles to keep our specified height but always resize to be the width of the window, we can set the Layout.fillWidth property to true in our rectangles. This is the rectangle telling its parent layout, “set my width so that I’m always as wide as possible.”

import QtQuick
import QtQuick.Layouts

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")

    ColumnLayout {
        Rectangle {
            color: "pink"
            height: 256
            Layout.fillWidth: true
        }

        Rectangle {
            color: "lightGreen"
            height: 128
            Layout.fillWidth: true
        }

        Rectangle {
            color: "lightBlue"
            height: 64
            Layout.fillWidth: true
        }
    }
}

WHAT. WHY?

Instead of specifying the width of the rectangles, we told the layout to make them as wide as possible, and now they’re gone! You might throw several more PCs into a fire before realizing that the ColumnLayout did in fact make them as wide as it could: a layout can only make its children as wide as it itself is, and, by default, a layout is 0 pixels wide.

Lesson 3: Layouts, like rectangles, have zero implicit width/height.

Step 6: Make the Layout Wider Than 0 Pixels

We want that layout to be pinned to the width of the window, so let’s anchor the layout to its parent’s boundaries (i.e., make the ColumnLayout fill the Window left-to-right and top-to-bottom) by setting the layout’s anchors.fill property to parent (which is the Window object):

import QtQuick
import QtQuick.Layouts

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")

    ColumnLayout {
        anchors.fill: parent

        Rectangle {
            color: "pink"
            height: 256
            Layout.fillWidth: true
        }

        Rectangle {
            color: "lightGreen"
            height: 128
            Layout.fillWidth: true
        }

        Rectangle {
            color: "lightBlue"
            height: 64
            Layout.fillWidth: true
        }
    }
}

That’s better! Now, the user can resize the window to be as wide or as narrow as needed, and the rectangles will always reach from the far left to the far right. If you stretch the window vertically, you’ll see that the rectangles always stay their specified height and they just get spaced farther apart as the window gets taller. Let’s see if we can fill their heights also!

Step 7: Use the Layout to Fill Height

Now, let’s replace the height property of each rectangle with Layout.fillHeight so the layout will know to resize all of them in both width and height:

import QtQuick
import QtQuick.Layouts

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")

    ColumnLayout {
        anchors.fill: parent

        Rectangle {
            color: "pink"
            Layout.fillHeight: true
            Layout.fillWidth: true
        }

        Rectangle {
            color: "lightGreen"
            Layout.fillHeight: true
            Layout.fillWidth: true
        }

        Rectangle {
            color: "lightBlue"
            Layout.fillHeight: true
            Layout.fillWidth: true
        }
    }
}

Well… now, if we resize the window, the rectangles do always fill both width and height, so that's great! But, now we’ve lost the ratio of heights that made these rectangles so special to begin with! How do we get that back so they fill left & right but maintain a specified ratio vertically?

Step 8: Preferred, Minimum, and Maximum Sizes

There’s a nuance here that is easy to overlook and definitely confused me. When Layout.fillWidth or Layout.fillHeight is true, how does the layout decide what proportion of that dimension is allocated to each element? By default, it makes sense to divide it up evenly, which is what happened. But what happens if there are other constraints, like when we specified width or height? Well, first, consider that width and height are properties of the Rectangle object, whereas Layout.fillHeight talks to the Rectangle’s parent layout. You should not use explicit position or size properties (like x, y, width, or height) in objects managed by a layout. The layout should be free to manage positions and sizes for you.

Lesson 4: If an object is a child of a Layout, don't set explicit size properties of the child. Only use the Layout.* properties to delegate those duties to the Layout.

We can, however, give the layout more information so it can make better decisions. Each element that sets Layout.fillWidth or Layout.fillHeight to true can also set:

  • Layout.minimumWidth and Layout.minimumHeight
  • Layout.maximumWidth and Layout.maximumHeight
  • Layout.preferredWidth and Layout.preferredHeight

The minimum and maximum properties are self-explanatory, but what does “preferred” mean in the context of elements that the layout wants to stretch to fit all available space? As it turns out, it specifies the proportions! Let’s put our original heights back in there as preferred heights:

import QtQuick
import QtQuick.Layouts

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")

    ColumnLayout {
        anchors.fill: parent

        Rectangle {
            color: "pink"
            Layout.fillHeight: true
            Layout.fillWidth: true
            Layout.preferredHeight: 256
        }

        Rectangle {
            color: "lightGreen"
            Layout.fillHeight: true
            Layout.fillWidth: true
            Layout.preferredHeight: 128
        }

        Rectangle {
            color: "lightBlue"
            Layout.fillHeight: true
            Layout.fillWidth: true
            Layout.preferredHeight: 64
        }
    }
}

That’s what we wanted, and now when we stretch the window vertically, the rectangles always maintain the given ratio. The green rectangle is always half the height of the pink one, and the blue one is always half the height of the green one (and therefore a quarter of the height of the pink one).

In the absence of maximum or minimum values, you can even simplify the preferred heights down to the proportions you want. For example, the example above will behave exactly the same if you change the Layout.preferredHeight values to 4, 2, and 1! Sometimes I’ll do that if I don’t care about the absolute sizes and just want to express that elements should be sized in a 4:2:1 ratio (or whatever ratio you’re looking to achieve). But be aware that this is only the case when all child elements of the layout are set to fill in that direction, and they all have a preferred size in that direction.
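The allocation rule is easy to sanity-check outside of QML. Here is a quick Python sketch (my own illustration of the proportionality, not Qt’s actual layout algorithm) that splits a layout’s height according to preferred sizes, ignoring spacing for simplicity:

```python
def fill_allocation(total, preferred):
    """Split `total` pixels among children in proportion to their preferred sizes."""
    s = sum(preferred)
    return [total * p / s for p in preferred]

# A 4:2:1 ratio over a 420-pixel-tall layout (spacing ignored)
print(fill_allocation(420, [4, 2, 1]))        # [240.0, 120.0, 60.0]

# 256:128:64 reduces to the same 4:2:1 proportions
print(fill_allocation(420, [256, 128, 64]))   # [240.0, 120.0, 60.0]
```

This is why 256, 128, 64 and 4, 2, 1 behave identically when everything fills: only the ratios matter.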

Lesson 5: If the layout itself has a specified size, AND all child objects use Layout.fillWidth/Height, AND all child elements have a preferredWidth/Height set, then the proportion of the fill allocated to each child will be the ratios of the preferredWidth/Height!

Let’s dig a little deeper. What if we don’t specify the vertical height of the layout? In other words, what if we only anchor the layout’s left and right sides to the window?

import QtQuick
import QtQuick.Layouts

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")

    ColumnLayout {
        anchors.left: parent.left
        anchors.right: parent.right

        Rectangle {
            color: "pink"
            Layout.fillHeight: true
            Layout.fillWidth: true
            Layout.preferredHeight: 256
        }

        Rectangle {
            color: "lightGreen"
            Layout.fillHeight: true
            Layout.fillWidth: true
            Layout.preferredHeight: 128
        }

        Rectangle {
            color: "lightBlue"
            Layout.fillHeight: true
            Layout.fillWidth: true
            Layout.preferredHeight: 64
        }
    }
}

At the bottom, there’s some white space now. The layout isn’t anchored to the window’s top and bottom, so the rectangles are always their “preferred” heights, even when we stretch the window vertically. The Layout.fillHeight properties don’t do anything here. The layout doesn’t know what space it has available to fill if it doesn’t have a parent controlling its size in that direction! The only information the layout can use to control its own height is the sum of the implicit heights of its children (provided by their Layout.preferredHeight properties). The Layout.fillHeight properties are simply ignored.

OK, so what happens if we don’t fill height, but we do have the layout anchored to all four sides of the window?

import QtQuick
import QtQuick.Layouts

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")

    ColumnLayout {
        anchors.fill: parent

        Rectangle {
            color: "pink"
            // Layout.fillHeight: true
            Layout.fillWidth: true
            Layout.preferredHeight: 256
        }

        Rectangle {
            color: "lightGreen"
            // Layout.fillHeight: true
            Layout.fillWidth: true
            Layout.preferredHeight: 128
        }

        Rectangle {
            color: "lightBlue"
            // Layout.fillHeight: true
            Layout.fillWidth: true
            Layout.preferredHeight: 64
        }
    }
}

Now the layout knows what height the rectangles like to be, but it was not told to resize them with a Layout.fillHeight. So, the layout stretches, and the rectangles get repositioned, but they do not get resized vertically. If you reduce the Layout.preferredHeight values to 4, 2, and 1, what happens?

import QtQuick
import QtQuick.Layouts

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")

    ColumnLayout {
        anchors.fill: parent

        Rectangle {
            color: "pink"
            Layout.fillWidth: true
            Layout.preferredHeight: 4
        }

        Rectangle {
            color: "lightGreen"
            Layout.fillWidth: true
            Layout.preferredHeight: 2
        }

        Rectangle {
            color: "lightBlue"
            Layout.fillWidth: true
            Layout.preferredHeight: 1
        }
    }
}

Interesting. You get very slim horizontal lines that are not spaced out uniformly. The layout uses the Layout.preferredHeight properties to allocate proportional chunks of space (see the annotated figure below), but the rectangle placed in that space has exactly the height specified by the Layout.preferredHeight property.

Now, the last test we’ll do on sizing: setting the Layout.preferredHeight of a subset of child elements. For example, let’s say we put Layout.fillHeight back in there and remove the Layout.preferredHeight from the pink rectangle only (and restore the larger sizes 256, 128, and 64):

import QtQuick
import QtQuick.Layouts

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")

    ColumnLayout {
        anchors.fill: parent

        Rectangle {
            color: "pink"
            Layout.fillWidth: true
            Layout.fillHeight: true
            // Layout.preferredHeight: 256
        }

        Rectangle {
            color: "lightGreen"
            Layout.fillWidth: true
            Layout.fillHeight: true
            Layout.preferredHeight: 128
        }

        Rectangle {
            color: "lightBlue"
            Layout.fillWidth: true
            Layout.fillHeight: true
            Layout.preferredHeight: 64
        }
    }
}

The pink rectangle is very small… it's easy to miss, but it's there. It does grow and shrink a little when resizing, but not much. The proportions of the green & blue rectangles are correctly maintained, but the QML engine doesn’t really have much information about what to do with the pink one. For the sake of being thorough, let’s reduce the Layout.preferredHeight of the green & blue rectangles to a simple 2:1 ratio:

import QtQuick
import QtQuick.Layouts

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")

    ColumnLayout {
        anchors.fill: parent

        Rectangle {
            color: "pink"
            Layout.fillWidth: true
            Layout.fillHeight: true
            // Layout.preferredHeight: 4
        }

        Rectangle {
            color: "lightGreen"
            Layout.fillWidth: true
            Layout.fillHeight: true
            Layout.preferredHeight: 2
        }

        Rectangle {
            color: "lightBlue"
            Layout.fillWidth: true
            Layout.fillHeight: true
            Layout.preferredHeight: 1
        }
    }
}

The QML engine still maintains the correct ratio for the child elements with a specified Layout.preferredHeight, but the amount of space allocated for the pink one is different. This is all academic… I don't know what the use case for this would be. My only goal is to demonstrate how things behave and establish some rules.

Lesson 6: If all children of a layout are set to fill in the direction of the layout, either give all of them a Layout.preferredHeight/Width, or none of them.

Step 9: Additional Constraints

The minimum/maximum height and width properties can also be set. These allow the rectangles to resize, but only down to a certain minimum or up to a certain maximum size. In this case, we’ll put a maximum height on the pink rectangle. When the window is resized, that rectangle will grow up to 384 pixels tall. It won’t get any taller, but the blue and green rectangles will continue to maintain their 2:1 ratio. Here it doesn’t matter whether we use 256:128:64 or 4:2:1; the behavior is the same:

import QtQuick
import QtQuick.Layouts

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")

    ColumnLayout {
        anchors.fill: parent

        Rectangle {
            color: "pink"
            Layout.fillWidth: true
            Layout.fillHeight: true
            Layout.preferredHeight: 4
            Layout.maximumHeight: 384
        }

        Rectangle {
            color: "lightGreen"
            Layout.fillWidth: true
            Layout.fillHeight: true
            Layout.preferredHeight: 2
        }

        Rectangle {
            color: "lightBlue"
            Layout.fillWidth: true
            Layout.fillHeight: true
            Layout.preferredHeight: 1
        }
    }
}

We can continue to add additional constraints, and the QML engine will do its best to resize based on all of them.

Step 10: A Fancy Window with Nested Layouts

Let’s take it up a notch by maintaining the 4:2:1 ratio of rows, and in each of those, let’s put a RowLayout. We’ll add more rectangles in various proportions in each of those rows. Let’s also set the spacing on the rows to zero, but leave the column layout’s spacing as the default value (5 pixels). A lot changed here, so I'll skip the highlighting:

import QtQuick
import QtQuick.Layouts

Window {
    width: 640
    height: 480
    visible: true
    title: qsTr("Hello World")

    ColumnLayout {
        anchors.fill: parent

        RowLayout {
            Layout.fillWidth: true
            Layout.fillHeight: true
            Layout.preferredHeight: 4

            spacing: 0

            Rectangle {
                color: "pink"
                Layout.fillWidth: true
                Layout.fillHeight: true
            }

            Rectangle {
                color: "darkRed"
                Layout.fillWidth: true
                Layout.fillHeight: true
            }
        }

        RowLayout {
            Layout.fillWidth: true
            Layout.fillHeight: true
            Layout.preferredHeight: 2

            spacing: 0

            Rectangle {
                color: "lightGreen"
                Layout.fillWidth: true
                Layout.preferredWidth: 1
                Layout.fillHeight: true
            }

            Rectangle {
                color: "green"
                Layout.fillWidth: true
                Layout.preferredWidth: 2
                Layout.fillHeight: true
            }

            Rectangle {
                color: "darkGreen"
                Layout.fillWidth: true
                Layout.preferredWidth: 3
                Layout.fillHeight: true
            }

        }

        RowLayout {
            Layout.fillWidth: true
            Layout.fillHeight: true
            Layout.preferredHeight: 1

            spacing: 0

            Rectangle {
                color: "lightBlue"
                Layout.fillWidth: true
                Layout.fillHeight: true
            }

            Rectangle {
                color: "steelBlue"
                Layout.fillWidth: true
                Layout.fillHeight: true
            }

            Rectangle {
                color: "blue"
                Layout.fillWidth: true
                Layout.fillHeight: true
            }

            Rectangle {
                color: "darkBlue"
                Layout.fillWidth: true
                Layout.fillHeight: true
            }

            Rectangle {
                color: "midnightBlue"
                Layout.fillWidth: true
                Layout.fillHeight: true
            }
        }
    }
}

Step 11: Using Real Controls

I mentioned previously that Rectangle objects don’t have an implicit size, but Button objects do (note that we need to add import QtQuick.Controls to get the Button object). Let’s see how that changes things. First, let’s strip it down to something like what we had way back in Step 4. The layout is not anchored, and it contains two buttons:

import QtQuick
import QtQuick.Layouts
import QtQuick.Controls

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")

    ColumnLayout {
        Button {
            text: "Top button is...short"
        }

        Button {
            text: "Bottom button is...long"
        }
    }
}

Nothing terribly interesting there, but recall that when the layout wasn't anchored before (with just Rectangles), we couldn't see the contents at all. Buttons, unlike Rectangles, do have an implicit size. You can see that each Button is as wide as it needs to be to accommodate its text, and the layout stays "fitted" to the implicit sizes of those buttons. As a consequence, the buttons neither resize nor reposition when the window is resized.

Now let’s say we want to fill the buttons in both directions:

import QtQuick
import QtQuick.Layouts
import QtQuick.Controls

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")

    ColumnLayout {
        Button {
            text: "Top button is...short"
            Layout.fillWidth: true
            Layout.fillHeight: true
        }

        Button {
            text: "Bottom button is...long"
            Layout.fillWidth: true
            Layout.fillHeight: true
        }
    }
}

That’s a little more interesting. The buttons don’t fill the height and width of the whole Window, they fill the height and width of the ColumnLayout, and we didn’t say anything about how big the ColumnLayout should be. That is, we didn’t anchor it to the Window. It is not itself inside another layout, etc. The layout therefore just has its implicit size. The implicit height is the sum of its children’s implicit heights (plus spacing), and its implicit width is the maximum width of its children. So, when the children tell their parent layout to resize them to fill the layout’s height, nothing changes (because they already do), and, when they tell the parent layout to fill the layout’s width, really all that happens is that the narrower button fills to the same width as the wider one. Obviously, a RowLayout will behave the same, but in the horizontal direction instead.
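To make that concrete, here is a small Python sketch (a simplified model of my own, not Qt’s actual code) of how an unanchored ColumnLayout derives its implicit size from its children, using the default 5-pixel spacing:

```python
def column_implicit_size(children, spacing=5):
    """Model of a ColumnLayout's implicit size: width is the widest child's
    width; height is the sum of child heights plus spacing between them."""
    implicit_width = max(w for w, h in children)
    implicit_height = sum(h for w, h in children) + spacing * (len(children) - 1)
    return implicit_width, implicit_height

# Two buttons with hypothetical implicit sizes of 140x40 and 160x40 pixels
print(column_implicit_size([(140, 40), (160, 40)]))  # (160, 85)
```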

Let’s anchor the layout to the Window and fill width (but not height):

import QtQuick
import QtQuick.Layouts
import QtQuick.Controls

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")

    ColumnLayout {
        anchors.fill: parent

        Button {
            text: "Top button is...short"
            Layout.fillWidth: true
            // Layout.fillHeight: true
        }

        Button {
            text: "Bottom button is...long"
            Layout.fillWidth: true
            // Layout.fillHeight: true
        }
    }
}

Now both buttons stretch across the Window and are spaced out such that the column is divided in half vertically, and each button is in the vertical center of its half.

Step 12: Alignment

We probably don’t want those buttons to be haphazardly floating in the middle of their space. We probably want them either both at the top, both at the bottom, or one at the top and the other at the bottom. All of this can be achieved, but first we need to understand the quirks of the Layout.alignment property.

First, look at the window we created and notice the language I used above. The two buttons are not spaced equally relative to the height of the layout. There's twice as much space between the buttons as there is between the top button and the top of the window. Conceptually, the layout’s full height is divided in half, creating two "cells," and each button hovers in the center of its “cell.” This is important because the Layout.alignment property will align a button within its “cell,” not relative to the whole layout! So, if we align both buttons to the top, they will not both be at the top of the window. They are each at the top of their own “cell,” which leaves the bottom button hovering somewhere in the middle of the window:

import QtQuick
import QtQuick.Layouts
import QtQuick.Controls

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")

    ColumnLayout {
        anchors.fill: parent

        Button {
            text: "Top button is...short"
            Layout.fillWidth: true
            Layout.alignment: Qt.AlignTop
        }

        Button {
            text: "Bottom button is...long"
            Layout.fillWidth: true
            Layout.alignment: Qt.AlignTop
        }
    }
}

If the bottom button had instead used Layout.alignment: Qt.AlignBottom, then the top button would be at the top of the window and the bottom button would be at the bottom of the window. If we want both at the top, Layout.alignment isn’t what we want. We have a couple options for that:

  1. Instead of anchoring the ColumnLayout with anchors.fill: parent, we can just anchor it to the Window’s left and right sides. Then, the buttons will fill the width of the Window, but the layout will only take up as much height as the two buttons (plus any ColumnLayout spacing). The buttons will both be stacked at the top.
  2. We can let the layout take up the whole height, and then add a dummy Item after the second button that will eat up all available height, pushing the two buttons up to the top, like this:
import QtQuick
import QtQuick.Layouts
import QtQuick.Controls

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")

    ColumnLayout {
        anchors.fill: parent

        Button {
            text: "Top button"
            Layout.fillWidth: true
        }

        Button {
            text: "Middle button"
            Layout.fillWidth: true
        }

        Item { Layout.fillHeight: true }

        Button {
            text: "Bottom button"
            Layout.fillWidth: true
        }

    }
}

This is useful because now we can also add a button at the very bottom. We wouldn’t be able to achieve this with the alignment properties unless we divided up the layout’s “cells” just right, which wouldn’t be worth the hassle. Otherwise, we'd have to nest another ColumnLayout inside the outer one and use it to keep the top two buttons together.
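For comparison, the first option above (anchoring the layout only to the window’s left and right sides) would look something like the following sketch. The layout hugs the implicit height of its two buttons, so both sit stacked at the top of the window:

```qml
import QtQuick
import QtQuick.Layouts
import QtQuick.Controls

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")

    ColumnLayout {
        // Only the sides are anchored, so the layout keeps its implicit height
        anchors.left: parent.left
        anchors.right: parent.right

        Button {
            text: "Top button"
            Layout.fillWidth: true
        }

        Button {
            text: "Middle button"
            Layout.fillWidth: true
        }
    }
}
```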

Lesson 7: Dummy Items used as spacers can often give you better control of where things are positioned in the direction of the layout’s ordering of elements.

I prefer to use spacer Items instead of Layout.alignment, at least with regard to alignment in the direction of the layout. If we wanted things to be aligned left or right in a ColumnLayout (or top/bottom in a RowLayout), then Layout.alignment makes sense:

import QtQuick
import QtQuick.Layouts
import QtQuick.Controls

Window {
    width: 320
    height: 480
    visible: true
    title: qsTr("Hello World")

    ColumnLayout {
        anchors.fill: parent

        Button {
            text: "Top button"
            // Layout.fillWidth: true
            Layout.alignment: Qt.AlignRight
        }

        Button {
            text: "Middle button"
            // Layout.fillWidth: true
            Layout.alignment: Qt.AlignLeft
        }

        Item { Layout.fillHeight: true }

        Button {
            text: "Bottom button"
            // Layout.fillWidth: true
            Layout.alignment: Qt.AlignHCenter
        }
    }
}

Lesson 8: When positioning items on the axis that the layout does not control, the Layout.alignment property is more useful than it is on the axis that the layout does control.

Summary

Layouts are a modern and flexible method for creating nicely-resizable QML user interfaces. Until you have some time to play with them and see how they behave, they can seem unintuitive. I have learned several lessons the hard way, and I will reiterate them here for your comfort and convenience:

  1. The implicit height and width of Rectangles are zero.
  2. Unless you tell Qt where to put stuff, it won’t know, and it has no problem letting things overlap.
  3. Layouts, like Rectangles, have zero implicit width/height.
  4. If an object is a child of a Layout, don't set explicit size properties of the child: only use the Layout.* properties to delegate those duties to the Layout.
  5. If the layout itself has a specified size, AND all child objects use Layout.fillWidth/Height, AND all child elements have a Layout.preferredWidth/Height set, then the proportion of the fill allocated to each child will be the ratios of the preferredHeight/Widths!
  6. If all children of a layout are set to fill in the direction of the layout, either give all of them a Layout.preferredHeight/Width, or none of them.
  7. Dummy Items used as spacers can often give you better control of where things are positioned in the direction of the layout’s ordering of elements.
  8. When positioning items on the axis that the layout does not control, the Layout.alignment property is more useful than it is on the axis that the layout does control.

Building a Qt App?

I’d love to help! Give us a call or send us an email to discuss. Learn more about our Web Application Development solutions and contact us today for your next project!

The post Resizing UIs with QML Layouts appeared first on DMC, Inc..

A Brief Tutorial on Qt’s Resource Files https://www.dmcinfo.com/blog/18025/a-brief-tutorial-on-qts-resource-files/ Wed, 07 Dec 2022 17:32:56 +0000

One of the many tools Qt provides for you is what’s known as the “resource compiler.” The idea is that you might have some data (say, an icon or image file) that your application needs. You could place that file in a particular location on the file system, and your application could load it at run time, but you would need to either ensure that it’s there every time the app runs or ensure that the app will still work without it. The resource compiler gives you an alternative: load that file at compile time and bake the data directly into your executable. Then you never need to worry about finding the file at runtime.

It's a simple and useful system that is worth understanding. The first section below is an overview of what it is, why it’s useful, and how to use it. The second section details a few things that have tripped me up.

Part 1: Details of the Qt Resource System

The Guts of a QRC File

A resource file (usually with a “.qrc” extension) is just XML that lets you organize the app’s resources to your liking (you don’t need to write the XML yourself if you’re using Qt Creator). Once resources are compiled into the executable, you won’t have the luxury of a file system to help you organize and identify your files. That is the job of the QRC file, which is processed as follows at compile time:

  • The resource compiler reads the QRC file
  • The resource compiler loads the resources listed in the QRC file
  • The resource compiler generates a C++ source file containing a huge byte array with the exact contents of those resources
  • Your regular compiler toolchain compiles the generated C++ source into an object file that is linked with the rest of your application
  • Other parts of your code use a special URL notation to reference resource files, and Qt maps each URL to the right part of the compiled byte array

Let’s take a quick look at what it does behind the scenes. Let’s say I have a file that the app uses as a background image. The file is 1462239 bytes:

$ wc -c background.png
1462239 background.png

In hex, 1462239 bytes is 0x164fdf. The generated C++ source looks like this:

static const unsigned char qt_resource_data[] = {
    // /home/markl/path/to/background.png
    0x0,0x16,0x4f,0xdf,
    0x89,
    0x50,0x4e,0x47,0xd,0xa,0x1a,0xa,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,...

Note that this byte array starts with the four-byte length of the file (0x00 0x16 0x4f 0xdf) followed by the exact bytes of the png file (which we can recognize by the standard magic bytes at the beginning of every png file, 0x89 0x50 0x4e 0x47 0x0d 0x0a 0x1a 0x0a). Additional resource files are concatenated onto that same byte array after the background image. That is, the array contains the size of background.png, then the bytes of background.png, then four bytes denoting the length of the next file, then the bytes of the next file, and so on.
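As a quick sanity check, the four-byte length prefix is just the file size encoded big-endian, which you can confirm in a line of Python:

```python
# The first four bytes of the entry encode the resource's size, big-endian
prefix = bytes([0x00, 0x16, 0x4F, 0xDF])
print(int.from_bytes(prefix, byteorder="big"))  # 1462239, matching wc -c
```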

Finally, at the bottom of the generated C++ file are some macros and other helpers. Note that you don’t need to interact with anything in this file directly. We’re only peeking behind the curtain here for educational purposes. Qt’s resource compiler will generate that C++ file for you, and your system’s toolchain will handle compiling and linking it. All you need to do is use URLs in your code to reference the resources you want to use.

URLs and aliases

Once the data is compiled into your application, it no longer has a filesystem path for you to refer to it in your code. So, Qt provides a URL format to conveniently identify resource files listed in your QRC file. You can optionally give each data file an easy-to-remember alias so you can refer to it more conveniently (and, later, you might map that same alias to a different resource file so you can change a resource without changing any of your source code). Neat!

Let’s look at some examples. Figure 1 shows an example in which all QML resources are in a directory called “qml_dir,” and all images are in a directory called “image_dir.” In the QRC file (the right half of the figure shows Qt Creator’s QRC editor), the resources are organized into groups based on “prefixes” that you can define yourself. Here, I created three prefixes: ui, images, and icons.

Figure 1: Resources as they are arranged on the file system (left side) and in the QRC file (right side)

The URL that you will use has the following general form: qrc:/prefix/file_path_relative_to_qrc_file

Based on the organization of my QRC file, I can refer to each resource using the following URLs:

File path                 URL
qml_dir/main.qml          qrc:/ui/qml_dir/main.qml
image_dir/background.png  qrc:/images/image_dir/background.png
image_dir/image1.png      qrc:/images/image_dir/image1.png
image_dir/image2.png      qrc:/images/image_dir/image2.png
image_dir/icon1.png       qrc:/icons/image_dir/icon1.png
image_dir/icon2.png       qrc:/icons/image_dir/icon2.png

For example, using a QML Image object to display background.png would look like this:

Image {
    source: "qrc:/images/image_dir/background.png"
}

You also have the option of applying aliases to items in the resource file. Imagine that your graphic design team gives you an image with a name like Main Screen Background (No Transparency)_144p-FINAL v2_11-21-2022 final.png. Perhaps you would black out with rage, and when you awoke, you would recognize this as a great use case for aliases. Let’s give that image a short (readable and type-able) alias, say, bckgnd, which will appear in parentheses at the end of the line in the QRC file (Figure 2):

Figure 2: An alias applied to an image with an absurd file name

Now, instead of typing

Image {
    source: "qrc:/images/image_dir/Main Screen Background (No Transparency)_144p-FINAL v2_11-21-2022 final.png"
}

we can use just the prefix and the alias. The alias takes the place of the relative file path in the URL, so the URL has the form qrc:/prefix/alias. For example:

Image {
    source: "qrc:/images/bckgnd"
}

The next week, when you get Main Screen Background (No Transparency)_144p-FINAL v2_11-21-2022 final FINAL.png, you can update the QRC file with the new file path but keep the same alias (bckgnd). None of your code needs to change at all, because the URL you use to reference that file doesn't need to change!

Returning to my original example, I might apply the following aliases:

Figure 3: Aliases applied to all resources

All resources can now be used in code using nice short URLs:

File path                 URL
qml_dir/main.qml          qrc:/ui/main_screen
image_dir/background.png  qrc:/images/bckgnd
image_dir/image1.png      qrc:/images/img1
image_dir/image2.png      qrc:/images/img2
image_dir/icon1.png       qrc:/icons/start_symbol
image_dir/icon2.png       qrc:/icons/stop_symbol

Why Would I Do This?

For me, the most compelling reason is to manage QML files in a QtQuick application. You probably don’t want users to be exposed to the actual QML files that make up the UI, so I list all my QML files as resources. The application is probably useless without the QML files, so you’re likely to want to ensure that they can always be found, and that they can’t change.

Another use case, as described above, is to provide some abstraction between the URL that you use to refer to resources in code and the file name that it has on disk. This helps you manage files with horrific names and allows you to easily update the file path of an aliased file without changing any of your code.

You might also benefit from the reduced amount of code you need to write & maintain. To load a file from disk, you need to know where to look for it. Is it at an absolute path? If so, how do you know what that path is on all systems? Is it a relative path? Relative to what? Once you know what path to look for, what do you do if the file isn’t there? What do you do if it is there, and there’s a problem with permissions? You can neatly sidestep all those problems if you can just compile the file directly into your app and refer to it with an easy-to-remember URL.

Part 2: Limitations and Gotchas

This is all pretty simple, but there are a few things that can get you tripped up that aren’t addressed clearly in the documentation.

Using a full URL instead of an alias

Let’s look at the example above. We’ve got the following:

File path                image_dir/image2.png
URL (without the alias)  qrc:/images/image_dir/image2.png
URL (using the alias)    qrc:/images/img2

Let’s say you’ve recently learned about aliases, and you applied the img2 alias to image2.png. You changed to the shorter URL in several places in your code, but you missed one instance and left the full (non-aliased) URL somewhere. Qt’s resource URL resolver will fail to locate that resource. Even though the URL used to be valid, and the relative path to the file has not changed, once you apply an alias, you must use it.

I find this unintuitive. The word “alias” to me implies that you can use it, but you don’t have to. Nevertheless, as of Qt 6.2.1, be aware that this is the case. Note that you can have a mix of aliased and not-aliased items in the same QRC file.

Accidental Alias Collisions

Suppose you accidentally apply the same alias to different items in a QRC file. For example, you alias both icon1.png and icon2.png as “start_symbol.” You will be able to use qrc:/icons/start_symbol as usual in your code, and no errors or warnings will appear at compile time or run time. This is, in fact, normal and useful behavior, but if you aren’t checking for it, it could cause a subtle bug.

Why is this normal and useful? Because one of the common uses for resource files is to provide language translation features. Say you have an image of a stop sign for users in the US (“STOP”) and an image of a stop sign for users in Mexico (“ALTO”). You can use the same alias to refer to one of many resources, and Qt will choose the appropriate image based on the user’s configured locale. We won’t dig into it any more here, but for more information on advanced usage and localization, see the Qt documentation.

Multiple QRC files

You might be tempted, as I was, to separate your resources into multiple QRC files. Each QRC file needs to generate a C++ source file, and then that file needs to be compiled, so you might think that it makes sense to put your QML files (which will change frequently as you develop) in their own QRC file separate from icons, images, and other relatively large files that won’t change much. This makes a lot of sense and seems like a perfectly fine practice… as long as you realize that there is no mechanism for addressing a particular QRC file in your URLs.

Recall that the format of a URL is qrc:/prefix/file_path_relative_to_qrc_file or qrc:/prefix/alias. Nowhere in there do you have the option of specifying a particular QRC file, only the prefix and path or alias. It will still work, and Qt will not complain if two QRC files contain the same prefix and the same alias. There’s not a lot of error checking being done, so be careful that you don’t have colliding prefixes or aliases!

Big Files

Recall from the first section that the resource compiler puts the bytes of your files into an array of bytes in a C++ source file. As you might imagine, that array can get pretty big. I have seen some instances (generally on smaller ARM systems) in which the generated (huge) C++ file fails to compile without a terribly helpful error message. In this case, you can direct the resource compiler to skip the C++ source file step and compile your resources directly into an object file.

By default, in CMake, you can just list your resource file with the rest of your sources and tell it to automatically run the resource compiler (rcc):

set(CMAKE_AUTORCC ON)       # Run rcc automatically
set(CPP_SRC main.cpp)       # These are my C++ source files
set(QRC_SRC resources.qrc)  # These are my QRC files
qt_add_executable(${PROJECT_NAME} MANUAL_FINALIZATION ${CPP_SRC} ${QRC_SRC})

To skip the C++ generation step, use the qt_add_big_resources() function:

set(CMAKE_AUTORCC ON)                        # Run rcc automatically
set(CPP_SRC main.cpp)                        # These are my C++ source files
qt_add_big_resources(QRC_SRC resources.qrc)  # These are my QRC files
qt_add_executable(${PROJECT_NAME} MANUAL_FINALIZATION ${CPP_SRC} ${QRC_SRC})

The resource compiler will then produce object files without bothering with the intermediate C++ source. Most of the time, you might as well just use qt_add_big_resources()… you know what’s in the generated C++ source now, and it’s probably not that useful to you.

Summary

Qt’s resource compiler can be a nice way to simplify access to data files in your application. You just need to understand the rules regarding URL formats, and make note of a few potential pitfalls:

  • Nothing’s stopping you from giving multiple things the same alias, so be careful
  • Nothing’s stopping you from having multiple QRC files, and nothing’s stopping you from duplicating prefixes and aliases across multiple files
  • If you applied an alias to a resource, you must use it in the URLs
  • Unless you’re especially interested in looking at the C++ source file generated by the resource compiler, feel free to skip that step and use the qt_add_big_resources() function in CMake

Building a Qt app?

I’d love to help! Give us a call or send us an email to discuss!

The post A Brief Tutorial on Qt’s Resource Files appeared first on DMC, Inc..

Advantages of .NET and Python for Test & Measurement Applications https://www.dmcinfo.com/blog/18030/advantages-of-net-and-python-for-test-measurement-applications/ Wed, 07 Dec 2022 16:57:14 +0000 https://www.dmcinfo.com/blog/18030/advantages-of-net-and-python-for-test-measurement-applications/ Prologue In May 2019, at what would become the last NI Week ever, I led a session called “Learning to Love Text Again With Measurement Studio.” It was scheduled for 8:30 AM on the last day of the conference, so I was surprised that so many travel-weary engineers stumbled in, red-eyed, with clumps of taco […]

Prologue

In May 2019, at what would become the last NI Week ever, I led a session called “Learning to Love Text Again With Measurement Studio.” It was scheduled for 8:30 AM on the last day of the conference, so I was surprised that so many travel-weary engineers stumbled in, red-eyed, with clumps of taco still stuck in their hair, ready to listen to me talk about what I thought was a fairly niche topic. I was wrong! The room was filled with enthusiasm, although some of it was already about where to get lunch.

It is now December 2022. NI Week is gone, but the Test & Measurement community is still hungry for Austin’s spectacular tacos and alternative software development platforms. A mere three-and-a-half years later, I have finally found time to convert that presentation into a series of blog posts for those of you who weren’t there (or who slept through it). The intent is to motivate the use of Python or Microsoft’s .NET platform as programming environments for Test & Measurement Automation, and to provide you with some guidance toward getting started.

Introduction

The purpose of this post is to describe the advantages of Python and .NET software development for Test & Measurement applications. The de facto standard is generally National Instruments LabVIEW, due to the shallow learning curve and the quality and feature set of NI hardware; however, NI is also very good about offering hardware APIs for other languages — which gives you the flexibility to choose another option if it’s the right tool for the job. This post will describe why, in certain circumstances, the right tool may be Python or .NET.

In a separate post, I also offer a brief overview of NI’s Measurement Studio — which is a very useful set of tools that makes it easier for LabVIEW developers to get started with .NET.

Measurement Studio offers:

  • .NET classes and functions analogous to LabVIEW’s data analysis VIs
  • .NET data types analogous to those used by LabVIEW (e.g., analog and digital waveform types)
  • A .NET API for creating and managing TDMS files
  • Controls, indicators, and graph elements that can be used in .NET UIs

If I make a compelling case here and you’d like to try building your next Test & Measurement application in .NET, I recommend getting started with the free trial of Measurement Studio. The familiar tools that it provides will help you leverage your existing LabVIEW knowledge to work efficiently in a new environment.

Strengths of LabVIEW Development

From the beginning, NI’s mission for LabVIEW was to make it easy for scientists and engineers to build Test & Measurement applications. Since the mid-80s, the guiding principles of NI’s LabVIEW investment were to:

  • Enable fast, easy software development
  • Make hardware integration as simple as possible
  • Provide a standard library with a focus on engineering functionality
  • Lower the barrier to producing graphical user interfaces

For these reasons, there are some use cases for which LabVIEW is an obvious win. If you need to get an application up and running quickly, if it needs to acquire, process, and visualize data, and if it will neither be deployed widely nor maintained for a very long time, then you will likely benefit from the productivity of developing in LabVIEW.

Where Can We Do Better?

Larger projects require larger teams in order to meet delivery deadlines or maintain the software throughout its lifecycle. However, larger teams need to be able to work in parallel without stepping on each other’s toes. Beyond the initial delivery, maintaining the software can become a challenge as it ages and evolves and as developers drift in and out of the project. Here, we will discuss some advantages to Python and C# that reduce the complexity of maintaining a large software project throughout its lifecycle.

Enabling Technology

Oddly enough, the graphical nature of LabVIEW, which makes it one of the most beginner-friendly programming languages, also makes it extremely difficult to compare two pieces of code. It is easy to navigate and visually parse LabVIEW code, but very difficult to graphically represent the difference between two pieces of code.

Consider the case in which you wrote a VI and shared it with a colleague. If that colleague edited it and gave you a new version, how would you find all of the differences? You can hunt them down on your own, but you’ll need to search every case structure and check the default value of every control. Even if it were a simple VI that could be visually compared easily, you’d have to identify each change as either functional or cosmetic (i.e., an additional wire bend is a difference, but an inconsequential one). Some automated tools exist, but they tend to be hard to configure and use. Furthermore, once all of the differences are identified, how do you visualize them concisely?

Text source code is much more limited in terms of layout. It’s very easy for a computer to parse two chunks of text, compare them, and produce a simple visualization of the differences (often referred to as a “diff”). Simple as it may seem, this enables a substantial number of tools and techniques for managing source code that are not available for complex binary files like VIs. This is critical for code reviews as a quality assurance technique. Senior developers can easily review only the diffs, which are both concise and automatically generated.
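To make that concrete, here’s a small sketch using Python’s standard-library difflib to generate a unified diff between two versions of a snippet (the snippet itself is made up for illustration):

```python
import difflib

old = ["total = 0", "for x in data:", "    total += x", "print(total)"]
new = ["total = 0", "for x in data:", "    total += x * 2", "print(total)"]

# unified_diff yields context lines (prefixed with a space), removals ('-'),
# and additions ('+'), plus file headers and an @@ hunk marker
diff = list(difflib.unified_diff(old, new, fromfile="before.py", tofile="after.py", lineterm=""))
print("\n".join(diff))
```

Only the changed lines (plus a little context) need to be reviewed, which is exactly the view that tools like GitLab render in their web UI.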

Multi-developer Workflows

Being able to see only the differences in two chunks of code is important for quality assurance, but it gets even better: a diff can also be used to easily merge changes into source code. This is a major enabler for multi-developer scenarios because it means that two or more developers can make changes to different parts of the same source file, and those changes can be blended together easily (i.e., they won’t collide unless there are different variations of the same lines of text).

DMC’s platform of choice is GitLab, which is a web-based software development management tool.

GitLab provides:

  • A revision-control system (git)
  • Source code navigation, viewing, and comparing
  • Organized methods for users to track issues and resolutions
  • Automations for testing and building applications
  • Different user roles that enable better collaboration with both internal and external teams
  • Many other features

Of particular importance is the issue resolution workflow, which can be separated into individual workflows for different user roles, such as:

In this diagram, we have several people working in parallel (from top to bottom): a technical lead, a few developers, and anyone else with access to the source. Anyone with access to the source code repository can test the code and report issues (bottom). Developers (center) can select a reported issue, create a branch of code dedicated to its resolution, resolve the issue, and submit a “merge request.” The technical lead of the project (top) can then review those merge requests, ensure the code meets quality standards (by viewing the diff of that branch with the main line of development), and merge the updated code into the main line of development independently and in parallel with the developers.

We have found that the cadence of code reviews is easier to maintain with this workflow. GitLab serves as a portal to view only the parts of the code that have changed and allows you to discuss those changes with the developer (asynchronously, via the web interface) before merging them. Instead of sitting down in a conference room and having the developer walk the technical lead through all the changes in a branch, the technical lead can simply view a diff of those changes and enter comments or suggestions that the developer can then address later (as shown below). Once all comments have been addressed and the technical lead is satisfied with the changes, they can be merged.

The GitLab comparison tool shows the source code diff and allows a code reviewer to leave comments & questions for the code's author.

This substantially improves the code review workflow, allowing the tech lead and developers to work asynchronously, communicate effectively, and collaborate productively.

Separation of Concerns

One objective of good software design is “separation of concerns.” One must break down the problem into smaller and smaller problems, and then those into even smaller problems, continuing until there is a tree of convenient fun-size problems that can each be solved easily. Implicit in this methodology is the idea that each problem should be independent of the others. The programmer should establish clear interfaces between problems to keep them logically separate.

If you haven’t heard of SOLID design principles, consider reading through a blog post by my colleague and LabVIEW aficionado Steven Dusing. The “S” in SOLID stands for “single responsibility,” meaning that, as you separate your concerns into smaller, more manageable problems, you should end up with chunks of code that do one thing and one thing only.

If different pieces of code are truly separated, you can even delegate them to people with different skill sets. Wouldn’t it be nice to have a UI designer handle the graphical layout of your application while engineers develop the business logic? A LabVIEW VI has a user interface (front panel) that is inextricably tied to the business logic (block diagram). This makes it dirt-simple to produce a GUI application, but it also makes it very hard to have a complex UI that is separate from the business logic and can be developed by a different person.

As an example, consider WPF, one of the graphical frameworks available on the .NET platform. The UI layout is specified with an XML-style language completely separately from the run-time logic. Since the graphical layout is specified as a text language, diffs are supported for easy review, and, since it exists in its own file, your UI/UX designer can work in parallel with the engineering team. We have effectively separated our concerns: UI things get handled in a UI file by a UI expert, and business logic things get handled in C# code by C# experts.
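For a flavor of what that separation looks like, here is a minimal hypothetical XAML fragment; it declares layout and data bindings only, while the SampleRate property and StartCommand it binds to live in C# code written by someone else:

```xml
<StackPanel>
    <!-- Layout and styling live here, owned by the UI designer -->
    <TextBox Text="{Binding SampleRate}" Width="120" />
    <!-- The command's implementation lives in the C# view model -->
    <Button Content="Start Acquisition" Command="{Binding StartCommand}" />
</StackPanel>
```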

Outside of the .NET framework, various graphical toolkits are available, but my personal preference is Qt — which is available under the terms of the LGPL (mostly). Qt provides QML, which is conceptually similar to XAML (i.e., like XAML, QML is a declarative UI description language), and gets used in the same way to separate your UI design from business logic. Business logic can be implemented in Python, C++, or some other languages with third-party bindings to Qt.

Best Practices for Software Development

Object-oriented design principles were introduced in LabVIEW 8.2, which was a difficult balancing act. NI’s team aimed to make Object-Oriented Programming (OOP) accessible to scientists and engineers who didn’t necessarily have a computer scientist’s background, without compromising the fundamentals on which LabVIEW was built.

This was done out of a recognition that:

Object-oriented programming has demonstrated its superiority over procedural programming as an architecture choice in several programming languages. It encourages clear divisions between sections of the code, it is easier to debug, and it scales better for large programming teams. LabVIEW R&D wanted this power to be accessible by our customers. We wanted the language to be able to enforce some of these software best-practices.

There are a lot of compelling reasons to use an object-oriented (OO) approach, and NI needed to balance that with the need to maintain the concept of dataflow, and a recognition that this might not be the right approach for every person, every project, and every situation. Consequently, the implementation of OO in LabVIEW is very useful and very easy to use, but it is substantially different from traditional OO. For larger projects with larger teams, leveraging the scalability and modularity of a truly object-oriented language can lead to a better product and a more efficient development experience.

Deployment

The three languages we’ve been discussing most (LabVIEW, C#, and Python) all need to run in the context of some set of libraries on the target machine. That may be the LabVIEW Runtime Engine, the .NET Framework, or the Python interpreter, respectively.

Applications that will be widely distributed generally benefit from the fact that all Windows installations either already have the .NET runtime installed or make it easy to install automatically. Python even has the advantage of virtual environments, which allow multiple Python interpreters to co-exist on a system and let each project have its own set of dependencies installed without any of them colliding with the others. Virtual environments are important for development, but they can also be used to ensure that applications are installed consistently across various systems.
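As a sketch, the standard-library venv module creates such an environment programmatically (the same thing `python -m venv` does from the command line):

```python
import sys
import venv
from pathlib import Path

# Create an isolated environment in ./.venv
# (with_pip=False keeps this fast and avoids any network access)
venv.EnvBuilder(with_pip=False).create(".venv")

# The environment gets its own interpreter, separate from the system one
bin_dir = "Scripts" if sys.platform == "win32" else "bin"
env_python = Path(".venv") / bin_dir / ("python.exe" if sys.platform == "win32" else "python")
print(env_python.exists())  # → True
```

Packages installed with that interpreter land inside .venv rather than system-wide.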

For large development projects, the deployment procedure should be considered early. In many cases, the .NET framework is already installed and available on Windows machines, and if the target will be Linux, Python is widely available and often distributed by default with an OS installation.

Community Support and Engagement

The largest and most active development communities generally produce some of the most useful third-party tools, often under open-source license terms. Based on the relatively informal TIOBE index, both C# and Python land in the top 5 most searched programming languages. Using that as a proxy for the activity of their respective communities, it is not surprising that the package managers for these platforms runneth over with useful tools and libraries that can be readily used in your projects.

The .NET framework provides a package management system called NuGet, which currently hosts over 330,000 unique packages. For Python, the pip module is used to connect to pypi (over 420,000 unique packages) or other package repositories. While LabVIEW does have both JKI’s VI Package Manager (with over 1000 packages) and NI’s own package manager, neither offers the same level of community engagement.

As projects get larger, the availability and quality of reusable tools and libraries can help drive down the amount of code you need to maintain yourself.

Summary

In recent years, NI has made substantial investments in support for .NET and Python. While LabVIEW continues to be a good option for some Test & Measurement applications, many (particularly large projects) could benefit from different tools.

For some engineers, the learning curve of training up on a language other than LabVIEW may be prohibitive. In these cases, I recommend considering NI Measurement Studio as a stepping stone. Click here to read my overview of how Measurement Studio can fill the gaps between LabVIEW and C#.

Learn more about DMC's Test & Measurement Automation solutions, and contact us today for your next project.

The post Advantages of .NET and Python for Test & Measurement Applications appeared first on DMC, Inc..

Measurement Studio: .NET Programming for NI Enthusiasts https://www.dmcinfo.com/blog/18038/measurement-studio-net-programming-for-ni-enthusiasts/ Tue, 06 Dec 2022 11:56:45 +0000 https://www.dmcinfo.com/blog/18038/measurement-studio-net-programming-for-ni-enthusiasts/ Prologue In May 2019, at what would become the last NI Week ever, I led a session called “Learning to Love Text Again With Measurement Studio.” It was scheduled for 8:30 AM on the last day of the conference, so I was surprised that so many travel-weary engineers stumbled in, red-eyed, with clumps of taco […]

Prologue

In May 2019, at what would become the last NI Week ever, I led a session called “Learning to Love Text Again With Measurement Studio.” It was scheduled for 8:30 AM on the last day of the conference, so I was surprised that so many travel-weary engineers stumbled in, red-eyed, with clumps of taco still stuck in their hair, ready to listen to me talk about what I thought was a fairly niche topic. I was wrong! The room was filled with enthusiasm, although some of it was already about where to get lunch.

It is now December 2022. NI Week is gone, but the Test & Measurement community is still hungry for Austin’s spectacular tacos and alternative software development platforms. A mere three-and-a-half years later, I have finally found time to convert that presentation into a series of blog posts for those of you who weren’t there (or who slept through it). The intent is to motivate the use of Python or Microsoft’s .NET platform as programming environments for Test & Measurement Automation, and to provide you with some guidance toward getting started.

Introduction

This post will focus on NI’s Measurement Studio, what it is, and why it’s useful. In another post, I will write about the procedural advantages (code reviews, separation of concerns, multi-developer teams) of developing Test & Measurement applications in .NET or Python. In yet another, I will write about the abundance of tools available to you in those environments (like ORM, web applications, etc.).

Motivation: Using the Right Tool for the Job

As engineers, we know that there are many ways of doing a thing, and that there are many tradeoffs that need to be considered before deciding how the thing will get done. Programming languages are a dime a dozen, and they all pretty much do the same thing. So, if you’ve already got LabVIEW competency, why invest the time in learning Microsoft’s (extremely large, complicated, and intimidating) .NET stack? On the face of it, that doesn’t seem like a worthwhile tradeoff if both languages get the thing done.

LabVIEW’s dataflow language is a great way for engineers without programming experience to write data-acquisition software quickly, without fear of encountering some freakish horror like:

/usr/lib/../lib/crt1.o: In function `_start': (.text+0x20): undefined reference to `main'

This is one reason why LabVIEW will always occupy a 32×32 pixel area of our hearts: quickly acquiring and visualizing data just doesn’t get any easier; however, as software reaches a certain level of maturity or complexity, other development environments can substantially simplify the management of the software’s lifecycle. DMC develops extremely large Test & Measurement applications with deep inheritance hierarchies and complex sets of dependencies. As a project gets larger and larger, it becomes more and more difficult to manage LabVIEW code, work on it with a team, control & assure its quality, and deploy it to customers.

Over the last several decades, the software community (separately from the engineering community) built itself many tools to achieve those same goals. As the two communities have converged over the years, their tools have become available off-the-shelf not just for the software developers, but for the mechanical engineers, the electrical engineers, and everyone else. Test & Measurement applications can stand to benefit from improved processes that already exist thanks to those tools. For example, .NET, Python, and other languages offer:

  • The humble and mighty diff (an easy way to compare two versions of textual source code)
  • Tools for multi-developer scenarios (for example, GitLab and GitHub — which work much better with text than with VIs)
  • Tools for code quality (such as linters, formatters, and so on)
  • Huge libraries of third-party packages
  • Really sweet dynamic UIs
  • More opportunities to adhere to good programming practices with object-oriented programming paradigms

Note: again, there will always be a time and a place for LabVIEW, but it is not “all the time” and “everywhere.”

What’s the Problem, Then?

The problem is that you’re busy, and you don’t have time for this.

But what if I told you that getting started with .NET doesn’t mean you have to start from scratch? What if I told you that NI predicted this traumatic event and already provided the tools that you, a LabVIEW developer, would need to productively develop an application in C#?

Measurement Studio

A lot of people familiar with NI’s ecosystem have never used Measurement Studio and don’t really know what it is. Here’s what it is not:

  • Measurement Studio is not a new language (it’s regular ol’ C#)
  • Measurement Studio is not a new IDE (it’s regular ol’ Microsoft Visual Studio)
  • Measurement Studio is not the same as LabWindows/CVI
  • Measurement Studio is not a set of hardware drivers

What Measurement Studio is, however, is the bridge between LabVIEW and .NET. It is a set of tools that are analogous to the ones you’re used to in LabVIEW, so you can take everything you know about LabVIEW programming and easily transition it to C#.

Same Concepts, Different Angle

You already know how to acquire data from DAQmx in LabVIEW:

Typical usage of the LabVIEW DAQmx API.

If you wanted to do the same thing in .NET, your code would conceptually be the same, just… rotated 90 degrees clockwise:

The .NET API for DAQmx is directly analogous to the LabVIEW API.

As you can see, NI provides a consistent API for DAQmx across multiple languages. There’s the familiar LabVIEW API for DAQmx and a parallel .NET API (i.e., there’s basically a one-to-one mapping of LabVIEW VIs to class methods in C#), as shown above. For Python fans, I would also recommend NI’s Python API for DAQmx, which we’ve also used.

Hardware Drivers are Already Available!

Any time you install the DAQmx drivers, you have the option of installing the .NET language bindings, too. The DAQmx drivers are not a part of Measurement Studio (they come with all NI hardware that supports DAQmx). The same goes for some other drivers, so DAQmx, NI-VISA, and NI-GPIB drivers all have “.NET Development Support” options in the installers — even without Measurement Studio. Some have even been open-sourced and hosted on GitHub, like the Vision Development Module for .NET! Others are available as a separate download, like .NET APIs for NI-Switch, NI-DMM, and NI-FGEN. See NI’s documentation here for more information.

Measurement Studio’s Value-Add

You already have your hardware drivers available, and you already know how to use them, so whether you’re working in LabVIEW or C#, you can acquire some data. Now you need to do stuff to it. In LabVIEW, you know how to do stuff to data. You do stuff to data all day long; it’s literally your job! So, if you’re going to work productively in C#, you’ll need to know how to do the same things. This is where Measurement Studio makes your life substantially easier. Once you install Measurement Studio, you will have access to .NET libraries that provide the same functionality that you’re already using in LabVIEW, and they’re even organized in the same way:

The NationalInstruments namespace includes APIs for data acquisition, data analysis, TDMS logging, and more.

Butterworth Filter Example

The parallels extend down to individual VIs, which typically map to a single .NET function or class. For example, consider the Butterworth Filter VI.

The analogous .NET library is provided by Measurement Studio and would be used like this in C#:

using NationalInstruments.Analysis.Dsp.Filters;

// order, fs, and fh correspond to the inputs you'd wire to the VI
var newFilter = new ButterworthLowpassFilter(order, fs, fh);
var filteredX = newFilter.FilterData(X);

You simply create a ButterworthLowpassFilter object with the same order, fs, and fh parameters that you’d wire to the VI. You then call that object’s FilterData() method on the input array (X), and it gives you the filtered output array (filteredX).

There’s a specific lowpass filter class, so you don’t need the filter type input, and the fl value isn’t applicable. Nor do you need the init/cont input (the nature of object-oriented programming means you don’t need to manipulate the filter’s state at the same time you’re trying to use it). I would make the bold claim that, while this may seem unfamiliar at first, the VI is trying to do too much, so the C# code is ultimately more intuitive & readable than the VI is!
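For intuition about what the filter object is doing under the hood, here is a stdlib-only Python sketch of a second-order Butterworth low-pass in the textbook biquad form. This is illustrative math, not NI’s implementation — the library handles higher orders and state management for you:

```python
import math

def butterworth_lowpass_biquad(fc, fs):
    """2nd-order Butterworth low-pass coefficients via the bilinear transform
    (textbook biquad form with Q = 1/sqrt(2))."""
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * (1 / math.sqrt(2)))
    cos_w0 = math.cos(w0)
    b = [(1 - cos_w0) / 2, 1 - cos_w0, (1 - cos_w0) / 2]
    a = [1 + alpha, -2 * cos_w0, 1 - alpha]
    # Normalize so a[0] == 1
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def filter_data(b, a, x):
    """Direct Form I difference equation:
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]"""
    y = []
    for n in range(len(x)):
        xn1 = x[n - 1] if n >= 1 else 0.0
        xn2 = x[n - 2] if n >= 2 else 0.0
        yn1 = y[n - 1] if n >= 1 else 0.0
        yn2 = y[n - 2] if n >= 2 else 0.0
        y.append(b[0] * x[n] + b[1] * xn1 + b[2] * xn2 - a[1] * yn1 - a[2] * yn2)
    return y

b, a = butterworth_lowpass_biquad(fc=100.0, fs=1000.0)
steady = filter_data(b, a, [1.0] * 200)[-1]  # step response settles at unity DC gain
print(round(steady, 3))  # → 1.0
```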

TDMS example

Whether you’re using LabVIEW or C# to glue the parts together, you’ve acquired your data with DAQmx, filtered it with a Butterworth filter, and now you need to store that data in a TDMS file. In LabVIEW, you might have something like:

Typical usage of the LabVIEW TDMS API.

And the equivalent C# code, using libraries provided by Measurement Studio, is:

using NationalInstruments.Tdms;
var tdmsFile = new TdmsFile(@"C:\filePath.tdms", new TdmsFileOptions());
var channel = tdmsFile.AddChannelGroup("DataGroup").AddChannel("DataChannel", TdmsDataType.Double);
channel.AppendData<double>(123.45);
tdmsFile.Close();

Unlike hardware drivers, which already have .NET language bindings available in their installers, the data analysis and TDMS methods in C# are provided by Measurement Studio.

What I hope is evident from these examples is that you don’t need to start from scratch when you use Measurement Studio. It provides equivalent classes & functions to the VIs you already know you want. Measurement Studio is just a parallel universe where the only difference is that you develop data acquisition software with your keyboard instead of your mouse!

Debugging

The code itself is only one piece of the puzzle. The IDE is a big part of the programming experience, too. As I mentioned before, Measurement Studio is not a new language or a new IDE. The language is just standard C#, and Measurement Studio installs a few extensions to Microsoft’s Visual Studio IDE, which is an extremely powerful programming and debugging environment.

It’s easy to acquire, filter, and write your data to disk, but, as you develop your application, you’ll eventually need to debug a problem. In LabVIEW, you’ve got your trusty probes, breakpoints, and execution highlighting. What do you do in Visual Studio? Exactly the same things!

You can easily add breakpoints (yes, conditional ones, too) and step through your code line-by-line. This is standard in Visual Studio (and virtually all IDEs).

Microsoft Visual Studio's breakpoint feature

Since Measurement Studio’s .NET class libraries provide data types like the ones you’re familiar with (for example, AnalogWaveform), it also provides “data tips” so that the debugger can inspect (“probe”) those data types while you’re debugging.

Visual Studio brings some of its own fun debugging tricks, too. For example, you can:

  • Edit code while it’s running in debug mode!
  • Change the flow of execution (i.e., manually move to particular lines of code) while you’re debugging!
  • Change the values of variables in memory while you’re debugging!

Visualization

When it comes to graphs & charts, nothing’s easier than LabVIEW. In Test & Measurement applications, you’re almost certainly going to need to plot some data, and Measurement Studio makes it easy.

The current standard for GUI development in .NET is WPF, which provides standard controls and indicators like buttons, text boxes, sliders, etc.; however, as a LabVIEW developer, you’re going to want some LEDs, some touchscreen-friendly switches, and some high-visibility dial gauges alongside your graphs and charts. Measurement Studio has you covered here, too, by providing familiar controls and indicators for your WPF user interface:

A sample of WPF controls. Some are provided by the .NET framework, others are provided by Measurement Studio.

And, as a bonus, WPF provides plenty of customization and styling options — as well as hardware-accelerated vector graphics and easy re-sizing.

Summary

Measurement Studio gives you the following tools to help ease your transition from LabVIEW to .NET:

  • Convenient NI-style controls, indicators, and graphs that you’re used to
  • Convenient NI-style data types that you’re used to
  • Convenient NI-style APIs for all the analysis functions you’re used to
  • A convenient NI-style TDMS API that you’re used to

These tools can help you continue to work productively as you migrate from LabVIEW to an unfamiliar programming environment. With all these convenient, NI-style tools available, you can ease right into it.

All I’ve done here is talk about how Measurement Studio can help if you choose to develop your application in .NET instead of LabVIEW. We haven’t even discussed why you would choose that! If this post has made C# seem more accessible with Measurement Studio but you’re still not sure why it’s worth trying, then I would suggest reading through this related post to give you more insight on how platforms like .NET and Python can help you build and maintain your Test & Measurement software.

Learn more about DMC’s Test & Measurement Automation solutions, and contact us today for your next project.

The post Measurement Studio: .NET Programming for NI Enthusiasts appeared first on DMC, Inc..

]]>
NUnit Testing and Using Moq in C# https://www.dmcinfo.com/blog/18302/nunit-testing-and-using-moq-in-c/ Tue, 30 Aug 2022 11:55:10 +0000 https://www.dmcinfo.com/blog/18302/nunit-testing-and-using-moq-in-c/ *To the tune of Willy Wonka singing* Come with me, and you’ll be, in a world of unit testing informationnnnn. Unit testing! Unit testing is a great way to ensure that any updates or new functionality added to your code runs smoothly. With well-written tests, you can catch anything that may have been broken by changed […]

The post NUnit Testing and Using Moq in C# appeared first on DMC, Inc..

]]>
*To the tune of Willy Wonka singing*
Come with me, and you’ll be, in a world of unit testing informationnnnn.

Unit testing! Unit testing is a great way to ensure that any updates or new functionality added to your code runs smoothly. With well-written tests, you can catch anything that may have been broken by changed methods. 

Getting started can be a little tricky as there are some caveats and neat tricks that are hard to identify at first. In this blog you’ll learn how to get started with NUnit unit testing in C#, use Moq to help enhance these tests, and get testing like a pro. 

Getting Started

To begin, open your project in Visual Studio Enterprise. If the project is opened in a Community or other edition of Visual Studio, you will not be able to view specific breakdowns of the code coverage by section. If you are not concerned with looking at code coverage, this shouldn’t be an issue.

Once the project is opened, select “Test” in the top menu and navigate down to “Test Explorer” to view a layout of all tests.

 

Navigating to the Test explorer in Visual Studio Enterprise

This should open the Test Explorer for you.

The Test Explorer

The Test Explorer is where you can view all the tests built in your project. To run all the tests at once, select the multi-layered play button in the top left corner. To run individual groups of tests, you can open nested tests and run either groups or individual tests by using the right-hand play button shown in the snapshot below.

Once you run all the tests using the play button in the top left corner, you will be able to view the passing or failing status of the test grouping based on the icon next to the grouping. 

In this case, all our tests have passed and have a green check mark next to them. If a test inside the grouping fails, the grouping will be marked by a red X. 

You can also use the highlighted icons to filter by passing tests, failing tests, or tests that are not run.

Where Tests Are Located

To locate tests, drill down into the Test Explorer and double-click a test. This will take you to its location in your project.

Viewing Code Coverage

To view the code coverage, once you have successfully opened the Test Explorer and run all your tests, return to the “Test” menu and select “Analyze Code Coverage for All Tests.”
 
This will bring up the “Code Coverage Results” window, which you can drill down into to view coverage by sections of the project.

Installing Necessary NuGet Packages

In order to run the tests, you will need to have a few NuGet packages installed in each section of the project with tests present. In this case, tests are present in both Mars.NUnit and in NUnit under Reports, so we will want our packages installed in both sections.

To do this, right-click on the portion of the project where you would like to install the packages and select “Manage NuGet Packages."

From here, the packages you will need to install are:

  • Microsoft.CodeCoverage
  • Microsoft.NET.Test.Sdk
  • MSTest.TestAdapter
  • NUnit
  • NUnit3TestAdapter

These packages and their respective versions are also listed in the screenshot below.

Writing your Tests and Using Moq

Writing unit tests is straightforward; the process can be as simple or as complex as you would like. For example, I created a method called AddTwo, which does exactly what the name implies: adds two to my input. 

Writing tests using Moq

I also wrote a test with three test cases. This checks that when I add two my answer is what I expected.

Writing tests using Moq

As you can see, this is straightforward, and my tests are all passing. Let’s say, however, I had a method called ‘AddThree’ that depended on a function ‘AddOne,’ and ‘AddOne’ had either not been completed or was still in development. 

This situation is a great example of where we can use Moq to make our lives easier. As shown below, I’ve created my function AddThree():

Writing tests using Moq

I’ve also defined AddOne as a virtual method so that we can use Moq to mock it. If your method is private, or is not virtual, you won’t be able to mock it with Moq. I’ve now written a new test called TestAddThreeMethod and used Moq to mock a call of our AddOne method.

In this case, I’ve updated my AddOne function to erroneously try and add 5 to my function, which would throw off our expected addition of just one.

Writing tests using Moq

Using Moq, we can get around the AddOne method that our AddThree method uses and isolate AddThree, only testing AddThree's functionality.

Writing tests using Moq

What I’ve done in the above image is set up a Mock class of my AddingFunctionsClass. In the line below, I’ve specified that whenever AddOne gets called on my mockFunctionClass, it will instead return what I have entered in my Returns(), which is my number + 1, the correct output of AddOne. 

You could also hardcode AddOne to return a single value each time; however, our tests would then no longer pass.

Using SetupSequence

SetupSequence is another powerful tool in Moq. Let’s say we have a method that gets called multiple times in a function, but we want it to return a different value each time it is called. This can be accomplished using SetupSequence. For example, we could set our AddOne method to return different results for each call, as shown below.

Using SetupSequence in Moq

If we had called AddOne three times in this scenario, the first time it would return our input plus one, then 3, then 5. 

Another handy feature of Moq is the ability to skip over methods altogether. Let’s say you have a method that doesn’t return a result but does some initialization work. You can follow the same process and simply not set a return for the method; the method will then be skipped any time it is called. 

 

Using SetupSequence in Moq

This statement says any time we call AddOne, don’t return anything, and don’t run AddOne either.

These are some basics and helpful tricks to get you going with Moq. There are many more options that can make Moq a powerful tool. With these tools, you’ll be able to write effective, useful tests quickly and make sure that your code runs smoothly and as expected!


Troubleshooting

It is possible that, after unzipping and building the project, the tests will refuse to run. Here are some common tricks we used to ensure the tests compiled properly:

  • Attempt a Rebuild of the whole project and then try running all tests again.
  • Attempt a Clean of the whole project and then build.
  • Try uninstalling the NUnit and NUnit3TestAdapter NuGet packages and cleaning the project. Reinstall the packages, rebuild the full project, and then try running all tests again.
  • Some tests may not run after errors about other sections in the Reports folder. Try building these sections in the Reports folder individually, doing a Build All, then running the tests.

Learn more about DMC's C# programming services and contact us today for your next project!

The post NUnit Testing and Using Moq in C# appeared first on DMC, Inc..

]]>
6 Tips for Working with Legacy Code That You Did Not Write https://www.dmcinfo.com/blog/19746/6-tips-for-working-with-legacy-code-that-you-did-not-write/ Tue, 08 Sep 2020 12:09:30 +0000 https://www.dmcinfo.com/blog/19746/6-tips-for-working-with-legacy-code-that-you-did-not-write/ There may come a time as a programmer that you have to support a crucial existing system and while the task is daunting, there are still many actions you can take to make that process a lot easier. When you have to work with a deep expansive legacy system it’s easy to get lost in […]

The post 6 Tips for Working with Legacy Code That You Did Not Write appeared first on DMC, Inc..

]]>
There may come a time as a programmer when you have to support a crucial existing system. While the task is daunting, there are many actions you can take to make the process a lot easier. When you work with a deep, expansive legacy system, it’s easy to get lost in the details. Here are six tips that would have saved me, and hopefully will save you, some time when getting introduced to a legacy system, whether web or desktop.

1. Ask as Many Questions as Possible

One of the fastest ways of getting familiar with legacy code is simply to ask the previous developer a lot of questions, if that is possible. Starting off, you might not know all the specific questions to ask about the code. Ideally, you should ask about the overall structure and algorithms of the program. This helps you build a more defined mental map and lets you quickly identify places to edit or change when you need to implement new features.

It is also valuable to ask about places where the previous developer had difficulty, or code they believe is not as robust. There is no reason to resolve an issue twice if the previous developer already identified the problem. Simply keeping this information in the back of your mind whenever you write new code that interacts with that section can make debugging smoother. However, not everyone has the luxury of having the previous developer available; in that case, you can try your luck with the client or user base actively using the code for any history they may remember regarding bugs or already-implemented fixes.

If neither is possible, there are still many things you can do to make working with legacy code a lot less painful, which I will cover in the following tips.

2. Document Call Hierarchy

When working with a large, sprawling piece of code, it can be difficult to trace how a function or procedure works, especially when you have to make edits whose far-reaching effects you need to account for. You can create a simple call hierarchy by noting down the specific function you call and all the subsequent calls that function makes, in sequential order.

For instance, if you have a function Run which calls function GetItems and later on SortItems, you can keep track of the process and all the necessary places you have to change the code to maintain functionality. This is especially true if there are further calls in the subfunctions such as GetItems calling its own subfunction of LoadFile. This process of documenting the hierarchy should be kept relatively simple and easy to read, something as simple as the name of the function should work.

  • Run
    • GetItems
      • LoadFile
    • SortItems
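As a purely illustrative sketch, the hierarchy above might correspond to functions like the following; the bodies are invented placeholders, and only the call structure mirrors the notes:

```typescript
// Hypothetical sketch of the documented call hierarchy.
// Only the call structure matters; the bodies are placeholders.
function loadFile(): string[] {
  // Stands in for reading items from a file.
  return ["banana", "apple", "cherry"];
}

function getItems(): string[] {
  // GetItems -> LoadFile
  return loadFile();
}

function sortItems(items: string[]): string[] {
  // SortItems: returns a sorted copy.
  return [...items].sort();
}

function run(): string[] {
  // Run -> GetItems, then Run -> SortItems
  return sortItems(getItems());
}
```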

A key point is that you should not go in-depth as to what each function does, especially if you are looking at the code for the first time. The main reason you want to avoid that is that by documenting excessively, you might develop biases or assumptions that are initially inaccurate.

After working with the legacy code, you’ll discover nuances and small details that you may have overlooked in the first pass through that change how the process runs, like a flag being set on a variable or specific timings that may have been missed.

Keeping the documentation of the hierarchy simple will allow you to identify and understand the structure while avoiding the common pitfalls and red herrings that come from being unacquainted with the code. As you get familiar with the code, you can develop more diagrams and discover how your new or edited code interacts with the rest of the system.

3. Double Check Your Assumptions

A subsequent tip, briefly touched on above, is that you may develop assumptions after your first pass through the code. When bugs occur and you have reached a dead end, one of the first things you should do is re-check your initial assumptions. In most cases, bugs introduced while editing legacy code come from a mismatch between what you believe something does and what it actually does. After reviewing the changes you implemented and making sure they work, the next step should always be to double-check the assumptions you previously made.

For example, this can occur in conditional statements, where you believe the code would go into a specific conditional branch but instead goes somewhere else entirely. Another place this can occur is in function calls that have side effects that might have been missed, setting flags in other places of the code.

4. Testing, Testing, Testing

This might be obvious, but after you make any changes, extensive testing should be done to guarantee not only that the new functionality works, but that everything else behaves as it did before the changes. Setting up unit and functional tests ahead of time can go a long way toward saving time and effort by giving you a set of pre-defined test suites to verify that everything is working in tip-top shape.

In cases where the code is too old and setting up test cases does not make sense, adding logging in as many places as possible can be key to debugging potential issues. Debugging with the watch window is still viable, but having logging in place for each event is just as valuable: an issue could occur while you aren't around to watch it happen, and logging will help narrow down exactly where the issue occurred, along with valuable, reproducible context that can help track it down.

For each new addition, there should be, at the very least, a new line of logging or monitoring that stays in place until you have verified that the program is running consistently.
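As a trivial sketch of what "a new line of logging" per event can look like, here is a minimal in-memory logger; the names and log messages are invented for illustration:

```typescript
// Hypothetical minimal logger: timestamped lines collected in memory.
// In a real system you would write these to a file or logging service.
const logLines: string[] = [];

function log(event: string, detail: string): void {
  logLines.push(`${new Date().toISOString()} [${event}] ${detail}`);
}

// Each new addition to the legacy code gets at least one line like this:
log("SortItems", "started with 42 items");
```

Even something this small gives you a trail to follow when an issue occurs while you aren't watching.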

5. Start Small and Go Slow

Starting small goes hand in hand with testing everything. Sometimes a change doesn’t require knowing every single nook and cranny of a large piece of code. In other cases, it requires changing code in many different places that may not have been identified yet. It can get overwhelming to change code in multiple places and remember to do it all.

Making sure to go slow and to look at one section of code at a time can prevent a lot of mistakes. This isn’t just applicable to legacy code; it applies to most aspects of coding.

6. Document Everything

Lastly, you want to document everything, from the discoveries you’ve made while looking at the legacy code to the new additions you’ve added. This can be as simple as notes that you can print out, or comments in the code itself. Documentation not only helps you but also helps future developers who may have to work on this software. This information will give you stability when, months down the line, you have to go back and look at your own work. By setting it in a sort of electronic stone, it legitimizes the changes made to the project, beyond just being key points to remember internally.

The post 6 Tips for Working with Legacy Code That You Did Not Write appeared first on DMC, Inc..

]]>
Sorting Multiple Columns in a Table with React https://www.dmcinfo.com/blog/19887/sorting-multiple-columns-in-a-table-with-react/ Tue, 02 Jun 2020 14:08:49 +0000 https://www.dmcinfo.com/blog/19887/sorting-multiple-columns-in-a-table-with-react/ Tables are a fast way to show a lot of valuable data. There are many guides on sorting tables by a column in React, but sorting multiple columns is something that can take some effort. At the end of this guide, you should have a sortable table where you can click multiple columns. For this […]

The post Sorting Multiple Columns in a Table with React appeared first on DMC, Inc..

]]>
Tables are a fast way to show a lot of valuable data. There are many guides on sorting tables by a column in React, but sorting multiple columns is something that can take some effort. At the end of this guide, you should have a sortable table where you can click multiple columns.

Sorting multiple columns with React

For this tutorial, we will be leveraging LINQ to make sorting quite painless, and optionally Material-UI for styling the table. At the very end, I will also briefly describe a way of implementing this table without LINQ, using only base React.

Tools Used in this Blog:

  • LINQ 
  • Material-UI (Optional)
  • Basic React hooks
  • Functional React

Setup

First, we need some data. For the purposes of this example, I just created some dummy data that we can sort. I also defined an interface for the data type and an enum for departments that a person can belong to.

TypeScript
 const dataList=[
    {name: 'Ryan H.',   hours: 30, startDate: new Date('2019-01-14'), department: Department.Marketing},
    {name: 'Ariel P.',  hours: 22, startDate: new Date('2017-03-12'), department: Department.Sales},
    {name: 'Ryan Y.',   hours: 31, startDate: new Date('2015-09-12'), department: Department.Marketing},
    {name: 'Ed T.',     hours: 22, startDate: new Date('2017-03-12'), department: Department.Engineering},
    {name: 'Matt G.',   hours: 30, startDate: new Date('2017-03-12'), department: Department.Marketing},
    {name: 'Olivia H.', hours: 32, startDate: new Date('2018-05-10'), department: Department.Engineering}] as TableData[];

Our interface is TableData. We also have an enum called Department, which lists the available departments people may be in:

TypeScript
interface TableData{
    name: String,
    hours: number,
    startDate: Date,
    department: Department
}

enum Department{
    Marketing = 'Marketing',
    Sales = 'Sales',
    Engineering = 'Engineering',
}

It’s important to note that this interface and enum are solely for structuring our table, not for sorting. We will eventually add another interface and enum for sorting.

For the sake of simulation, I’ll be placing the data in another file and returning it from a function called fetchData.

TypeScript
export const fetchData = () => { return dataList;}

For the table, we’ll be using Material-UI, but this method will absolutely work with a regular table. If you don’t want to use Material-UI tables, just replace the tags with the corresponding ones below. Everything but TableBody has a corresponding tag; TableBody just contains a collection of <tr> tags.

Material-UI    HTML
TableHead      th
TableBody      (collection of <tr> tags)
TableRow       tr
TableCell      td

Now we pull our data from the other file by calling fetchData, set up the appropriate table headers, and map our data to the corresponding table cells. For the sake of keeping clean code, I moved the sortable table headers to another component, so our code should look somewhat like this:

TypeScript
import React from "react";
import {Table, TableRow, TableCell, TableBody } from '@material-ui/core';
import { fetchData, TableData } from "./fetchData";
import { SortableHeader } from "./sortableTableHeader";

export const TableExample = () => {
    const dataList = fetchData();
    return(
        <div>
            <Table>
                <SortableHeader/>
                <TableBody>
                    {dataList.map((data) => { return(
                        <TableRow>
                            <TableCell>
                                {data.name}
                            </TableCell>
                            <TableCell>
                                {data.hours}
                            </TableCell>
                            <TableCell>
                                {data.startDate.toDateString()}
                            </TableCell>
                            <TableCell>
                                {data.department}
                            </TableCell>
                        </TableRow>
                        )})}
                </TableBody>
            </Table>
        </div>
    )
}
TypeScript
import React from "react";
import { TableHead, TableRow, TableCell } from "@material-ui/core";

export const SortableHeader = () => {
    return (
        <TableHead>
            <TableRow>
                <TableCell>Name</TableCell>
                <TableCell>Hours</TableCell>
                <TableCell>Date</TableCell>
                <TableCell>Department</TableCell>
            </TableRow>
        </TableHead>
    );
};

Which outputs something like this:

Implementation

Our overall goal is to create an ordered queue of sorting configurations that will apply each sorting in order through the use of LINQ to our data. We will then update our data list which will be re-rendered and displayed in the desired sorting configuration.

Step 1: Define a Sorting configuration

A sorting configuration is essentially an object that has two things: the property/column of the list that we’re trying to sort, and the sorting type, such as ascending or descending. Since our rows are all the same object type, TableData, we can use keyof TableData to get all the properties/columns of our table. The only thing we’d have to do is make an enum of the different sorting types that we need; in this case, we’ll stick with the standard ascending/descending.

TypeScript
interface SortingConfiguration{
    propertyName: keyof TableData,
    sortType: SortingType,
}

enum SortingType{
    Ascending,
    Descending,
}

Step 2: Utilize useState to Maintain a List of our Sorting Configurations

If you’ve never used useState, it’s a React hook that returns an array of two things: the first is the current state value being maintained; the second is basically a function/dispatch that you can call to change the state. Here we’re initializing our list of sorting configurations to an empty list, since we want to start off with an unsorted table.

Note: If you do want a table column to be sorted by default, all you need to do is add a SortingConfiguration object between the square brackets.

TypeScript
  const [sortConfig, updateSortConfig] = useState<SortingConfiguration[]>([]);
    //Below is an example of sorting columns by name by default.
    const [initializedSortConfig, updateInitSortConfig] = useState<SortingConfiguration[]>([
        {propertyName: 'name', sortType: SortingType.Descending}
    ]);

Step 3: Create a Function That Adds, Modifies, and Removes Our Sorting Configurations

Most sortable headers are clickable headers that cycle through unsorted -> descending -> ascending -> unsorted -> etc., but there are many different orders or ways in which you may wish to set this up. You should be able to create your own custom function that edits the sorting configuration given three basic criteria:

  1. Add a sorting configuration to a column when there is not an existing configuration
  2. Modify a sorting configuration when there is an existing configuration
    • For example, going from descending to ascending, or vice versa
  3. Provide a way to remove a sorting configuration
    • This can be a part of criteria #2, but it is important enough to warrant its own criteria

We’ll be following a cycle of unsorted -> descending -> ascending -> unsorted -> etc… for the purposes of this guide.

We will be making a function called sortBy that takes in a key of TableData. Basically, whenever we call sortBy on a given property/column, it should automatically fulfill our three criteria: add a new configuration when we don’t have one, alter a configuration if we do, and remove a configuration when we need to. We wrap it in useCallback because, in the next step, we will pass this function down as a prop.

So our basic structure looks like this:

TypeScript
 const sortBy = useCallback(
        (propertyName: keyof TableData) => {
            let pendingChange = [...sortConfig];
            const index = pendingChange.findIndex((config) =>
                config.propertyName === propertyName)
            if(index > -1){
                // Existing configuration
            } else {
                // No existing configuration
            }
            updateSortConfig([...pendingChange]);
        },
        [sortConfig]
    )

We use the spread operator to make a copy of the sort configuration. Then we use findIndex to search our copied list and see if there is an existing configuration in our list. We’ll be using the spread operator a lot, so if you aren’t experienced with it, you can see more examples in this blog by Jacob Bruce. Now that we have the basic structure laid out, let’s address the first criterion. This one is relatively simple, since all we’re doing is adding a sorting configuration given the propertyName that was passed in:

TypeScript
            if(index > -1){
                // Existing configuration
            } else {
                pendingChange = [
                    ...pendingChange,
                    { propertyName: propertyName, sortType: SortingType.Descending },
                ];
            }

Here we use the spread operator to push a new sorting configuration to the end of the queue.
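If the spread operator or findIndex is unfamiliar, here is a minimal, framework-free illustration with hypothetical data:

```typescript
// Hypothetical data illustrating the two array tools used above.
const sortConfig = [{ propertyName: "name" }, { propertyName: "hours" }];

// The spread operator makes a shallow copy: a new array holding the same elements.
const pendingChange = [...sortConfig];

// findIndex returns the position of the first match, or -1 if there is none.
const index = pendingChange.findIndex((config) => config.propertyName === "hours");
const missing = pendingChange.findIndex((config) => config.propertyName === "startDate");
```

Because the copy is shallow, pendingChange is a new array, but its elements are the same objects as in sortConfig.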

Now we have to handle our second and third criteria, which only apply to an existing configuration. We can avoid extra code if, instead of modifying in place, we save the existing sort direction, remove the current configuration, and lastly add a new configuration if a ‘modify’ is necessary. Technically, our ‘modification’ removes a property’s configuration and adds the same property back with a different sortType, but doing it this way means we don’t have to write two similar code branches.

In short, we use the index we found to store the existing sort type. We then remove that configuration using splice. Lastly, we use the saved sorting type to determine whether we need to add a new configuration. In this example, we check if it’s Descending, and if that’s the case, we change the sortType to Ascending. Note how, if the sorting type is Ascending, we do not ‘modify’ our sorting configuration list, but instead just let the code remove the configuration.

Put together we have:

TypeScript
 const sortBy = useCallback(
        (propertyName: keyof TableData) => {
            let pendingChange = [...sortConfig];
            const index = pendingChange.findIndex((config) =>
                config.propertyName === propertyName)
            if(index > -1){
                //Save the sortType
                var currentSortType = pendingChange[index].sortType;
                //Remove existing config
                pendingChange.splice(index, 1);
                //check if the sort type we saved is descending
                if (currentSortType === SortingType.Descending) {
                    pendingChange = [
                        ...pendingChange,
                        { propertyName: propertyName, sortType: SortingType.Ascending },
                    ];
                }
            } else {
                pendingChange = [
                    ...pendingChange,
                    { propertyName: propertyName, sortType: SortingType.Descending },
                ];
            }
            updateSortConfig([...pendingChange]);
        },
        [sortConfig]
    )

Now the loop of unsorted -> descending -> ascending -> unsorted -> etc… is complete.
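Because the cycle logic doesn’t actually need React, it can also be expressed as a pure function, which makes it easy to unit test. The sketch below mirrors the sortBy logic above, with the type names repeated so the snippet is self-contained:

```typescript
enum SortingType { Ascending, Descending }

interface SortingConfiguration {
  propertyName: string;
  sortType: SortingType;
}

// Pure version of the sortBy update: unsorted -> descending -> ascending -> unsorted.
function nextSortConfig(
  config: SortingConfiguration[],
  propertyName: string
): SortingConfiguration[] {
  const pendingChange = [...config];
  const index = pendingChange.findIndex((c) => c.propertyName === propertyName);
  if (index > -1) {
    const currentSortType = pendingChange[index].sortType;
    pendingChange.splice(index, 1); // remove the existing configuration
    if (currentSortType === SortingType.Descending) {
      // descending -> ascending
      pendingChange.push({ propertyName, sortType: SortingType.Ascending });
    }
    // ascending -> removed entirely, i.e. back to unsorted
  } else {
    // unsorted -> descending
    pendingChange.push({ propertyName, sortType: SortingType.Descending });
  }
  return pendingChange;
}
```

Inside the component, the only difference is that the result is handed to updateSortConfig instead of being returned.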

Step 4: Attach Our Function to the Table Headers

Now that we’ve created our function, we need to attach it to the table headers and make them clickable. Since we’re passing props down to our SortableHeader, don’t forget to export our interfaces and enums to avoid errors. We’ll need to pass two things to our SortableHeader: the first is our newly created sortBy function, and the second is our current sortConfig, which will be used for arrow indicators to show which direction we’re sorting.

TypeScript
<Table>
    <SortableHeader 
          sortBy={sortBy} 
          sortConfig={sortConfig}
          />
      <TableBody>
		………
      </TableBody>
</Table>

TypeScript
interface SortableHeaderProps{
    sortBy: (string: keyof TableData) => void;
    sortConfig: SortingConfiguration[];
}

export const SortableHeader = ({sortBy, sortConfig}:SortableHeaderProps) => {
    return (
        <TableHead>
            <TableRow>
                <TableCell>Name</TableCell>
                <TableCell>Hours</TableCell>
                <TableCell>Date</TableCell>
                <TableCell>Department</TableCell>
            </TableRow>
        </TableHead>
    );
};

To keep the code DRY, I’ll be making a list of the columns that we want and the TableData property each corresponds to. This way, we can map over the list without copy/pasting TableCell four times.

TypeScript
const tableColumn = [
        {label:'Name', property:'name'},
        {label:'Hours', property:'hours'},
        {label:'Date', property:'startDate'},
        {label:'Department', property:'department'}
        ] as TableColumn[];

Now we can attach our function to the TableCell by adding the onClick prop and passing in the corresponding property for the column being mapped. We can also apply our CSS styling to the table cells at this point. Here I use Material-UI’s styling solution, but the key point is setting the CSS property cursor: ‘pointer’.

TypeScript
 <TableHead>
            <TableRow>
                {tableColumn.map((column, index) => {
                    return(
                        <TableCell key={index} 
                            className={headerCell}
                            onClick={()=>sortBy(column.property)}
                        >
                            {column.label}
                        </TableCell>
                    )
                })}
            </TableRow>
        </TableHead>

Lastly, we’ll add an indicator arrow through conditional rendering to show the current sort direction. We have two conditions to check: the first is whether there is an existing sorting configuration, and the second is whether the sortType is ascending or descending. To make this easier, we can write a simple function that gets the current sortType from the sorting configuration and returns an appropriate icon.

TypeScript
 const getSortDirection = (property: keyof TableData) => {
        const config = sortConfig.find((cfg) => cfg.propertyName === property);
        if (config) {
            if (config.sortType === SortingType.Descending) {
                return <ArrowDownwardIcon/>;
            }
            else {
                return <ArrowUpwardIcon/>;
            }
        }
        return null;
    }

Our SortableHeader code should now look like this; apply your own CSS styles in place of headerCell and sortLabel if you aren’t using Material-UI.

TypeScript
export const SortableHeader = ({sortBy, sortConfig}:SortableHeaderProps) => {

    const {headerCell, sortLabel} = useStyles();

    const tableColumn = [
        {label:'Name', property:'name'},
        {label:'Hours', property:'hours'},
        {label:'Date', property:'startDate'},
        {label:'Department', property:'department'}
        ] as TableColumn[];

    const getSortDirection = (property:keyof TableData) => {
        const config = sortConfig.find((cfg) => cfg.propertyName === property);
        return config ?
            config.sortType === SortingType.Descending ?
                <ArrowDownwardIcon/>
                :<ArrowUpwardIcon/>
            :null
    }

    return (
        <TableHead>
            <TableRow>
                {tableColumn.map((column, index) => {
                    return(
                        <TableCell key={index} 
                            className={headerCell}
                            onClick={()=>sortBy(column.property)}
                        >
                            <span className={sortLabel}>
                                {column.label}
                                {getSortDirection(column.property)}
                            </span>
                        </TableCell>
                    )
                })}
            </TableRow>
        </TableHead>
    );
};

If everything went correctly, you should now see table headers with arrows showing the corresponding sort directions on them.

Step 5: Applying the Sort Configuration with LINQ

The last step is to finally apply the sorting in order.

We’ll define our variable, ‘sortedRows’, with a useMemo hook that depends on sortConfig and dataList, so sortedRows is recomputed whenever the sort configuration or the data list changes.

Next, we can turn our standard list of items into a LINQ sequence by wrapping it in a linq.from call. To leverage LINQ’s sorting functionality, our list of values has to be an IOrderedEnumerable object. We can get one by applying a basic orderBy to the sequence with a lambda that always returns 1; this effectively sorts by “nothing,” performing no real comparison. The result is our list in its original order, but typed as an IOrderedEnumerable.

TypeScript
 //Set up default ordering
        let sorted = linq.from(dataList).orderBy(() => 1);

The reason we want this is so we can layer multiple sorts on top using LINQ’s thenBy and thenByDescending functions, both of which require an IOrderedEnumerable. With that established, we can loop through each of the sorting configurations in order and check which sort type to apply.

TypeScript
let sorted = linq.from(dataList).orderBy(() => 1);
        //Loop through the queue
        sortConfig.forEach((sortConfig) => {
            if (sortConfig.sortType === SortingType.Ascending) {
                //Ascending sorting
            } else {
                //Descending sorting
            }
        });

Our last step in sorting is to apply thenBy on the corresponding property, and LINQ will handle the complicated sorting for us, including comparing integers, strings, and dates. You can read more about LINQ’s functions, and you can even pass in your own comparer, which is useful for sorting more complex types; in this guide we’ll stick to basic sorting since we’re handling primitive types. For a descending sort, we use thenByDescending, which makes things easier.

TypeScript
 if (sortConfig.sortType === SortingType.Ascending) {
                sorted = sorted
                    .thenBy((dataRow) => dataRow[sortConfig.propertyName]);
            } else {
                sorted = sorted
                    .thenByDescending((dataRow) => dataRow[sortConfig.propertyName]);
            }

In the code above, the lambda’s argument is an element of the sorted sequence. I named it dataRow because each item in sorted comes from our dataList, so we can think of it as one row. The lambda then selects the corresponding key via the propertyName of our sortConfig, which, as you recall, is one of “name”, “hours”, “startDate”, or “department”. In short, this is how we target that specific column.
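
As a standalone illustration of this key-based selection (with a hypothetical row shape and values, not from the guide):

```typescript
// Selecting a column value dynamically via a keyof-typed property name.
type Row = { name: string; hours: number };

const dataRow: Row = { name: 'Ann', hours: 8 };
const propertyName: keyof Row = 'hours';

// Indexing with a keyof-typed key is type-safe; the result type is the
// union of the row's value types (string | number here).
const cellValue = dataRow[propertyName];
```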

Now, if you have null data, JavaScript will run into null-comparison issues, which will mess up the sorting. A quick workaround is to sort all null data to the bottom by prefacing each thenBy with another thenBy that checks whether the property is null.

In the ascending branch, we return 1 when the data is null, so a null row is treated as ‘greater than’ a row that has data and sorts to the bottom. In the descending branch, returning -1 for null achieves the same thing, since thenByDescending reverses the order.

The last thing we need to do is turn our LINQ sequence back into an array by calling .toArray() and returning that value, so that sortedRows holds a sorted copy of dataList. Putting it all together, we have this:

TypeScript
const sortedRows = useMemo(() => {
        //Set up default ordering
        let sorted = linq.from(dataList).orderBy(() => 1);
        //Loop through the queue
        sortConfig.forEach((sortConfig) => {
            if (sortConfig.sortType === SortingType.Ascending) {
                sorted = sorted
                    //Send null values to the bottom, then sort by the property
                    .thenBy((dataRow) => (dataRow[sortConfig.propertyName] === null ? 1 : -1))
                    .thenBy((dataRow) => dataRow[sortConfig.propertyName]);
            } else {
                sorted = sorted
                    //Descending order on this flag also sends null values to the bottom
                    .thenByDescending((dataRow) =>
                    dataRow[sortConfig.propertyName] === null ? -1 : 1
                    )
                    .thenByDescending((dataRow) => dataRow[sortConfig.propertyName]);
            }
        });
        return sorted.toArray();
    }, [sortConfig, dataList]);

You should now have a functioning table with multiple sortable headers! A key thing to note is that you can pass a custom comparer to the thenBy function instead of just selecting by key. If for some reason things are not sorting as they should, that should be the first point of debugging.

Bonus: Sorting without LINQ

If you’ve made it this far, or just skipped ahead to this section, we’ll be going over the basics of sorting rows based on multiple headers by just using basic comparators.

Comparators are just functions that take two objects and return a number describing how they relate: zero means they’re equal, a positive number means the first is greater, and a negative number means it’s less. Now let’s work on converting our sorting function to one that doesn’t use LINQ at all.
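
A minimal sketch of such a comparator (the names here are my own, not from the guide):

```typescript
// A comparator: negative = a sorts before b, zero = equal, positive = a sorts after b.
const compareNumbers = (a: number, b: number): number => a - b;

const unsorted = [3, 1, 2];
// Array.prototype.sort accepts exactly this kind of function.
const ascending = [...unsorted].sort(compareNumbers);
// Negating the result reverses the order.
const descending = [...unsorted].sort((a, b) => -compareNumbers(a, b));
```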

Sorting multiple columns may seem complex, but in practice it isn’t that difficult. Given a list of sorting configurations in the desired order, whenever one comparator reports equality, you simply move on to the next comparator in order. Say we have the rows {a, 1}, {a, 2}, and {b, 1}: sorting by the first column leaves the two a’s tied, so we consult the second column’s comparator to determine their order.
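
Sketching that example directly (hypothetical row shape and field names):

```typescript
type Pair = { col1: string; col2: number };

const rows: Pair[] = [
    { col1: 'b', col2: 1 },
    { col1: 'a', col2: 2 },
    { col1: 'a', col2: 1 },
];

// Fall through to the second comparator only when the first reports a tie.
rows.sort((a, b) => {
    const byFirst = a.col1.localeCompare(b.col1);
    if (byFirst !== 0) return byFirst;
    return a.col2 - b.col2;
});
// rows is now [{a,1}, {a,2}, {b,1}]
```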

In this guide, we’ve already implemented a function that orders our sort configurations for us. All we need to do is add a comparator to our SortingConfiguration so we can keep track of which compare function to use.

TypeScript
export interface SortingConfiguration{
    propertyName: keyof TableData,
    sortType: SortingType,
    compareFunction: TableDataComparable
}

export type TableDataComparable = ((a: TableData, b:TableData) => number);

Next, we add that field to our sortBy function:

TypeScript
const sortBy = useCallback(
        (propertyName: keyof TableData, compareFunction: TableDataComparable) => {
            let pendingChange = [...sortConfig];
            const index = pendingChange.findIndex((config) => config.propertyName === propertyName)
            if(index > -1){
                //Save the sortType
                const currentSortType = pendingChange[index].sortType;
                //Remove existing config
                pendingChange.splice(index, 1);
                //check if the sort type we saved is descending
                if (currentSortType === SortingType.Descending) {
                    pendingChange = [
                        ...pendingChange,
                        { propertyName: propertyName, sortType: SortingType.Ascending , compareFunction: compareFunction},
                    ];
                }
            } else {
                pendingChange = [
                    ...pendingChange,
                    { propertyName: propertyName, sortType: SortingType.Descending, compareFunction: compareFunction },
                ];
            }
            updateSortConfig([...pendingChange]);
        },
        [sortConfig]
    )

Next, we update our interfaces and define comparators for each of our table columns.

TypeScript
interface SortableHeaderProps {
    sortBy: (string: keyof TableData, compareFunction: TableDataComparable) => void;
    sortConfig: SortingConfiguration[];
}
...
const CompareByEquality = (column: keyof TableData) => (a: TableData, b: TableData) => {
        if(a[column] === b[column]){
            return 0
        } else{
            if (a[column] > b[column]){
                return 1;
            }
            return -1;
        }
    }

    const tableColumn = [
        {   label: 'Name', 
            property: 'name', 
            compareFunction: 
            (a: TableData, b: TableData) => {
                 return a['name'].localeCompare(b['name'] as string) 
            } 
        },
        {
            label: 'Hours', 
            property: 'hours',
            compareFunction: CompareByEquality('hours')
        },
        {   label: 'Date', 
            property: 'startDate', 
            compareFunction: CompareByEquality('startDate') 
        },
        {   label: 'Department',
            property: 'department', 
            compareFunction: CompareByEquality('department') }
    ] as TableColumn[];

Here, I just use equality and relational comparison for returning the values, but for more complex objects, you can define your own comparator function. For the name column, I used string’s localeCompare as an example of an inline lambda comparator.

After that, we just add the compareFunction into our onClick, so that our sortBy function has all the correct parameters.

TypeScript
<TableCell key={index}
    className={headerCell}
    onClick={() => sortBy(column.property, column.compareFunction)}
>

Lastly, we need to change our definition for sortedRows:

TypeScript
const sortedRows = useMemo(() => {
        if(sortConfig.length === 0){
            return [...dataList];
        }
        let sorted = [...dataList].sort(
            (a: TableData, b:TableData) =>{
                 for(const config of sortConfig){
                    const result = (config.compareFunction(a,b))
                    if(result !== 0){
                        if(config.sortType === SortingType.Ascending){
                            return result;
                        }
                        else{
                            return -result;
                        }
                    }
                }
                return 0;
            }
        )
        return(sorted)
    }, [sortConfig, dataList]);

First, we check whether there’s a sort configuration; if there isn’t, we return to the unsorted state by returning a copy of the data list.

If there is a sort configuration, we make a copy of dataList using the spread operator and sort it with a lambda that loops over our sortConfigs, testing each pair of TableData rows a and b against the config’s compareFunction. On a tie, we apply the subsequent sort config until we get a result that isn’t 0. We then check whether the sort is ascending or descending, flipping the result to its opposite value with a negative sign for descending. If we get through every sort configuration with nothing but ties, we know both rows are equal.
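
To see the cascade in isolation, here’s a standalone run with hypothetical sample data; the enum and shapes mirror the guide’s, but the rows and configs are made up for illustration.

```typescript
enum SortingType { Ascending, Descending }

type TableData = { name: string; hours: number };

interface SortingConfiguration {
    sortType: SortingType;
    compareFunction: (a: TableData, b: TableData) => number;
}

const dataList: TableData[] = [
    { name: 'Bob', hours: 3 },
    { name: 'Ann', hours: 5 },
    { name: 'Ann', hours: 3 },
];

// Name ascending first, then hours descending to break ties.
const sortConfig: SortingConfiguration[] = [
    { sortType: SortingType.Ascending, compareFunction: (a, b) => a.name.localeCompare(b.name) },
    { sortType: SortingType.Descending, compareFunction: (a, b) => a.hours - b.hours },
];

const sorted = [...dataList].sort((a, b) => {
    for (const config of sortConfig) {
        const result = config.compareFunction(a, b);
        if (result !== 0) {
            // Flip the sign for descending sorts.
            return config.sortType === SortingType.Ascending ? result : -result;
        }
    }
    return 0; // every comparator tied: the rows are equal
});
```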

That about does it for sorting multiple columns in React.

Learn more about DMC’s Application Development expertise.

The post Sorting Multiple Columns in a Table with React appeared first on DMC, Inc..
