Industry Article

Understanding the Heterogeneous Graphics Pipeline of i.MX RT1170 MCUs

In this article, learn about the heterogeneous graphics pipeline of the i.MX RT1170 MCU and its three main graphics acceleration engines.

Modern consumer and professional-grade embedded devices are becoming increasingly capable of offering a growing range of useful features. This feature-richness, however, leaves designers questioning how to make all of the functions accessible to the users without overwhelming them with a complicated interface. 

Smartphone-like GUIs can be an effective alternative to traditional physical buttons, offering several improvements over classic physical controls. NXP makes developing feature-rich graphical user interfaces more accessible with various integrated display controllers and graphics accelerators, such as those included in the i.MX RT1170 crossover MCU.

 

Figure 1. The i.MX RT1170 MCU

 

The Three Display Engines of the i.MX RT1170 MCU

While most NXP microcontrollers can support the peripherals a GUI requires, some devices (such as the i.MX RT1170 MCU) come with built-in display interfaces and graphics accelerators designed to support rich GUI applications. More concretely, the i.MX RT1170 includes a 2D vector graphics GPU, a PxP graphics accelerator, and an LCDIFv2 display controller.

The dedicated 2D GPU with vector graphics acceleration helps optimize the power consumption and performance of embedded devices by supporting the CPU in rendering scalable vector graphics and in composing and manipulating bitmaps. The 2D GPU can also transform images (scaling, rotation by an arbitrary angle, reflection, shearing) and color convert them on the fly.

The Pixel Processing Pipeline (PxP) combines various image transformation operations such as scaling, rotation, and color-space conversion into a single efficient processing engine.

The LCDIFv2 display controller enables embedded designers to create and work with up to eight display layers, offering on-the-fly blending capabilities.

 

The 2D Vector Graphics GPU

Compared to pixel graphics, vector graphics don’t rely on individual pixels to form a complete image. The vector graphics model uses commands (such as move, line to, curve to) and coordinates to describe shapes that are then rasterized into a final image.

Each pixel in a pixel graphic, such as a photograph stored as a JPEG file, has a fixed size, which means that transforming a pixel graphic typically results in a loss of quality. Vector graphics, on the other hand, are more flexible when it comes to transformations. It’s easy to transform the points of a primitive shape, for example, and then redraw the image without a loss in quality, as vector images are independent of the final image’s resolution.

Therefore, using pixel graphics makes sense when capturing images with lots of detail, like photographs. In contrast, vector graphics are best used when working with simple shapes, such as calligraphy, company logos, and graphical user interfaces.

Rendering vector images typically requires a rendering target, path data, fill information, transformation data, color information, and blend rules. The rendering target is the buffer that holds the rendered image once it’s finished. The path data is the most crucial part of a vector image, as it contains the coordinates and path segments that describe the geometry of the elements present in the vector image. It consists of pairs, each made up of an operation code and the arguments that accompany that operation:

 

Figure 2. Rendering vector images typically requires a rendering target, path data, fill information, transformation data, color information, and blend rules.
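The opcode/argument structure of path data can be sketched in software. The following Python model is purely illustrative; the opcode names are hypothetical placeholders, not the actual constants defined by the VGLite API or the GPU hardware.

```python
# Illustrative model of vector path data: a list of (opcode, arguments)
# pairs. The opcode names below are hypothetical, chosen only to mirror
# the move/line-to/close commands described in the article.
MOVE, LINE, CLOSE = "move", "line_to", "close"

# A unit square described as a path: move to the origin, draw three
# line segments, then close the shape back to the starting point.
square_path = [
    (MOVE, (0.0, 0.0)),
    (LINE, (1.0, 0.0)),
    (LINE, (1.0, 1.0)),
    (LINE, (0.0, 1.0)),
    (CLOSE, ()),
]

def path_points(path):
    """Collect the coordinate pairs a rasterizer would walk through."""
    return [args for op, args in path if op != CLOSE]
```

A rasterizer consumes such a list sequentially, tracking the current pen position and emitting geometry for each segment.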

 

The fill rule describes what rule to apply when determining which part of a closed shape to fill in with a solid color. This property can take one of two possible values: nonzero and even-odd. With the nonzero rule selected, the fill algorithm casts a ray from the point in question to infinity in an arbitrary direction. It then counts how often that ray crosses a line in the vector graphic. If the ray hits a line going from left to right (relative to the ray), it adds one to the final sum. If the line goes from right to left, the algorithm subtracts one. If the final number is zero, the point lies outside the shape.

In contrast, the even-odd algorithm counts each line hit without regard to the line’s direction. If the resulting sum is even, the point in question is outside of the shape. Otherwise, it’s on the inside.
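Both rules can be demonstrated with a small software sketch. The code below casts a horizontal ray to the right (the article notes the direction is arbitrary) and counts signed edge crossings; this is an illustrative model, not the GPU's actual fill implementation.

```python
def winding_contributions(point, polygon):
    """For a horizontal ray cast rightward from `point`, yield +1 for each
    polygon edge that crosses the ray in one direction and -1 for each edge
    that crosses it in the other."""
    px, py = point
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if y1 <= py < y2:    # edge crosses the ray's height going upward
            if x1 + (py - y1) * (x2 - x1) / (y2 - y1) > px:
                yield +1
        elif y2 <= py < y1:  # edge crosses the ray's height going downward
            if x1 + (py - y1) * (x2 - x1) / (y2 - y1) > px:
                yield -1

def inside_nonzero(point, polygon):
    # Nonzero rule: sum the signed crossings; nonzero total means inside.
    return sum(winding_contributions(point, polygon)) != 0

def inside_evenodd(point, polygon):
    # Even-odd rule: count crossings regardless of direction; odd means inside.
    return sum(1 for _ in winding_contributions(point, polygon)) % 2 == 1
```

For a simple convex shape the two rules agree; they diverge on self-overlapping paths, where a point wrapped twice has a winding number of 2 (inside under nonzero, outside under even-odd).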

Next is the transformation, which is expressed by manipulating matrices that represent operations such as translation, rotation, and scaling. Affine transformations are a powerful feature of the built-in 2D vector GPU of the i.MX RT1170 MCU.
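The matrix manipulation behind affine transformations can be illustrated with 3×3 homogeneous matrices; the sketch below is a generic software model of the math, not the GPU's register-level interface.

```python
import math

def translate(tx, ty):
    # Translation by (tx, ty) in homogeneous coordinates.
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def scale(sx, sy):
    # Scaling by sx horizontally and sy vertically.
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def rotate(degrees):
    # Counter-clockwise rotation about the origin.
    c, s = math.cos(math.radians(degrees)), math.sin(math.radians(degrees))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    # Compose two transformations: (a @ b) applies b first, then a.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, point):
    # Transform a 2D point by the matrix m.
    x, y = point
    v = (x, y, 1)
    return tuple(sum(m[i][k] * v[k] for k in range(3)) for i in range(2))
```

Composing matrices up front means each path vertex needs only one matrix-vector multiply, regardless of how many operations the transformation combines.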

When drawing the resulting shape, the programmer can assign color information to each path:

 

Figure 3. Transformation is done by manipulating matrices to represent various operations. When drawing shapes, the programmer is able to assign color information to each path.

 

The blending rule, which states how to blend a path with the existing buffer content, is the last piece of information that makes up a final vector image. The alpha value from a path’s color parameter and the blend function together define the effect that the alpha will have on the vector path itself and on the destination buffer.
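As one example of such a rule, the classic source-over blend weights the path's color by its alpha and composites it onto the destination. This per-pixel sketch is illustrative only; the actual blend modes available are those defined by the VGLite API, discussed in AN13075.

```python
def src_over(src_rgb, src_alpha, dst_rgb):
    """Source-over blend for one pixel: out = src * a + dst * (1 - a).
    Color channels are 0-255 integers; alpha is a float in 0.0-1.0."""
    return tuple(round(s * src_alpha + d * (1 - src_alpha))
                 for s, d in zip(src_rgb, dst_rgb))
```

With alpha at 1.0 the path fully covers the destination; at 0.0 the destination shows through untouched, with intermediate values mixing the two.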

The VGLite API — one of the options for accessing the 2D vector engine of the i.MX RT1170 — implements various blend rules. Apart from the vector pipeline, the VGLite API also provides a pipeline for raster images; the NXP application note AN13075 discusses both in more detail.

 

The PxP 2D Accelerator

The Pixel Processing Pipeline (PxP) is a powerful 2D accelerator that can process graphics buffers or composite video before sending it out to a display. It integrates several commonly used 2D graphics processing operations such as blitting, alpha blending, color-space conversion, fixed angle rotation, and scaling.

One possible use-case of this engine is to blend two buffers to form a single output image sent to an LCD. For example, one of the buffers could contain a background image, while the other holds UI elements such as text labels or buttons. The layers can have different sizes, and the PxP engine also allows for fast and easy scaling. The AN12110 application note discusses a more in-depth example application in which the PxP scales the internal buffer to fit the LCD screen of that project.
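The scaling operation the PxP performs in hardware can be modeled in software for illustration. The nearest-neighbor version below is only a stand-in to show the index mapping, not the PxP's actual filtering algorithm or register interface.

```python
def scale_nearest(src, src_w, src_h, dst_w, dst_h):
    """Nearest-neighbor scaling of a row-major pixel buffer: each
    destination pixel samples the proportionally nearest source pixel."""
    out = []
    for y in range(dst_h):
        sy = y * src_h // dst_h          # source row for this output row
        for x in range(dst_w):
            sx = x * src_w // dst_w      # source column for this output column
            out.append(src[sy * src_w + sx])
    return out
```

Running this on the CPU for every frame is exactly the kind of per-pixel work the dedicated PxP engine takes over, freeing the core for application logic.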

Offloading common 2D operations to a dedicated hardware controller, such as the PxP, offers a range of benefits compared to implementing the functions on the main CPU of an embedded microcontroller. Software developers don’t have to reinvent the wheel, as the most common functions are readily available. The main CPU also doesn’t have to perform complex 2D manipulations many times a second, meaning it can focus on other calculations instead, which leads to a more fluid user experience and potentially better energy efficiency.

 

The LCDIFv2 Display Controller

The second version of the liquid crystal display interface (LCDIF) also aids the main CPU by fetching previously created display data from a frame buffer and displaying it on a TFT LCD panel. The frame buffer is the region of memory where the image data to be displayed is stored. It’s possible to use two buffers interchangeably, allowing one buffer to be updated while the controller displays the other. Besides LCDIFv2, the i.MX RT1170 MCU incorporates an additional eLCDIF display controller.
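The two-buffer scheme can be sketched as a simple ping-pong structure. This is a generic software illustration of the pattern; the class and method names are hypothetical, and on real hardware the swap is typically synchronized to the display's vertical blanking interval.

```python
class DoubleBuffer:
    """Two frame buffers used in ping-pong fashion: the application draws
    into the back buffer while the display controller scans out the front
    buffer; swap() exchanges the roles once a frame is complete."""

    def __init__(self, size_bytes):
        self._buffers = [bytearray(size_bytes), bytearray(size_bytes)]
        self._front = 0

    @property
    def front(self):
        # Buffer the display controller is currently reading.
        return self._buffers[self._front]

    @property
    def back(self):
        # Buffer the CPU/GPU renders the next frame into.
        return self._buffers[1 - self._front]

    def swap(self):
        # Exchange front and back (done on vertical sync in practice).
        self._front = 1 - self._front
```

Because the controller never reads the buffer being written, the panel never shows a half-drawn frame.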

The LCDIFv2 controller within the i.MX supports up to eight layers for programmers to blend and configure at runtime. All of this happens without the involvement of other accelerator modules. Each layer can utilize a different color format, canvas size, position, and fetch contents from buffers at any memory location. 

The LCDIFv2 controller also supports the Index8BPP format, which allows programmers to define a 32-bit-per-pixel image using a color lookup table and an accompanying index array. This method makes it possible to display ARGB8888 colors without the memory cost of a full 32-bit frame buffer. The AN13075 application note and the official SDK give examples of how to accomplish this.
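The indexed-color idea can be shown with a short sketch: the frame buffer stores one byte per pixel, and each byte selects a full ARGB8888 entry from a 256-entry lookup table. This is an illustrative software model of the expansion the display hardware performs.

```python
def expand_index8(index_buffer, clut):
    """Expand an 8-bit-per-pixel index buffer into 32-bit ARGB8888 pixels
    via a color lookup table (CLUT) of up to 256 entries. The frame
    buffer holds one byte per pixel instead of four, roughly quartering
    frame-buffer memory (e.g., a 480x272 frame drops from 522,240 bytes
    at 32 bpp to 130,560 bytes plus a 1,024-byte CLUT)."""
    return [clut[i] for i in index_buffer]
```

The trade-off is that a frame may use at most 256 distinct colors, which is usually ample for UI elements such as icons, text, and flat backgrounds.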

 

The i.MX RT1170 Crossover MCU and its Supported Devices

The heterogeneous graphics pipeline of the i.MX RT1170 consists of three engines, each with benefits that help simplify a project and, when used in unison, improve its performance while saving memory. Several NXP devices already support some of the engines discussed in this article: the i.MX RT1170 supports all three graphics accelerators. The Cortex-M7-based i.MX RT1050 and the i.MX RT106x devices support the PxP and an LCD controller. The i.MX RT500 is based on a Cortex-M33 core and incorporates a 2D GPU.

Beyond hardware, NXP enables the creation of small, fast, full-featured devices by supporting various APIs and helpful tools for developing embedded GUIs. NXP’s website provides an overview of the supported APIs, tools, and devices. It also offers training materials such as application notes, videos, SDK examples, and on-demand webinars.

Industry Articles are a form of content that allows industry partners to share useful news, messages, and technology with All About Circuits readers in a way editorial content is not well suited to. All Industry Articles are subject to strict editorial guidelines with the intention of offering readers useful news, technical expertise, or stories. The viewpoints and opinions expressed in Industry Articles are those of the partner and not necessarily those of All About Circuits or its writers.